text
stringlengths 20
1.01M
| url
stringlengths 14
1.25k
| dump
stringlengths 9
15
⌀ | lang
stringclasses 4
values | source
stringclasses 4
values |
---|---|---|---|---|
tic(1m) tic(1m)
tic - the terminfo entry-description compiler
tic [-01CDGIKLNTUVacfgrstx] [-e names] [-o dir] [-R sub- set] [-v[n]] [-w[n]] file
The tic command translates a terminfo file from source format into compiled format. The compiled format is nec- essary/termin container if it does not exist. For a direc- tory, this would be the "terminfo" leaf, versus a "ter- minfo.db" file. The results are normally placed in the system terminfo database /usr/share/terminfo. The compiled terminal description can be placed in a different terminfo data- base. There are two ways to achieve this: o First, you may override the system default either by using the -o option, or by setting the variable TER- MINFO in your shell environment to a valid database location. o Secondly, if tic cannot write in /usr/share/terminfo or the location specified using your TERMINFO vari- able, it looks for the directory $HOME/.terminfo (or hashed database $HOME/.terminfo.db); if that location exists, the entry is placed there. Libraries that read terminfo entries are expected to check in succession o a location specified with the TERMINFO environment variable, o $HOME/.terminfo, o directories listed in the TERMINFO_DIRS environment variable, o a compiled-in list of directories (/usr/local/ncurses/share/terminfo:/usr/share/ter- minfo), and o the system terminfo database (/usr/share/terminfo).
-0 restricts the output to a single line -1 restricts the output to a single column -a tells tic to retain commented-out capabilities rather than discarding them. Capabilities are com- mented by prefixing them with a period. This sets the -x option, because it treats the commented ter- minfo. o capabilities with more than one delay or with delays before the end of the string will not convert completely. -c tells tic, as well as buggy checking for the buffer length (and a doc- umented limit in terminfo), these entries may cause core dumps with other implementations. tic checks string capabilities to ensure that those with parameters will be valid expressions. It does this check only for the predefined string capabili- ties; summarized con- taining the list if it contains a '/'. (Note: depending on how tic was compiled, this option may require -I or -C.) -f Display complex terminfo strings which contain if/then/else/endif expressions indented for read- ability. . -Rsubset Restrict output to a given subset. This option is for use with archaic versions of terminfo like those on SVr1, Ultrix, or HP/UX that do not support the full set of SVR4/XSI Curses terminfo; and out- right capabilities) even when doing translation to termcap format. This may be needed if you are pre- paring a termcap file for a termcap library (such as GNU termcap through version 1.3 or BSD termcap through 4.3BSD) that does not handle multiple tc capabilities per entry. -s Summarize the compile by showing the database loca- tion into which entries are written, and the number of entries which are compiled. tic to not post-process the data after parsing the source file. Normally, it infers data which is commonly missing in older terminfo data, or in term- caps. -V reports the version of ncurses which was used in this program, and exits. -vn specifies that (verbose) output be written to stan- dardn specifies the width of the output. The parameter is optional. If it is omitted, it defaults to 60. -x Treat unknown capabilities as user-defined. That is, if you supply format [see terminfo(5)]. Each descrip- tion in the file describes the capabilities of a particular terminal. If file is "-", then the data is read from the standard input. The file parameter may also be the path of a character-device.
All but one of the capabilities recognized by tic are doc- umented in terminfo(5). The exception is the use capabil- ity..
There is some evidence that historic tic implementations treated description fields with no whitespace in them as additional aliases or short names. This tic does not do that, but it does warn when description fields may be treated that way and check them for dangerous characters.
Unlike the SVr4 tic command, this implementation can actu- ally compile termcap sources. In fact, entries in ter- min anywhere in the source file, or any- where in the file tree rooted at TERMINFO (if TERMINFO is defined), or in the user's $HOME/.terminfo database explic- itly set to it.
/usr/share/terminfo/?/* Compiled terminal description database.
infocmp(1m), captoinfo(1m), infotocap(1m), toe(1m), curses(3x), term(5). terminfo(5). This describes ncurses version 5.9 (patch 20141220).
Eric S. Raymond <[email protected]> and Thomas E. Dickey <[email protected]> tic(1m) | http://www.invisible-island.net/ncurses/man/tic.1m.html | CC-MAIN-2015-06 | en | refinedweb |
>>.
Finally (Score:2, Insightful)
It's about time PHP has native support for unicode.: (Score:2, Insightful): (Score:3, Interesting)
Re: (Score:3, Funny)
But you can use Boost [boost.com] with any language, I don't see your point.
Re: (Score:3, Informative)
Re:Finally (Score:4, Informative)
PHP is much, much closer to C than C++ with truckloads of STL piled on top. Ask a C programmer to comprehend that mess and you'll likely have a suicide on your hands. It is very un-C-like. The point is that the PHP syntax for arrays is very nearly identical in behavior and syntax to C, just with lots of extra functionality (variable length associative array). I never said that C++ couldn't do those things, but as far as I've seen, when you do it in C++, you're generally way off the deep end as far as being syntactically familiar to C programmers.
I guess what it comes down to is this: if you think templates are elegant, then we will never agree about what makes a good language design. From my perspective, templates are what happens when somebody forgets that we have a perfectly good C preprocessor and decides to reinvent the wheel with a clumsy syntax that doesn't provide anything more than what C preprocessing could already provide, wedging the concept into the language itself for no apparent reason. It is anathema. It is absolutely the antithesis of good language design.
As for OO in PHP, I don't see why you think dynamic typing decreases the value of object-oriented programming. If you really are mostly using the same code with different underlying types, then there's little point in doing OO, but in my experience, that's the exception rather than the rule. Most of the situations where I've used OO with polymorphism, I've had polymorphism, but the underlying implementation has differed substantially, and the only thing similar was the method name (and the general concept for what the function does).
Also, it is nice to use classes even when you don't need polymorphism. This reduces pollution of the global function namespace. It also makes it easy to create complex data structures that make life easier. (PHP doesn't have the notion of a struct, so you have to either use a class or an associative array.)
Finally PHP is still very much a typed language. It's not like there is no notion of types and everything is polymorphic with everything. The type of a variable is determined when the variable is assigned, and some types can be coerced into other types in certain use cases, but it isn't universal. I can't do if ($arrayA < $scalarB), for example. PHP even has the notion of casting to force type conversion just like you do in C. For example:
function myfunc($mynumber) {
...
$mynumber = (int)$mynumber;
}
Dynamic typing doesn't mean the types aren't there. If you call a method on an object that doesn't exist on that object, it is still an error. And so on. Dynamic typing just makes it a little easier to shoot yourself in the foot by not throwing up an error when you make the assignment or function call in the first place.
:-)
Re: (Score:3, Informative)
There's nothing preventing a native foreach notation built into the language instead of glued on. They just didn't do it that way, and they should have.
Sure. It's pretty easy. You just define two macros (e.g. BASE_TYPE and ARRAY_TYPE) and then #include a header.
#define BASE_TYPE uint64_t *
#define ARRAY_TYPE uint64_t_pointer
#include <CustomArray.h>
And in CustomArray.h>:
#define MAX_SIZE 32
class ARRAY_TYPE
{
Re: (Score:3, Informative)
No, there's nothing preventing you from including that header file multiple times for different types. That's the beauty of token gluing. It concatenates the base type as part of the name of the derived array type, so you can create arbitrary numbers of them for arbitrary types. And unlike the template class, whenever you use the resulting type, it just looks like an ordinary C++ class instance with no need for template parameters. Thus, when you actually use the class, you just use "Array_int *foo" or
Re: (Score:2)
Funny, very very dry, but damn funny.
Re: : (Score:2, Informative)
Interesting thought. Does anyone use PHP for anything other than its ubiquity?
Re:Finally (Score:5, Insightful)
Ubiquity is a pretty compelling feature.
I mean, BeOS is pretty bitchin', but I'm not spending any of my time on developing applications for it.
Re: (Score:2, Funny)
Re: (Score:2)
$output = fopen('outputfile.txt', 'wt');
// writes out data in UTF-8 encoding
fwrite($fp, $uni);
..... not where you expect it it dossent....
Re:Finally (Score:5, Funny)
Too bad Slashdot still wonâ(TM)t.
I mean, won't.
Re:Finally (Score:5, Funny)
That's so cliché.
So... (Score:5, Informative)
without wanting to be overly sarcastic..
What features are they gonne break this time?
Re:So... (Score:5, Insightful).
Re: (Score:2)
Re: (Score:2)
mod +1; I also hate the Needle/Haystack Haystack/Needle operator ordering..
of course upgrading your code to make it compliant will be a PITA...
Re:So... (Score:5, Informative)
At my work we host and have build and maintain a little over 200 php websites. We host them all ourselves. ( the CMS that we use is build in PHP )
We earn money from both the hosting and the developing.
Many of our customers don't want to pay for the porting of their websites to PHP5, let alone PHP6. usually this requires upgrading the CMS as well, making modifications to custom extentions written by outsourcing partys, etc. All in all quite expensive for the site owner.
"Threatening" them with PHP4 server shutdowns only makes them go away to other hosting providers that will over PHP4 to them.
So we ended up virtualising all the PHP4 sites together with a good backup system and making our customers understand that we provide no warrenty anymore. We will help them when things blow up on an paid per hour basis.
Another problem is that we cannot reuse a lot of our code anymore now. Many of our new plugins require php5 so we have to modify them to make them php4 compatible again.
when php6 comes out we will have to support three different php versions... the horrors of that vision already scare me today.. t
Re: (Score:2)
Is there a business in supplying coal for instance? Some people still heat their houses with it, but does that mean YOU as a business man have to run a business to supply them?
No, but it does mean if YOU choose not to supply them with coal, somebody else will.
The parent isn't complaining because he doesn't want to stay up to date. He's complaining that they have a lot of customers who don't want to stay up to date, and there's nothing he can do about that except stop taking their money.
Ask yourself, how much time does it cost you to keep the people happy who want PHP4 and how much that same time could have earned you in business from PHP5 customers.
Unfortunately, turning away PHP4 customers doesn't mean more PHP5 customers will suddenly sign up. They are currently supporting both, and while of course there is a cost associated with continue
Re: .
Re: (Score:2)
If you want something that maintains compatibility, go with java.
Depending on your point of view, that could be a negative or a positive.
Re: (Score:2)
If you want something that maintains compatibility, go with java.
Depending on your point of view, that could be a negative or a positive.
This was the first PHP related story on Slashdot that didn't have a few dozen replies that mention Java until you went ahead and ruined it.
I'd choose Java over PHP any day though.
Re: (Score:3, Informative)
At least I ruined it by being informative.
:P
Java applets coded back in 1996 still run [slashdot.org] in the newest JRE. Pretty impressive for the consumer/user, though it must be a nightmare to maintain.
I'm not aware of any huge changes to Apache Tomcat in the past few years - certainly nothing that required re-coding an entire website from scratch.
Re: (Score:3, Funny)
Re: (Score:3, Informative)
I wish i was trolling, but trust me, i work for a company that hosts sites, and there is still plenty of php4 around. Most people don't mind the upgrading and staying up to date part so much. But they usually don't like the price that comes attached with it.
Re: (Score:2)
Still recalling a horror of writing in Perl in 90s, I would say PHP is good, more or less.
Re: (Score:2)
It won't happen to the base functions simply for backwards-compatibility, but given that namespace support is being added into PHP6 (I think it's also in 5.3; I have 5.2.6 on my machine so I don't know for sure) they could re-map all of those old functions in the global namespace into new logically-named and consistent functions. Array and string manipulation functions come to mind as the worst offenders, but there's plenty of other bad stuff as well. I think a lot of it would do well to be remade into bui
Re: (Score:3, Informative)
You must be confused, are you thinking of Perl?
PHP has been VERY careful about breaking features, and have essentially openly mocked the people who suggest they "fix" PHP's functions by randomly swapping argument order on functions that have been working just fine for years.
The only thing I can think of they've broken is MAGIC_QUOTES and registered globals. Both are Very Bad Things that it was important they do away with. Any sane PHP code will react to their removal by simply removing a few chunks of goo
Re: (Score:2)
But if Python does it, its okay?
No. From whose ass did you pull that strawman from?.
Re: (Score:3, Funny)
Re: (Score:3, Informative)
Time to pay the piper... (Score:4, Insightful)
Re:Time to pay the piper... (Score:5, Funny)
This is why I never write legacy code, only progressive forward thinking code!
People who write legacy code are just not thinking of the future.
Re: (Score:2)
You will love it when they add functional approach and constructs. Declarative style in php for more points!
Re:Time to pay the piper... (Score:4, Insightful)
You're not far off track. A lot of PHP's problems stems from the fact that the language itself was more or less kind of thrown together rather than planned out (from the early simple Personal Home Page scripting stuff to PHP3 that just kept extending things and adding more functionality bolted on). They only just began to start to stabalize some of that in PHP4 and really only started to fix a lot of issues in PHP5 and now PHP6. They are making good strides but there's a lot of work to do (and a lot of backwards compatible considerations, I'm sure).
The good news for PHP developers with legacy code is that they've had a long time to fix things. Stuff that is going away has been deprecated for many versions now so none of this should be a surprise. The people that will get hit are the site administrators using PHP based apps that haven't been updated in forever.
Re:Time to pay the piper... (Score:5, Funny)
I think PHP developers with legacy code are going to be paying the price for several versions to come.
I prefer to call it "job security".
question: (Score:5, Insightful)
Re:question: (Score:4, Informative)
Yes
:(
Re:question: (Score:5, Funny)
Re: (Score:3, Interesting)
Re:question: (Score:5, Funny)
Can you blame them for trying to escape?.
A likely story (Score:5, Insightful)
Given that PHP 6 was "rumored" to be out at least a year ago. I can't decide if the title "An Early Look" is meant to be ironic, or is just a sad indicator of progress.
Despite that, I would say that three things have recently happened demonstrating the improvement in quality of PHP:
I would say that (1) and (2) easily are more important for the language than is (3). PHP 5.3's improvements should be a huge change: Namespaces (I know there's a huge amount of hate for this implementation: get over it. It's going to be very useful), Closures / Lambda Functions, and Late Static Bindings in particular make it hard to wait so long for PHP 5.3.
So, stop talking about PHP 6! Lets get PHP 5.3 out.
Hope it handles Search/Replace better (Score:2)
I hope it handles search/replace better. I tried doing a search/replace on a 88MB large string and the stupid script crashed!
;-)
Seriously, though, if anyone knows of any good tactics for large-string searching/replacing, I'd be happy to hear them. My current attempt is multiple page loads in an iFrame while the user is presented with a "working on it..." message.
Re:Hope it handles Search/Replace better (Score:4, Informative)
Re:Hope it handles Search/Replace better (Score:4, Insightful)
Loading 88MB file into memory is not going to work by default anyhow, unless you set the memory limit in PHP from the default you will get out of memory errors every time. I think even a find/replace in a Windows app like Notepad or Notepad++ will "work" but it will definitely be slow. When I used to search large logs I would use some sort of file splitter and search each file itself.
And here the rest of us are grepping and sedding multi-gigabyte files without thinking twice. Seriously, what's your idea of a large file?
Re: (Score:3, Informative)
If you want to process large files (or any large chunks of data such as blob columns) in PHP without loading the entire file into memory, look into streams.
Re: (Score:2)
Seth
Re: (Score:2)
Assuming this 88MB string is in a file, you should never load the whole file. Open the file and read it chunk by chunk. As you read it chunk by chunk, do a search/replace on each chunk and write the replaced chunk to another file. You need to remember to catch the matches that span more than once chunk though.
The question should really be why you are dealing with an 88MB file in PHP...
Re: (Score:2)
Well, I once had to use PHP to re-import a MSSQL DB that was something like 25GB because no SQL machine was able to import even one of the 133 (?) files that made up the DB contents. Had to leave it running for something like 17 hours, but I think it ended up getting the job done well enough for what needed to be done.
But yeah, I tend to avoid dealing with any large files in PHP whenever possible.
Limited cleanup (Score:5, Insightful)
clean-up of several functions
Does that include safe_quote_string_this_time_i_really_freaking_mean_it, or do_foo(needle, haystack) and foo_do(haystack, needle)? At least it gets namespaces after all this time, even if they're almost deliberately ugly.
Re: (Score:3, Insightful)
All I want is for $foo[0] and $foo["0"] to not be the same reference.
Re:Limited cleanup (Score:5, Funny)
But that might break something that two people found convenient in 1997 and therefore can never be repudiated.
One of these things is not like the OOthers (Score:5, Insightful)
One of these things just doesn't belong
python:
myArray.append(myvalue)
ruby:
myArray.push(myvalue)
objective-c:
[myArray addObject: myvalue]
smalltalk:
myArray add: myvalue
PHP:
array_push($myarray, $myvalue)
Re:One of these things is not like the OOthers (Score:5, Informative)
Or...
PHP:
$myarray[] = $myvalue;
Re: (Score:2)
Re: (Score:2)
Python: myArray.append(myvalue)
Well, you could do something like:
and squint until it looks like list_append, but that's kinda silly. And that $myarray[]=$myvalue; syntax? That should be taken out and shot.
Indeed it does not (Score:2, Insightful)
Market share: PHP 50%, ASP 49%, rest perl.
When PHP and ASP don't totally dominate the job listings, please come back to me again. In the meantime I know which of the function calls pays for my food.
Oh and $array[] = $value;
Coding, you should learn it.: (Score:2)
Fun fact: arrays are not objects in PHP*. Not surprisingly, this means that they don't have properties or methods.
*Another poster already pointed out that PHP does have array objects, and having looked, array objects DO have an append method.
Re: (Score:2)
They should have used # instead of the backslash for namespaces, that way, what I type will coincide with what I wish I was doing to the asshat that came up with that namespace delimiter.
Re: (Score:2)
Octothorpe?
Re: (Score:2)
In no language has quote escaping been the correct approach to putting untrusted data into a SQL statement for quite a long time (even if a language provides only this approach, it's still not the right approach). That stuff is kept for legacy support.
Switch to PDO and start using bound parameters. No matter what happens with the database and heretofore unconsidered character sets, this will never suddenly become vulnerable to a SQL injection when you upgrade your database server.
My items to be fixed (Score:3, Insightful)
Re: (Score:3, Informative)
PHP compiles regex's transparently automatically. If you've used a pattern recently, it will not reparse the statement.
Re: (Score:2, Informative)
Improve array speed (for simple arrays, use internally one simple C array/list - current days, any array is a map);
Try the SplFixedArray class [php.net]. The SPL data structures are much, much faster. [blueparabola.com] I actually rather like the "easy by default, fast when you need it" dichotomy.
Re: (Score:3, Funny)
You also forgot: <p>
Re: (Score:2)
Change your default post settings to "Plain old text" and it will.
Re: (Score:2)
No problem.
Incidentally, HTML still works in "Plain old text" posting mode... so you have to use the HTML entities <, >, &, etc.
Re: (Score:3, Insightful)
# Insert optional configurations by project (and not by host);
-1 You can already do this via
.htaccess sans security resourse limits which should be per host on shared hosting.
Re: (Score:2)
Look into ini_set(). There are a couple odd things you can't override through that, but 95% of the standard configuration can be changed that way. The only thing that doesn't that immediately comes to mind is one of the magic_quotes settings, presumably because the superglobals have already been established by the time you've hit the override function - and magic quotes is finally going away in php6 so that'll be a non-issue moving forward.
Re: (Score:2): (Score:3, Funny)
Re: (Score:2)
Use {$Foo} instead. It's the proper way to put variables in a string.
Re: (Score:3, Insightful)
It will print
Hello 3
Because the namespace begins with a backslash ('\foo\n') and when using it inside double quoted strings must be "\\foo\\n".
Re: (Score:2)
Say that $Foo=3
It will print
Hello 3
Because the namespace begins with a backslash ('\foo\n') and when using it inside double quoted strings must be "\\foo\\n".
The example in the article didn't mention leading with a backslash, or at least I don't think it did (it's been slashdotted, apparently).
And seriously? You have to escape the backslashes? What if you want a literal backslash now?
Re: (Score:2)
Hey, I can't imagine that would be asked a lot or anything. There's no way that would be in the PHP FAQ [php.net]!
Oh wait...
Re: (Score:2)
It'll print $Foo followed by a newline.
Foo\$n would print $n in the Foo namespace. I think. Strictly speaking, you should wrap it in curly braces if you're using anything other than a "non-complex" (for lack of a better term) variable, including array contents and object members.
If variable $Foo was a string that contained the name of some namespace ("bar", for example), then if it wasn't in a quoted string context it would look for constant bar\n, but constants aren't echoed when quoted.
That said, I still
Re: (Score:3, Funny)
Gesundheit.
Well wouldn't you know (Score:5, Insightful)
In the finest tradition of PHP, they made Unicode behaviour dependent on a setting. Have these people learnt nothing from the past? magic_quotes anyone? Bleh. All languages have their warts, but the amount of bad design decisions in this one is just staggering.
FTFY (Score:2)
All languages have their warts, but the total lack of design decisions in this one is just staggering.
Re:Well wouldn't you know (Score:5, Insightful)
Stack Overflow has a question from last year titled Worst PHP practice found in your experience?. Earlier today, I submitted the answer whose summary is "The worst practice in PHP is having the language's behavior change based on a settings file."
Great minds think alike!
New to this version (Score:3, Informative)
Its like fast food.. (Score:5, Funny)
PHP: its like fast food..
You know its bad for you...
You feel like crap after eating it...
But damnit, its right there, oh so conveniently located on the way to work, and sometimes a greasy cheeseburger just hits the spot, even though you know you'll pay for it later in heartburn and much later in high cholesterol and love handles, even though right now its really cheap on the wallet.
Its a guilty pleasure.
And while you're sucking down that greaseball burger, you see the local soup and salad restaraunt and think "next time, I'll eat right.."
But come the next day and you see that taco joint and..
Broken Link in Summary (Score:5, Informative)
Re: (Score:3, Informative)
Re: (Score:3, Insightful)
PHP5 has a fairly proper inheritance and member visibility model and is truly reference based (i.e. $objX = $objY means, in PHP5, that they are reference to the same object instance... opposed to PHP4 where $objX = $objY made a FULL copy of the object to $objX).
So they've got to the level of Java 1.0. Congratulations!
Oh, actually, sorry, they didn't, since there are still no namespaces. But there will be soon, and then it'll be at the level of Java 1.0. Once again, congratulations!
Re: (Score:3, Informative)
How good is the object oriented support in PHP these days?
Everyone involved with PHP pretty openly admits that PHP5's OO model is a direct ripoff of Java, so inheritance, abstracts, interfaces, and access modifiers work pretty much the same way as they do in Java. If you like Java's OO, you should be fine with PHP5's.
Re: (Score:2)
Re: (Score:3, Funny)
I hate to say it, but this wouldn't be the first time on
/. when an article was submitted with a link that was a year or more older and the article made it to the main page. Particularly since the article the GP linked to is a year old to the day.
I can only imagine the submitter/approver looked at the date, say May 6th, and went "OMG, that's today!!!111"
Re: (Score:2) | http://developers.slashdot.org/story/09/05/06/180235/an-early-look-at-whats-coming-in-php-v6 | CC-MAIN-2015-06 | en | refinedweb |
64-bit Win7 Apache 2.2.19, mod_wsgi/3.4-BRANCH Python/2.7.3
No errors visible in logs:
[Wed Aug 01 17:44:48 2012] [notice] Apache/2.2.19 (Win64) mod_wsgi/3.4-BRANCH Python/2.7.3 configured -- resuming normal operations
[Wed Aug 01 17:44:48 2012] [notice] Server built: May 28 2011 15:18:56
[Wed Aug 01 17:44:48 2012] [notice] Parent: Created child process 8528
[Wed Aug 01 17:44:48 2012] [debug] mpm_winnt.c(477): Parent: Sent the scoreboard to the child
[Wed Aug 01 17:44:48 2012] [notice] Child 8528: Child process is running
[Wed Aug 01 17:44:48 2012] [debug] mpm_winnt.c(398): Child 8528: Retrieved our scoreboard from the parent.
[Wed Aug 01 17:44:48 2012] [info] Parent: Duplicating socket 556 and sending it to child process 8528
[Wed Aug 01 17:44:48 2012] [debug] mpm_winnt.c(595): Parent: Sent 1 listeners to child 8528
[Wed Aug 01 17:44:48 2012] [debug] mpm_winnt.c(554): Child 8528: retrieved 1 listeners from parent
[Wed Aug 01 17:44:48 2012] [info] mod_wsgi (pid=8528): Initializing Python.
[Wed Aug 01 17:44:48 2012] [info] mod_wsgi (pid=8528): Attach interpreter ''.
[Wed Aug 01 17:44:48 2012] [notice] Child 8528: Acquired the start mutex.
[Wed Aug 01 17:44:48 2012] [notice] Child 8528: Starting 64 worker threads.
[Wed Aug 01 17:44:48 2012] [notice] Child 8528: Starting thread to listen on port 80.
Config straight out of guides:
WSGIScriptAlias /wsgi "C:/Program Files/Apache Software Foundation/Apache2.2/wsgi/wsgi_test.py"
<Directory "C:/Program Files/Apache Software Foundation/Apache2.2/wsgi/">
Order allow,deny
Allow from all
</Directory>
WSGI app:
def application(environ, start_response):
status = '200 OK'
output = 'Hello World from wsgi!'
response_headers = [('Content-type', 'text/plain'),
('Content-Length', str(len(output)))]
start_response(status, response_headers)
return [output]
Get 403 and log message:
Options ExecCGI is off in this directory: C:/Program Files/Apache Software Foundation/Apache2.2/wsgi/wsgi_test.py
Add Options +ExecCGI to Directory (and stick "#!c:\Python27\python.exe" at top of app file)
Get 500 and log message:
Premature end of script headers: wsgi_test.py
Straight python as scripting language via ExecCGI works fine, with #!... at top of script and ExecCGI config for another Directory, etc.
Implication is that WSGIScriptAlias is behaving a lot like Alias and not setting ExecCGI or invoking properly to call the application() function.
How do I debug ? Or is there something obvious missing ?
Your configuration isn't even being used as is picking up CGI configuration from else where in the Apache configuration.
Rename your file to wsgi_test.wsgi and change the WSGIScriptAlias directive correspondingly and it may become more obvious what the issue is. Technically the CGI definition for .py should not override WSGIScriptAlias.
By posting your answer, you agree to the privacy policy and terms of service.
asked
2 years ago
viewed
296 times
active | http://serverfault.com/questions/413359/mod-wsgi-woes-on-x64-windows-7-running-apache-2-2 | CC-MAIN-2015-06 | en | refinedweb |
You can subscribe to this list here.
Showing
11
results of 11)
wow! that's good to hear :-)
On 10/7/05, Lawrence Bruhmuller <lbruhmuller@...> wrote:
>
>?
> >
> >
> >
>
>
>
> -------------------------------------------------------
> This SF.Net email is sponsored by:
> Power Architecture Resource Center: Free content, downloads, discussions,
> and more.
> _______________________________________________
> activegrid-users mailing list
> activegrid-users@...
>
>?
>
>
>
Gr8 idea!
--- à·à¶´à·à¶¸à¶½à· ජයරà¶à·à¶±
<sapumal.jayaratne@...> wrote:
> Hi Activegriders,
>
> I found Activegriding is very interesting. But we
> need more
> documentation, How To s and Tutorials. Aren't we
> have WIKI or
> something like it. If not, why we don't start one?
>
> --
> Thanks,
> à·à¶´à·à¶¸à¶½à· ජයරà¶à·à¶±.
> N¬HS^µéX¬²'²Þu¼£«·!×¶êÞEë(º·
>
éíz±kyç(×§µÚ0ZvÇb±Ë¬²*'±©ÝÞÛiÿû(ëb¢{'{¢¸r¿¹ÈmZrدz
> âvë®ÉX§X¬µ§-÷
®'n±êì+-²Ê.Ç¢¸ëa¶Úlÿùb²Û,¢êÜyú+éÞ·ùb²Û?+-wèý§-÷
®'n±êì
SGkgQWN0aXZlZ3JpZGVycywKCkkgZm91bmQgQWN0aXZlZ3JpZGluZyBpcyB2ZXJ5IGludGVyZXN0
aW5nLiBCdXQgd2UgbmVlZCBtb3JlCmRvY3VtZW50YXRpb24sIEhvdyBUbyBzIGFuZCBUdXRvcmlh
bHMuIEFyZW4ndCB3ZSBoYXZlIFdJS0kgb3IKc29tZXRoaW5nIGxpa2UgaXQuIElmIG5vdCwgd2h5
IHdlIGRvbid0IHN0YXJ0IG9uZT8KCi0tClRoYW5rcywK4LeD4La04LeU4La44La94LeKIOC2ouC2
uuC2u+C2reC3iuC2sS4K
Hi Brian,
You're not being thick--there is no way to add the body tag from the tool.=
=20
I've filed that. In the meantime, you can just insert the tag directly into=
=20
the xml file. If you can find the place for it. :)
Your modified file should look something like this:
<xforms:group ag:name=3D"LeftGroup" appearance=3D"lyt:vertical" ag:cellStyl=
e=3D"">
<xforms:label xsi:nil=3D"true"/>
</xforms:group>
<ag:body cellStyle=3D""/>
<xforms:group ag:name=3D"RightGroup" appearance=3D"lyt:vertical"=20
ag:cellStyle=3D"">
<xforms:label xsi:nil=3D"true"/>
</xforms:group>
where the important bit is the <ag:body tag. Let me know if that doesn't=20
sort you out.
Thanks,
Matt Fryer
On 7/25/05, Brian Young <brian.young@...> wrote:
>=20
> Hi Active Griders! (Gridlers?)
>=20
>)?
>=20
> In any event, the world is watching,
>=20
> Good Luck,
>=20
> BY
>=20
>=20
>=20
> -------------------------------------------------------
> SF.Net email is sponsored by: Discover Easy Linux Migration Strategies
> from IBM. Find simple to follow Roadmaps, straightforward articles,
> informative Webcasts and more! Get everything you need to get up to
> speed, fast.
k
> _______________________________________________
> activegrid-users mailing list
> activegrid-users@...
>
>
Hi Active Griders! (Gridlers?))?
In any event, the world is watching,
Good Luck,
BY
> From: *Paulo Sérgio Medeiros* <pasemes@...
> <mailto:pasemes@...>>
> Date: Jun 20, 2005 8:09 AM
> Subject: [activegrid-users] Web Services
> To: activegrid-users@...
> <mailto:activegrid-users@...>
>
> I have some questions about web services in AG:
>
> Where are the services deployed? There are WSDL for them? (where?)
Exposing ActiveGrid application components as web services isn't there
yet. When it is, users will be able to provide a file specification for
the generated WSDL. The WSDL will also be available via a URL from the
running application (e.g. as a "?WSDL" query parameter).
> How can i utilize services that exist on other servers?
This will be in ActiveGrid 1.0. Developers will point at an external
service's WSDL, creating a "service reference". Then create application
actions that invoke web service operations. We're hoping to support
consumption of both SOAP and REST services in 1.0.
>
> I think we really need more documentation to develop with AG, is this
> planned only in 1.0 release?
Most likely. Although hopefully this forum can help until then.
Alan Mullendore
ActiveGrid Engineering
I have some questions about web services in AG:
Where are the services deployed? There are WSDL for them? (where?)
How can i utilize services that exist on other servers?
I think we really need more documentation to develop with AG, is this=20
planned only in 1.0 release?
Cheers,
Paulo S=E9rgio.
Hello, I'd like to announce the availability of the 0.7 Early Access
version of ActiveGrid technology, freely available for download at.
This is an incremental release to keep our user base updated with bug
fixes and some limited features as we work towards our 1.0 release, to
be released in the July timeframe. The release notes are online, so
check them out to get the latest information.
Getting ActiveGrid up and running on Linux is now much easier than
before; Linux installers are available for download which include all
the required dependencies.
We look forward to hearing lots of feedback! Please send questions and
issues to support@... If you are a contributing developer
for ActiveGrid or would like to be, join
activegrid-developers@..., and email
develop@... to get more information.
Lastly, we at ActiveGrid have not been monitoring these sourceforge.net
mailing lists closely enough; we have been focusing on the requests sent
to develop@... and support@... We will now be
monitoring these lists regularly to assist the members in using and
enhancing ActiveGrid software.
Regards,
Lawrence
*********************************
Lawrence Bruhmuller
Director, Quality and Release Engineering
ActiveGrid, Inc.
lbruhmuller@...
Hi,
Using vanilla Ubuntu Hoary 5.04 Release.
I've installed the prerequired packages per the setup instructions, mainly from
deb packages and source, although I did have a dependency issue with the
libsqlite library (ubuntu-desktop depends on the version it installs)
I installed the ActiveGrid rpm's by first converting them to deb packages
(using alien) and installing via dpkg -i. No problems reported there.
However, when I try to test the installation via the command:
python2.3 ActiveGridAppBuilder.py
I get the following error:
Traceback (most recent call last):
File "ActiveGridAppBuilder.py", line 12, in ?
import wx.lib.pydocview
ImportError: No module named wx.lib.pydocview
I seem to have pydocview scripts in my python2.3 installation but not sure what
else I would be missing. Any ideas on what to verify or action to take?
Thanks in advance,
jchavezb@...
root@...:/usr/lib/python2.3/site-packages # ls -la
total 180
drwxr-xr-x 6 root root 4096 2005-04-13 04:27 .
drwxr-xr-x 18 root root 12288 2005-04-13 04:17 ..
drwxr-xr-x 2 root root 4096 2005-04-13 04:18 dhm
drwxr-xr-x 2 root root 4096 2005-04-13 04:19 pychecker
-rw-r--r-- 1 root root 119 2005-03-29 12:03 README
drwxr-xr-x 2 root root 4096 2005-04-13 04:18 sqlite
-rwxr-xr-x 1 root root 88857 2005-04-13 04:19 _sqlite.so
drwxr-xr-x 3 root root 4096 2005-04-13 04:21 wx-2.5.4-gtk2-unicode
-rw-r--r-- 1 root root 21 2005-03-16 17:11 wx.pth
-rw-r--r-- 1 root root 14397 2004-10-28 15:34 wxversion.py
-rw-r--r-- 1 root root 14406 2005-03-16 17:11 wxversion.pyc
-rw-r--r-- 1 root root 14344 2005-04-13 04:18 wxversion.pyo
root@...:/usr/lib/python2.3/site-packages # cd ../../activegrid/python/
root@...:/usr/lib/activegrid/python # ls
activegrid ActiveGridIDE.py docview.py multisash.py
ActiveGridAppBuilder.bpel ActiveGridIDE.pyc docview.pyc multisash.pyc
ActiveGridAppBuilder.dpl agwebserver.py __init__.py pydocview.py
ActiveGridAppBuilder.py agwebserver.pyc __init__.pyc pydocview.pyc
ActiveGridAppBuilder.pyc datamodel.xsd mphandler.py wxPythonDemos
ActiveGridAppBuilder.xsd demos mphandler.pyc
root@...:/usr/lib/activegrid/python # python2.3 ActiveGridAppBuilder.py
Traceback (most recent call last):
File "ActiveGridAppBuilder.py", line 12, in ?
import wx.lib.pydocview
ImportError: No module named wx.lib.pydocview
root@...:/usr/lib/activegrid/python #
__________________________________
Do you Yahoo!?
Yahoo! Small Business - Try our new resources site! | http://sourceforge.net/p/activegrid/mailman/activegrid-users/ | CC-MAIN-2015-06 | en | refinedweb |
16 January 2013 15:52 [Source: ICIS news]
(adds paragraph 5)
HOUSTON (ICIS)--DuPont Titanium Technologies has signed a distribution agreement with specialty chemicals distributor Omya, it said on Wednesday.
Under the deal Omya, through subsidiaries Durr Marketing, Azalea Color and Lipscomb Chemical, will become a national distributor and represent DuPont Titanium Technologies’ products in the US and western Canada, DuPont said.
“We see an outstanding fit between Omya’s approach to the markets we serve together and expect that this partnership will improve our market position and better serve our customers in North America,” said Miren Salsamendi, ?xml:namespace>
Switzerland-headquartered Omya is a global producer of industrial minerals and a worldwide distributor of chemical products. Its major markets include paper, polymers, and building materials such as paints, coatings, sealants and adhesives.
DuPont produces | http://www.icis.com/Articles/2013/01/16/9632439/dupont-titanium-picks-omya-as-distributor-for-us-western-canada.html | CC-MAIN-2015-06 | en | refinedweb |
This.
ManyTableSample”. Hit OK to continue.
This will bring up a dialog to add Roles to the Cloud Service.
4. Add an ASP.NET Web Role to the Cloud Service, we’ll use the default name of “WebRole1”. Hit OK.
Solution explorer should look as follows:.
19. If you actually instantiated the ContactDataSource and ran this app,.
using Microsoft.WindowsAzure;
:
Please see the updated post for November 2009 and later...
Similar to the table storage walkthrough I posted last week, I updated this blog post for the Nov 2009/v1.0 and later release of the Windows Azure Tools..
3. Under the Visual C# node (VB is also supported), select the “Cloud Service” project type then select the “Windows Azure Cloud Service” project template. Set the name to be “SimpleBlobSample”. Hit OK to continue.
We’ll now cover the implementation, which can be broken up into 5 different parts:
Implementing the UI
5. Next open up Default.aspx and add the code for the UI. The UI consists of:
.
10. If you actually tried to connect to Blob Storage at this point, you would find that the CloudStorageAccount.FromConfigurationSetting() call would fail with the following message:.
Stay tuned...
Today,:.
Had:
(Update.
[Update:!. | http://blogs.msdn.com/b/jnak/default.aspx?PostSortBy=MostViewed&PageIndex=1 | CC-MAIN-2015-06 | en | refinedweb |
#include <db.h> typedef struct __typedef struct __db_lsn DB_LSN; ;
The
DB_LSN object is a log sequence number
which specifies a unique location in a log file. A
DB_LSN
consists of two unsigned 32-bit integers -- one specifies the log file number, and the other
specifies an offset in the log file. | http://docs.oracle.com/cd/E17276_01/html/api_reference/C/lsn.html | CC-MAIN-2015-06 | en | refinedweb |
arrowskin part that visually displays the direction toward the owner.
The following image shows a Callout container labeled 'Settings':
You can also use the CalloutButton control to open a callout container. The CalloutButton control encapsulates in a single control the callout container and all of the logic necessary to open and close the callout. The CalloutButton control is then said to the be the owner, or host, of the callout.
Callout uses the
horizontalPosition and
verticalPosition properties to determine the position of the
Callout relative to the owner that is specified by the
open()
method.
Both properties can be set to
CalloutPosition.AUTO which selects a
position based on the aspect ratio of the screen for the Callout to fit
with minimal overlap with the owner and and minimal adjustments at the
screen bounds.
Once positioned, the Callout positions the arrow on the side adjacent to the owner, centered as close as possible on the horizontal or vertical center of the owner as appropriate. The arrow is hidden in cases where the Callout position is not adjacent to any edge.
You do not create a Callout container as part of the normal layout of its parent container. Instead, it appears as a pop-up container Callout is initially in its
closed skin state.
When it opens, it adds itself as a pop-up to the PopUpManager,
and transition to the
normal skin state.
To define open and close animations, use a custom skin with transitions between
the
closed and
normal skin states.
Callout changes the default inheritance behavior seen in Flex components and instead, inherits styles from the top-level application. This prevents Callout's contents from unintentionally inheriting styles from an owner (i.e. Button or TextInput) where the default appearance was desired and expected.
The Callout container has the following default characteristics:MXML Syntax
The
<s:Callout> tag inherits all of the tag
attributes of its superclass and adds the following tag attributes:
<s:Callout Properties
Default MXML Property
mxmlContentFactory
More examples
Use the Callout container to create a callout
Define an inline Callout container
Pass data back from the Callout container
Add a ViewNavigator to a Callout
Size and position a callout container
Configure a popup for use with the soft keyboard
Learn more
Related API Elements
spark.skins.mobile.CalloutSkin
spark.components.ContentBackgroundAppearance
spark.components.CalloutPositionHorizontalPosition:String
Fully resolved horizontal position after evaluating CalloutPosition.AUTO.
Update this property in
commitProperties() when the
explicit
horizontalHorizontalPosition():String
protected function set actualHorizontalPosition(value:String):void
actualVerticalPosition:String
Fully resolved vertical position after evaluating CalloutPosition.AUTO.
Update this property in
commitProperties() when the
explicit
verticalVerticalPosition():String
protected function set actualVerticalPosition(value:String):void
arrowDirection:String[read-only]
A read-only property that indicates the direction from the callout towards the owner.
This value is computed based on the callout position given by
horizontalPosition and
verticalPosition.
Exterior and interior positions will point from the callout towards
the edge of the owner. Corner and absolute center positions are not
supported and will return a value of
"none".
The default value is
none.
Implementation
public function get arrowDirection():String
Related API Elements
horizontalPosition:String
Horizontal position of the callout relative to the owner.
Possible values are
"before",
"start",
"middle",
"end",
"after",
and
"auto" (default).
The default value is
CalloutPosition.AUTO.
Implementation
public function get horizontalPosition():String
public function set horizontalPosition(value:String):void
Related API Elements
verticalPosition:String
Vertical position of the callout relative to the owner.
Possible values are
"before",
"start",
"middle",
"end",
"after",
and
"auto" (default).
The default value is
CalloutPosition.AUTO.
Implementation
public function get verticalPosition():String
public function set verticalPosition(value:String):void
Related API Elements
public function Callout()
Constructor.
protected function updateSkinDisplayList():void
Sets the bounds of
arrow, whose geometry isn't fully
specified by the skin's layout.
Subclasses can override this method to update the arrow's size,
position, and visibility, based on the computed
arrowDirection.
By default, this method aligns the arrow on the shorter of either
the
arrow bounds or the
owner bounds. This
implementation assumes that the
arrow and the Callout skin
share the same coordinate space.
<?xml version="1.0" encoding="utf-8"?> <s:Application xmlns: <fx:Style> <!-- Styling Callout with backgroundColor and contentBackgroundAppearance style as flat which gives no shadow --> @namespace s "library://ns.adobe.com/flex/spark"; s|Callout{ backgroundColor:"red"; contentBackgroundAppearance:"flat"; } </fx:Style> <fx:Declarations> <!-- Declaring a Callout which closes when it loses focus. Setting a custom skin to skinClass --> <s:Callout </fx:Declarations> <s:HGroup> <!-- Focusing in the textInput opens a callout --> <s:TextInput </s:HGroup> </s:Application>
Sun Jan 11 2015, 02:24 PM -08:00 | http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/spark/components/Callout.html | CC-MAIN-2015-06 | en | refinedweb |
27 March 2012 11:13 [Source: ICIS news]
SINGAPORE (ICIS)--?xml:namespace>
The producer started building the plant in mid-2010 and the plant is expected to achieve on-spec production at the end of March as scheduled, the source added.
ENN Energy Holdings holds a 55% stake in Xinneng Fenghuang and ENN’s total methanol capacity is expected to reach
ENN has another 360,000 tonne/year coal-based methanol plant at Tengzhou in
The minority partners of Xinneng Fenghuang are Legend Holdings, Minsheng Investment and Shandong Tengzhou Chenlong Energy Group, which hold 17.5%, 17.5%, and 10% stakes in the joint-venture, | http://www.icis.com/Articles/2012/03/27/9545072/chinas-xinneng-fenghuang-energy-starts-up-methanol.html | CC-MAIN-2015-06 | en | refinedweb |
10 July 2012 09:03 [Source: ICIS news]
SINGAPORE (ICIS)--?xml:namespace>
The shutdown is expected to last 20 days, the source said.
The unit is currently operating at full capacity, the source Du-Pont-Mitsui Polychemicals; China’s BASF-YPC; Beijing Organic; DuPont Packaging & Industrial Polymers, and; The polyolefin Co of Singapore | http://www.icis.com/Articles/2012/07/10/9576713/thai-tpi-polene-to-shut-map-ta-phut-eva-plant-for-maintenance.html | CC-MAIN-2015-06 | en | refinedweb |
Decorator in Python is an important feature used to add functionalities to an existing function, object, or code without modifying its structure permanently. It allows you to wrap a function to another function and extend its behavior. Decorators in Python are usually called before the definition of the to-be decorated function.
Before going any further about decorators, it is essential to know more about functions. Functions in Python are first-class objects that support various operations. The properties of functions and other first-class objects can be:
- Passed as an argument
- Returned from a function
- Modified
- Assigned to other variables
- Stored in data structures such as hash tables and lists
Syntax of Python Decorators
Since you now know a bit about functions and what decorators in Python do, it’s time to get acquainted with the syntax, which is:
def func_name():
print("Simplilearn") #function code
func_name = name_decorator(func_name)
In the above syntax:
- func_name: Name of the function
- name_decorator: Name of the decorator
Python also provides a simpler syntax to write decorators in Python using @symbol and the decorator’s name. This simpler way of calling decorators is also called the syntactic way or the “pie” syntax. With the simpler syntax, the code is equal to -
@name_decorator
def func_name():
print("Simplilearn")
The only difference in using both the syntax is that if you want to assign the decorated function to another variable of a function, use the extended syntax. On the other hand, if you’re going to change the same function’s variables and content, you can use both the long and short syntax.
Example of Decorators in Python
Let’s look at an example to better understand the concept of decorators in Python. In the example below, you will create a decorator to add the functionality of converting a string from another function to the uppercase.
#defining the decorator
def name_decorator(function):
#defining the wrapper function
def inner_wrapper():
#defining and calling the actual function
func_name = function()
uppercase_func = func_name.upper()
print (uppercase_func)
return inner_wrapper
#defining the function that will be called in decorator
def greet():
return ("welcome to simplilearn")
print (greet())
decorated = name_decorator(greet)
decorated()
print (greet())
Output:
In the example above, name_decorator() is the decorator. The function ‘greet’ got decorated and was assigned to “decorated.” As you can see, when you print the decorated function, it prints the string in uppercase. However, when you call the greet function again, it prints the exact string in the lowercase. Thus, the greet function is not changed permanently.
Decorators in Python With Parameters
In the example above, you worked with a function that did not have any parameters. But what if the function has parameters? Let’s have a look.
def arguments_decorator(function):
def wrapper_arguments(ar1, ar2):
print("Arguments passed are: {0}, {1}".format(ar1,ar2))
function(ar1, ar2)
return wrapper_arguments
@arguments_decorator
def Name(first_name, last_name):
print("My Name is {0} {1}".format(first_name, last_name))
Name("George", "Hotz")
Output:
How to Reuse Decorators in Python?
Since a decorator is also like a normal function, you can easily reuse it like the others by moving it to the ‘decorators’ module. In the code below, you will create a file named “decorators.py.” The file will contain a decorator function that will print an output multiple times. Next, you will import the decorator function in the main coding file and reuse it.
# code for decorators.py file
def print_thrice(func):
def wrapper_print_thrice():
func()
func()
func()
return wrapper_print_thrice
# code for main.py file
from decorators import print_thrice
@print_thrice
def say_name():
print("Simplilearn!")
say_name()
Output:
As you can see in the output stated above, you could import and reuse the print_thrice decorator function to print the output “Simplilearn!” three times.
Chaining Multiple Decorators in Python
It is possible to chain numerous decorators in Python. It is like applying multiple decorators to a single function. All you have to do is to place all the decorators in separate lines before defining the wrapper function. Here’s an example of a chaining decorator in Python.
def exclamation_decorator(function):
def wrapper(*args, **kwargs):
print("!" * 25)
function(*args, **kwargs)
print("!" * 25)
return wrapper
def hashtag_decorator(function):
def wrapper(*args, **kwargs):
print("#" * 25)
function(*args, **kwargs)
print("#" * 25)
return wrapper
@exclamation_decorator
@hashtag_decorator
def message(greet):
print(greet)
message("Welcome to Simplilearn")
Output:
Defining the General Purpose Decorators in Python
If you noticed, we used *args and **kwargs in the previous example. These variables help define general-purpose decorators in Python. They will collect all the positional and keyword arguments that you pass, and store them in the args and kwargs variables, respectively. These variables allow you to give as many arguments as you want while calling the function, thereby making it general. Here’s an example of general-purpose decorators in Python.
def general_purpose_decorator(function):
def a_wrapper(*args,**kwargs):
print('Positional arguments:', args)
print('Keyword arguments:', kwargs)
function(*args)
return a_wrapper
@general_purpose_decorator
def no_argument():
print("There are no arguments.")
no_argument()
@general_purpose_decorator
def positional_arguments(a, b, c):
print("These are positional arguments")
positional_arguments(1,2,3)
@general_purpose_decorator
def keyword_arguments():
print("These are keyword arguments")
keyword_arguments(city1_name="Mumbai", city2_name="Delhi")
Output:
In the example above, you defined a general-purpose decorator in Python. You also passed the positional and keyword arguments separately with different functions.
Working With Some Fancy Decorators in Python
The examples you have seen until now are pretty basic ones to get you acquainted with the concept of decorators. But now, since you have a basic understanding, let’s look at some advanced decorator functions.
Example: Using Stateful Decorators in Python
Stateful decorators are the decorator function that can keep the track of state, or in other words, remember their previous run’s state. In the code below, you have to create a stateful decorator to decorate a dictionary that can store the number of times the function is called.!")
Output:
Example: Using Classes as Decorators in Python
Although you can use functions to track the state, the typical way is by using a class. Similar to the previous example, you will create a decorator function to maintain the state of a function. But this time the decorator function will be a class. You will have to call .__inti__ and() .__call__() methods to initialize and call the class instance, respectively. Note that the .__call__() method will be called in this example instead of the decorator function. It will act as the wrapper function that you have been using until now.
from functools import update_wrapper
class example:
def __init__(self, func):
update_wrapper(self, func)
self.func = func
self.example = {}
self.n_calls = 0
def __call__(self, *args, **kwargs):
self.n_calls += 1
self.example[self.func.__name__] = self.n_calls
print("The count is :", self.example)
return self.func(*args, **kwargs)
@example
def hi(name):
return f"Hi {name}!"
print(hi("Simplilearn"))
print(hi("George"))
print(hi("Amit"))
Output:
As you can see in the output, you have used a class as a decorator to track the state of the hi() function.
What is the Decorators Library in Python?
The Python Decorators Library serves as a repository of numerous decorators’ codes and examples. You can refer to, use, or tweak them according to your wish. You can also add your decorators to this repository to help others learn. It is a great resource to learn about decorators in Python in-depth with many examples. You can refer to the Decorator Library by clicking here.
Looking forward to making a move to the programming field? Take up the Python Training Course and begin your career as a professional Python programmer
Summing It Up
In this article, you have learned the A to Z of decorators in Python, right from how to create them to the different ways to use them. However, it is an advanced topic. Hence, it is recommended to get a firm understanding of Python basics before delving deep into decorators.
You can refer to Simplilearn’s Python Tutorial for Beginners to clear the basics first. Once your basics are clear, you can move ahead by going into the advanced topics. You can excel in Python programming by further opting for our Online Python Certification Course. The course comes with 38 hours of blended learning, 30 hours of instructor-led learning, and 20+ assisted practices on modules to help you excel in Python development.
Have any questions for us? Leave them in the comments section of this article. Our experts will get back to you on the same, ASAP! | https://www.simplilearn.com/tutorials/python-tutorial/decorators-in-python | CC-MAIN-2021-39 | en | refinedweb |
Package com.openinventor.inventor.fields
Class SoMFUInt32
- java.lang.Object
- com.openinventor.inventor.Inventor
- com.openinventor.inventor.fields.SoField
- com.openinventor.inventor.fields.SoMField
- com.openinventor.inventor.fields.SoMFUInt32
public class SoMFUInt32 extends SoMFieldMultiple-value field containing any number of uint32_t integers. A multiple-value field that contains any number of uint32_t (32-bit) integers.
SoMFUInt32s are written to file as one or more uint32_t integers, in decimal, hexadecimal or octal format.
When more than one value is present, all of the values are enclosed in square brackets and separated by commas; for example:
[ 17, 0xFFFFE0, 0755 ]FUInt32
public SoMFUInt32(SoFieldContainer fieldContainer, java.lang.String fieldName, SoField.FieldTypes fieldType)Default constructor.
Method Detail
find
public int find(int targetValue)Calls find(targetValue, false).
disableDeleteValues
public void disableDeleteValues()Temporary disable value deleting.
setValue
public void setValue(int newValue)Sets the first value in the array to newValue, and deletes. the second and subsequent values.ValuesBuffer
public void setValuesBuffer(java.nio.ByteBuffer userData)Sets the field to contain the values stored in userData. This data will not be copied into the field: it will be directly used by the field.ValueAt
public int getValueAt(int i)
getValues
public int[] getValues(int start)Returns a pointer into the array of values in the field, starting at index start. The values are read-only. See the startEditing()/finishEditing() methods for a way of modifying values in place. | https://developer.openinventor.com/refmans/latest/RefManJava/com/openinventor/inventor/fields/SoMFUInt32.html | CC-MAIN-2021-39 | en | refinedweb |
Suppose.
Views: 18298
Sorry folks! im using a system that is out of dated and os xp thats too time consuming, and not supporting any app. nevethless I must give you complete solution. Dont waste your time you have to make your assignment yourself. I'll assist you
for graphical user interface we use command
import javax.swing.*;
@Anam "
@Muhammad Tariq I've the complete coding but need to implement and execute. without execution how can I say this is the complete solution. however if any body share his or her system via team viewer, i can give complete solution
Please send me at my email address. [email protected]
@Anam You have complete code, very good. How can I help you to execute your code or what kind of difficulties are you facing to execute your code, by the way you can still share your code by attaching files, if you want to.;)
try this code and uplaod error report here if any
import javax.swing.*;
import java.util.Scanner;
public class ExceptionA
{
private int numerator;
private int denominator;
private double quotient;
Public static void main(String[] args)
{
ExceptionA oneTime = new ExceptionA();
oneTime.doIt();
}
public void doIt()
{
try
{
String input = JOptionPane.showInputDialog("Enter Numerator");
Scanner keyboard = new Scanner(System.in);
numerator = keyboard.nextInt();
String input = JOptionPane.showInputDialog("Enter Denominatorr");
Denominator = keyboard.nextInt();
if (denominator == 0)
throw new ExceptionAException();
quotient = numerator /(double) denominator;
System.out.println(numerator + "/" + denominator +"=" + quotient);
JOptionPane.showMessageDialog(null, numerator + "/" + denominator +"=" + quotient);
}
catch (ExceptionAException e)
{
JOptionPane.showMessageDialog(null, "No number can be devisible by Zero");
system.exot(0);
}
}
}
Hi
Can you please connect through teamviewer? Please email me at [email protected]
Thanks so much for the help.
@Anam Sorry to say but I really unable to understand this piece of code, Where are ExceptionB and ExceptionC Classes? where is Inheritance? where is pure virtual function? how are you dealing with controls? there is some spelling mistakes e.g system.exot(0); and some upper and lower case problem e.g Public static void main(String[] args).??
bro pls ia ka solution upload kr den pls pls pls
catch is not working.
msg displayed illegal start of exception
© 2021 Created by + M.Tariq Malik.
Promote Us | Report an Issue | Privacy Policy | Terms of Service | https://vustudents.ning.com/group/cs506webdesignanddevelopment/forum/topics/cs506-assignment-no-2-has-benn-uploaded-on-vulms-its-due-date-is?commentId=3783342%3AComment%3A5817179&groupId=3783342%3AGroup%3A59376 | CC-MAIN-2021-39 | en | refinedweb |
Configuration element for setting a fixed size band. More...
#include <seqan3/alignment/configuration/align_config_band.hpp>
Configuration element for setting a fixed size band.
Configures the banded alignment algorithm. Currently only a fixed size band is allowed. The band is given in form of a seqan3::align_cfg::lower_diagonal and a seqan3::align_cfg::upper_diagonal. A diagonal represents the cells in the alignment matrix that are not crossed by the alignment either downwards by the lower diagonal or rightwards by the upper diagonal. Thus any computed alignment will be inside the area defined by the lower and the upper diagonal.
If this configuration is default constructed or not set during the algorithm configuration the full alignment matrix will be computed.
Before the execution of the alignment algorithm the band configuration is validated. If the user provided an invalid band, e.g. the upper diagonal is smaller than the lower diagonal, the alignment matrix would be ill configured such that the requested alignment method cannot be computed (because the global alignment requires the first cell and the last cell of the matrix to be reachable), then a seqan3::invalid_alignment_configuration will be thrown.
Initialises the fixed size band by setting the lower and the upper matrix diagonal.
The lower diagonal represents the lower bound of the banded matrix, i.e. the alignment cannot pass below this diagonal. Similar, the upper diagonal represents the upper bound of the alignment. During the alignment configuration and execution the band parameters will be checked and an exception will be thrown in case of an invalid configuration. | https://docs.seqan.de/seqan/3-master-user/classseqan3_1_1align__cfg_1_1band__fixed__size.html | CC-MAIN-2021-39 | en | refinedweb |
This tutorial demonstrates a way to automate cloud infrastructure by using Cloud Composer. The example shows how to schedule automated backups of Compute Engine virtual machine (VM) instances.
Cloud Composer is a fully managed workflow orchestration service on Google Cloud. Cloud Composer lets you author workflows with a Python API, schedule them to run automatically or start them manually, and monitor the execution of their tasks in real time through a graphical UI.
Cloud Composer is based on Apache Airflow. Google runs this open source orchestration platform on top of a Google Kubernetes Engine (GKE) cluster. This cluster manages the Airflow workers, and opens up a host of integration opportunities with other Google Cloud products.
This tutorial is intended for operators, IT administrators, and developers who are interested in automating infrastructure and taking a deep technical dive into the core features of Cloud Composer. The tutorial is not meant as an enterprise-level disaster recovery (DR) guide nor as a best practices guide for backups. For more information on how to create a DR plan for your enterprise, see the disaster recovery planning guide.
Defining the architecture
Cloud Composer workflows are defined by creating a Directed Acyclic Graph (DAG). From an Airflow perspective, a DAG is a collection of tasks organized to reflect their directional interdependencies. In this tutorial, you learn how to define an Airflow workflow that runs regularly to back up a Compute Engine virtual machine instance using Persistent Disk snapshots.
The Compute Engine VM used in this example consists of an instance with an associated boot persistent disk. Following the snapshot guidelines, described later, the Cloud Composer backup workflow calls the Compute Engine API to stop the instance, take a snapshot of the persistent disk, and restart the instance. In between these tasks, the workflow waits for each operation to complete before proceeding.
The following diagram summarizes the architecture:
Before you begin the tutorial, the next section shows you how to create a Cloud Composer environment. The advantage of this environment is that it uses multiple Google Cloud products, but you don't have to configure each one individually.
- Cloud Storage: The Airflow DAG and logs are stored in a Cloud Storage bucket.
- Google Kubernetes Engine: The Airflow platform is based on a micro-service architecture, and is suitable to run in GKE.
- Airflow workers load workflow definitions from Cloud Storage and run each task, using the Compute Engine API.
- The Airflow scheduler makes sure that backups are executed in the configured cadence, and with the proper task order.
- Redis is used as a message broker between Airflow components.
- Cloud SQL Proxy is used to communicate with the metadata repository.
- Cloud SQL and App Engine Flex: Cloud Composer also uses a Cloud SQL instance for metadata and an App Engine Flex app that serves the Airflow UI. These resources are not pictured in the diagram because they live in a separate Google-managed project.
For more details, see the Overview of Cloud Composer.
Scaling the workflow
The use case presented in this tutorial is simple: take a snapshot of a single virtual machine with a fixed schedule. However, a real-world scenario can include hundreds of VMs belonging to different parts of the organization, or different tiers of a system, each requiring different backup schedules. Scaling applies not only to our example with Compute Engine VMs, but to any infrastructure component for which a scheduled process needs to be run
Cloud Composer excels at these complex scenarios because it's a
full-fledged workflow engine based on Apache Airflow hosted in the cloud, and
not just an alternative to
Cloud Scheduler
or
cron.
Airflow DAGs, which are flexible representations of a workflow, adapt to real-world needs while still running from a single codebase. To build DAGs suitable for your use case, you can use a combination of the following two approaches:
- Create one DAG instance for groups of infrastructure components where the same schedule can be used to start the process.
- Create independent DAG instances for groups of infrastructure components that require their own schedules.
A DAG can process components in parallel. A task must either start an asynchronous operation for each component, or you must create a branch to process each component. You can build DAGs dynamically from code to add or remove branches and tasks as needed.
Also, you can model dependencies between application tiers within the same DAG. For example: you might want to stop all the web server instances before you stop any app server instances.
These optimizations are outside of the scope of the current tutorial.
Best practices for persistent disks and snapshots
Persistent Disk is durable block storage that can be attached to a virtual machine instance and used either as the primary boot disk for the instance or as a secondary non-boot disk for critical data. PDs are highly available—for every write, three replicas are written, but Google Cloud customers are charged for only one of them.
A snapshot is an exact copy of a persistent disk at a given point in time. Snapshots are incremental and compressed, and are stored transparently in Cloud Storage.
It's possible to take snapshots of any persistent disk while apps are running. No snapshot will ever contain a partially written block. However, if a write operation spanning several blocks is in flight when the backend receives the snapshot creation request, that snapshot might contain only some of the updated blocks. You can deal with these inconsistencies the same way you would address unclean shutdowns.
We recommend that you follow these guidelines to ensure that snapshots are consistent:
- Minimize or avoid disk writes during the snapshot creation process. Scheduling backups during off-peak hours is a good start.
- For secondary non-boot disks, pause apps and processes that write data and freeze or unmount the file system.
For boot disks, it's not safe or feasible to freeze the root volume. Stopping the virtual machine instance before taking a snapshot might be a suitable approach.
To avoid service downtime caused by freezing or stopping a virtual machine, we recommend using a highly available architecture. For more information, see Disaster recovery scenarios for applications.
Use a consistent naming convention for the snapshots. For example, use a timestamp with an appropriate granularity, concatenated with the name of the instance, disk, and zone.
For more information on creating consistent snapshots, see snapshot best practices.
If a persistent disk was created automatically as part of a Composer environment that no longer exists, we recommend that you delete the persistent disk.
Objectives
- Create custom Airflow operators and a sensor for Compute Engine.
- Create a Cloud Composer workflow using the Airflow operators and a sensor.
- Schedule the workflow to back up a Compute Engine instance at regular intervals.
Costs
You can.
Create a Cloud Composer environment. To minimize cost, choose a disk size of 20 GB.GO TO THE CREATE ENVIRONMENT PAGE
It usually takes about 15 minutes to provision the Cloud Composer environment, but it can take up to one hour.
The full code for this tutorial is available on GitHub. To examine the files as you follow along, open the repository in Cloud Shell:GO TO Cloud Shell
- In the Cloud Shell console home directory, run the following command:
git clone
When you finish this tutorial, you can avoid continued billing by deleting the resources you created. For more information, see Cleaning up.
Setting up a sample Compute Engine instance
The first step is to create the sample Compute Engine virtual machine instance to back up. This instance runs WordPress, an open source content management system.
Follow these steps to create the WordPress instance on Compute Engine:
- In Google Cloud Marketplace, go to the WordPress Certified by Bitnami launch page.
- Click Launch.
A pop-up window appears with a list of your projects. Select the project you previously created for this tutorial.
Google Cloud configures the required APIs in your project, and after a short wait it shows a screen with the different configuration options for your WordPress Compute Engine instance.
Optionally, change the boot disk type to SSD to increase the instance boot speed.
Click Deploy.
You are taken to the Deployment Manager screen, where you can see the status of the deployment.
The WordPress Deployment Manager script creates the WordPress Compute Engine instance and two firewall rules to allow TCP traffic to reach the instance through ports 80 and 443. This process might take several minutes, with each item being deployed and showing a progress-wheel icon.
When the process is completed, your WordPress instance is ready and serving the default content on the website URL. The Deployment Manager screen shows the website URL (Site address), the administration console URL (Admin URL) with its user and password, documentation links, and suggested next steps.
Click the site address to verify that your WordPress instance is up and running. You should see a default WordPress blog page.
The sample Compute Engine instance is now ready. The next step is to configure an automatic incremental backup process of that instance's persistent disk.
Creating custom Airflow operators
To back up the persistent disk of the test instance, you can create an Airflow workflow that stops the instance, takes a snapshot of its persistent disk, and restarts the instance. Each of these tasks is defined as code with a custom Airflow operator.
In this section, you learn how to build custom Airflow operators that call the Compute Engine Python Client library to control the instance lifecycle. You have other options for doing this, for example:
- Use the Airflow
BashOperatorto execute
gcloud computecommands.
- Use the Airflow
HTTPOperatorto execute HTTP calls directly to the Compute Engine REST API.
- Use the Airflow
PythonOperatorto call arbitrary Python functions without defining custom operators.
This tutorial doesn't explore those alternatives.
Authorize calls to the Compute Engine API
The custom operators that you create in this tutorial use the Python Client Library to call the Compute Engine API. Requests to the API must be authenticated and authorized. The recommended way is to use a strategy called Application Default Credentials (ADC).
The ADC strategy is applied whenever a call is made from a client library:
- The library verifies if a service account is specified in the environment variable
GOOGLE_APPLICATION_CREDENTIALS.
- If the service account is not specified, the library uses the default service account that Compute Engine or GKE provides.
If these two methods fail, an error occurs.
Airflow operators in this tutorial fall under the second method. When you create the Cloud Composer environment, a GKE cluster is provisioned. The nodes of this cluster run Airflow worker pods. In turn, these workers execute the workflow with the custom operators you define. Because you didn't specify a service account when you created the environment, the default service account for the GKE cluster nodes is what the ADC strategy uses.
GKE cluster nodes are Compute Engine instances. So it's straightforward to obtain the credentials associated with the Compute Engine default service account in the operator code.
def get_compute_api_client(self): if self._compute is None: credentials = GoogleCredentials.get_application_default() self._compute = googleapiclient.discovery.build( 'compute', 'v1', cache_discovery=False, credentials=credentials) return self._compute
This code uses the default application credentials to create a Python client that will send requests to the Compute Engine API. In the following sections, you reference this code when creating each Airflow operator.
As an alternative to using the default Compute Engine service account, it's possible to create a service account and configure it as a connection in the Airflow administration console. This method is described in the Managing Airflow connections page and allows for more granular access control to Google Cloud resources. This tutorial doesn't explore this alternative.
Shut down the Compute Engine instance safely
This section analyzes the creation of the first custom Airflow operator,
StopInstanceOperator. This operator calls the Compute Engine API to stop
the Compute Engine instance that's running WordPress:
In Cloud Shell, use a text editor such as nano or vim to open the
gce_commands.pyfile:
vi $HOME/composer-infra-python/no_sensor/dags/gcp_custom_ops/gce_commands.py
Examine the imports at the top of the file:
import datetime import logging import time from airflow.models import BaseOperator from airflow.utils.decorators import apply_defaults import googleapiclient.discovery from oauth2client.client import GoogleCredentials
The notable imports are:
BaseOperator: base class that all Airflow custom operators are required to inherit.
apply_defaults: function decorator that fills arguments with default values if they are not specified in the operator constructor.
GoogleCredentials: class used to retrieve the app default credentials.
googleapiclient.discovery: client library entry point that allows the discovery of the underlying Google APIs. In this case, the client library builds a resource to interact with the Compute Engine API.
Next, look at the
StopInstanceOperatorclass below the imports:
class StopInstanceOperator(BaseOperator): """Stops the virtual machine instance.""" @apply_defaults def __init__(self, project, zone, instance, *args, **kwargs): self._compute = None self.project = project self.zone = zone self.instance = instance super(Stop('Stopping instance %s in project %s and zone %s', self.instance, self.project, self.zone) self.get_compute_api_client().instances().stop( project=self.project, zone=self.zone, instance=self.instance).execute() time.sleep(90)
The
StopInstanceOperatorclass has three methods:
__init__: the class constructor. Receives the project name, the zone where the instance is running, and the name of the instance you want to stop. Also, it initializes the
self._computeproperty to
None, so it can be asynchronously loaded (also known as lazy loading) later in the execution phase.
get_compute_api_client: helper method that returns an instance of Compute Engine API. On the first call the method will initialize the
self._computeproperty, and on subsequent calls the method will return the existing instance. It uses the ADC provided by the class
GoogleCredentialsto authenticate with the API and authorize subsequent calls.
execute: main operator method overridden from
BaseOperator. Airflow calls this method to run the operator. The method prints an info message to the logs and then calls the Compute Engine API to stop the Compute Engine instance specified by the three parameters received in the constructor. The
sleep()function at the end waits until the instance has been stopped. In a production environment, you must use a more deterministic method such as operator cross-communication. That technique is described later in this tutorial.
The
stop() method from the Compute Engine API
shuts down the virtual machine instance cleanly.
The operating system executes the
init.d shutdown scripts, including the one
for WordPress at
/etc/init.d/bitnami. This script also handles the WordPress
startup when the virtual machine is started again. You can examine the service
definition with the shutdown and startup configuration at
/etc/systemd/system/bitnami.service.
Create uniquely named incremental backup snapshots
This section creates the second custom operator,
SnapshotDiskOperator. This
operator takes a snapshot of the instance's persistent disk.
In the
gce_commands.py file that you opened in the previous section,
look at the
SnapshotDiskOperator class:
class SnapshotDiskOperator(BaseOperator): """Takes a snapshot of a persistent disk.""" @apply_defaults def __init__(self, project, zone, instance, disk, *args, **kwargs): self._compute = None self.project = project self.zone = zone self.instance = instance self.disk = disk super(SnapshotDiskOperator, self).__init__(*args, **kwargs) def get_compute_api_client(self): if self._compute is None: credentials = GoogleCredentials.get_application_default() self._compute = googleapiclient.discovery.build( 'compute', 'v1', cache_discovery=False, credentials=credentials) return self._compute def generate_snapshot_name(self, instance): # Snapshot name must match regex '(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?)' return ('' + self.instance + '-' + datetime.datetime.now().strftime('%Y-%m-%d-%H%M%S')) def execute(self, context): snapshot_name = self.generate_snapshot_name(self.instance) logging.info( ("Creating snapshot '%s' from: {disk=%s, instance=%s, project=%s, " "zone=%s}"), snapshot_name, self.disk, self.instance, self.project, self.zone) self.get_compute_api_client().disks().createSnapshot( project=self.project, zone=self.zone, disk=self.disk, body={'name': snapshot_name}).execute() time.sleep(120)
The
SnapshotDiskOperator class has the following methods:
__init__: the class constructor. Similar to the constructor in the
StopInstanceOperatorclass, but in addition to the project, zone, and instance name, this constructor receives the name of the disk to create the snapshot from. This is because an instance can have more than one persistent disk attached to it.
generate_snapshot_name: This sample method creates a simple unique name for each snapshot using the name of the instance, the date, and the time with a one-second granularity. Adjust the name to your needs, for example: by adding the disk name when multiple disks are attached to an instance, or by increasing the time granularity to support ad hoc snapshot creation requests.
execute: the main operator method overridden from
BaseOperator. When the Airflow worker executes it, it generates a snapshot name using the
generate_snapshot_namemethod. Then it prints an info message and calls the Compute Engine API to create the snapshot with the parameters received in the constructor.
Start the Compute Engine instance
In this section, you create the third and final custom operator,
StartInstanceOperator. This operator restarts a Compute Engine
instance.
In the
gce_commands.py file you previously opened, look at the
SnapshotDiskOperator class toward the bottom of the file:
class StartInstanceOperator(BaseOperator): """Starts a virtual machine instance.""" @apply_defaults def __init__(self, project, zone, instance, *args, **kwargs): self._compute = None self.project = project self.zone = zone self.instance = instance super(Start('Starting instance %s in project %s and zone %s', self.instance, self.project, self.zone) self.get_compute_api_client().instances().start( project=self.project, zone=self.zone, instance=self.instance).execute() time.sleep(20)
The
StartInstanceOperatorclass has the following methods:
__init__: the class constructor. Similar to the constructor in the
StopInstanceOperatorclass.
execute: the main operator method overridden from
BaseOperator. The difference from the previous operators is the invocation of the appropriate Compute Engine API to start the instance indicated in the constructor input parameters.
Defining the Airflow workflow
Earlier, you defined three custom operators. These operators define the tasks that form part of an Airflow workflow. The workflow presented here is simple and linear, but Airflow workflows can be complex Directed Acyclic Graphs.
This section creates the DAG using these operators, deploys the DAG to Cloud Composer, and runs the DAG.
- Close the
gce_commands.pyfile.
Configure the Directed Acyclic Graph
The DAG defines the workflow that Airflow executes. For the DAG to know which disk to back up, you need to define a few variables: which Compute Engine instance the disk is attached to, the zone the instance is running on, and the project where all the resources are available.
You could hard-code these variables in the DAG source code itself, but it's a best practice to define them as Airflow variables. This way, any configuration changes can be managed centrally and independently from code deployments.
Define the DAG configuration:
In Cloud Shell, set the location of your Cloud Composer environment:
LOCATION=[CLOUD_ENV_LOCATION]
The location is the Compute Engine region where the Cloud Composer environment is located, for example:
us-central1or
europe-west1. It was set at the time of environment creation and is available in the Cloud Composer console page.
Set the Cloud Composer environment name:
ENVIRONMENT=$(gcloud composer environments list \ --format="value(name)" --locations $LOCATION)
The
--formatparameter is used to select only the
namecolumn from the resulting table. You can assume that only one environment has been created.
Create the
PROJECTvariable in Airflow using the name of the current Google Cloud project:
gcloud composer environments run $ENVIRONMENT --location $LOCATION \ variables -- --set PROJECT $(gcloud config get-value project)
Where:
gcloud composer environments runis used to run Airflow CLI commands.
- The
variablesAirflow command sets the
PROJECTAirflow variable to the value returned by
gcloud config
Create the
INSTANCEvariable in Airflow with the name of the WordPress instance:
gcloud composer environments run $ENVIRONMENT --location $LOCATION \ variables -- --set INSTANCE \ $(gcloud compute instances list \ --format="value(name)" --filter="name~'.*wordpress.*'")
This command uses the
--filterparameter to select only the instance whose
namematches a regular expression containing the string
wordpress. This approach assumes that there is only one such instance, and that your instance and disk have "wordpress" as part of their name, which is true if you accepted the defaults.
Create the
ZONEvariable in Airflow using the zone of the WordPress instance:
gcloud composer environments run $ENVIRONMENT --location $LOCATION \ variables -- --set ZONE \ $(gcloud compute instances list \ --format="value(zone)" --filter="name~'.*wordpress.*'")
Create the
DISKvariable in Airflow with the name of the persistent disk attached to the WordPress instance:
gcloud composer environments run $ENVIRONMENT --location $LOCATION \ variables -- --set DISK \ $(gcloud compute disks list \ --format="value(name)" --filter="name~'.*wordpress.*'")
Verify that the Airflow variables have been created correctly:
In the Cloud Console, go to the Cloud Composer page.
Go to the Cloud Composer page
In the Airflow web server column, click the Airflow link. A new tab showing the Airflow web server main page opens.
Click Admin and then Variables.
The list shows the DAG configuration variables.
Create the Directed Acyclic Graph
The DAG definition lives in a dedicated Python file. Your next step is to create the DAG, chaining the three custom operators.
In Cloud Shell, use a text editor such as nano or vim to open the
backup_vm_instance.pyfile:
vi $HOME/composer-infra-python/no_sensor/dags/backup_vm_instance.py
Examine the imports at the top of the file:
import datetime from airflow import DAG from airflow.models import Variable from gcp_custom_ops.gce_commands import SnapshotDiskOperator from gcp_custom_ops.gce_commands import StartInstanceOperator from gcp_custom_ops.gce_commands import StopInstanceOperator from airflow.operators.dummy_operator import DummyOperator
Summarizing these imports:
DAGis the Directed Acyclic Graph class defined by Airflow.
DummyOperatoris used to create the beginning and ending no-op operators to improve the workflow visualization. In more complex DAGs,
DummyOperatorcan be used to join branches and to create SubDAGs.
- The DAG uses the three operators that you defined in the previous sections.
Define the values of the parameters to be passed to operator constructors:
INTERVAL = '@daily' START_DATE = datetime.datetime(2018, 7, 16) PROJECT = Variable.get('PROJECT') ZONE = Variable.get('ZONE') INSTANCE = Variable.get('INSTANCE') DISK = Variable.get('DISK')
Where:
INTERVALdefines how often the backup workflow runs. The preceding code specifies a daily recurrence using an Airflow cron preset. If you want to use a different interval, see the DAG Runs reference page. You could also trigger the workflow manually, independent of this schedule.
START_DATEdefines the point in time when the backups are scheduled to start. There is no need to change this value.
- The rest of the values are retrieved from the Airflow variables that you configured in the previous section.
Use the following code to create the DAG with some of the previously defined parameters. This code also gives the DAG a name and a description, both of which are shown in the Cloud Composer UI.
dag1 = DAG('backup_vm_instance', description='Backup a Compute Engine instance using an Airflow DAG', schedule_interval=INTERVAL, start_date=START_DATE, catchup=False)
Populate the DAG with tasks, which are operator instances:
## Dummy tasks begin = DummyOperator(task_id='begin', retries=1, dag=dag1) end = DummyOperator(task_id='end', retries=1) ## Compute Engine tasks stop_instance = StopInstanceOperator( project=PROJECT, zone=ZONE, instance=INSTANCE, task_id='stop_instance') snapshot_disk = SnapshotDiskOperator( project=PROJECT, zone=ZONE, instance=INSTANCE, disk=DISK, task_id='snapshot_disk') start_instance = StartInstanceOperator( project=PROJECT, zone=ZONE, instance=INSTANCE, task_id='start_instance')
This code instantiates all the tasks needed for the workflow, passing the defined parameters to the corresponding operator constructors.
- The
task_idvalues are the unique IDs that will be shown in the Cloud Composer UI. You use these IDs later to pass data between tasks.
retriessets the number of times to retry a task before failing. For
DummyOperatortasks, these values are ignored.
dag=dagindicates that a task is attached to the previously created DAG. This parameter is only required in the first task of the workflow.
Define the sequence of tasks that comprise the workflow DAG:
# Airflow DAG definition begin >> stop_instance >> snapshot_disk >> start_instance >> end
Close the
gce_commands.pyfile.
Run the workflow
The workflow represented by the operator DAG is now ready to be run by
Cloud Composer. Cloud Composer reads the DAG and the custom
operator definitions from an
associated Cloud Storage bucket.
This bucket and the corresponding
dags directory were
automatically created when you created the Cloud Composer environment.
Using Cloud Shell, you can copy the DAG and custom operators files into the associated Cloud Storage bucket:
In Cloud Shell, get the bucket name:
BUCKET=$(gsutil ls) echo $BUCKET
There should be a single bucket with a name of the form:
gs://[REGION]-{ENVIRONMENT_NAME]-{ID}-bucket/.
Execute the following script to copy the DAG and custom operators files into the corresponding bucket directories:
gsutil cp $HOME/composer-infra-python/no_sensor/dags/gcp_custom_ops/gce_commands.py "$BUCKET"dags/gcp_custom_ops/gce_commands.py gsutil cp $HOME/composer-infra-python/no_sensor/dags/backup_vm_instance.py "$BUCKET"dags
The bucket name already includes a trailing slash, hence the double quotes around the
$BUCKETvariable.
In the Cloud Console, go to the Cloud Composer page.
Go to the Cloud Composer page
In the Airflow web server column, click the Airflow link. A new tab showing the Airflow web server main page opens. Wait two to three minutes and reload the page. It might take a few cycles of waiting and then reloading for the page to be ready.
A list showing the newly created DAG is shown, similar to the following:
If there are syntax errors in the code, a message appears on top of the DAG table. If there are runtime errors, they are marked under DAG Runs. Correct any errors before continuing. The easiest way to do this is to recopy the files from the GitHub repo into the bucket.
To see a more detailed stack trace, run the following command in Cloud Shell:
gcloud composer environments run $ENVIRONMENT --location $LOCATION list_dags
Airflow starts running the workflow immediately, shown under the column Dag Runs.
The workflow is already underway, but if you need to run it again, you can trigger it manually with the following steps:
- In the Links column, click the first icon, Trigger Dag, marked with an arrow in the previous screenshot.
In the pop-up confirming Are you sure?, click OK.
In a few seconds, the workflow starts and a new run appears as a light green circle under DAG Runs.
In the Links column, click the Graph View icon, marked with an arrow in the previous screenshot.
The Graph View shows the workflow, the successfully executed tasks with a dark green border, the task being executed with a light green border and the pending tasks with no border. You can click the task to view logs, see its details, and perform other operations.
To follow the execution along, periodically click the refresh button at the top-right corner.
Congratulations! You completed your first Cloud Composer workflow run. When the workflow finishes, it creates a snapshot of the Compute Engine instance persistent disk.
In Cloud Shell, verify that the snapshot has been created:
gcloud compute snapshots list
Alternatively, you can use the Cloud Console menu to go to the Compute Engine Snapshots page.
One snapshot should be visible at this point. Subsequent workflow runs, triggered either manually or automatically following the specified schedule, will create further snapshots.
Snapshots are incremental. The size of the first snapshot is the largest because it contains all the blocks from the Persistent Disk in compressed form. Successive snapshots only contain the blocks that were changed from the previous snapshot, and any references to the unchanged blocks. So subsequent snapshots are smaller than the first one, take less time to produce, and cost less.
If a snapshot is deleted, its data is moved into the next corresponding snapshot to keep the consistency of consecutive deltas being stored in the snapshot chain. Only when all snapshots are removed is all the backed-up data from the persistent disk removed.
Creating the custom Airflow sensor
When running the workflow, you might have noticed that it takes some time to
complete each step. This wait is because the operators include a
sleep()
instruction at the end to give time to the Compute Engine API to finish its
work before starting the next task.
This approach is not optimal, however, and can cause unexpected issues. For example, during snapshot creation the wait time might be too long for incremental snapshots, which means you're wasting time waiting for a task that has already finished. Or the wait time might be too short. This can cause the whole workflow to fail or to produce unreliable results because the instance is not fully stopped or the snapshot process is not done when the machine is started.
You need to be able to tell the next task that the previous task is done. One solution is to use Airflow Sensors, which pause the workflow until some criteria is met. In this case, the criterion is the previous Compute Engine operation finishing successfully.
Share cross-communication data across tasks
When tasks need to communicate with each other, Airflow provides a mechanism known as XCom, or "cross-communication." XCom lets tasks exchange messages consisting of a key, a value, and a timestamp.
The simplest way to pass a message using XCom is for an operator to return a
value from its
execute() method. The value can be any object that Python can
serialize using the
pickle module.
The three operators described in previous sections call the
Compute Engine API. All these API calls return an
Operation resource
object. These objects are meant to be used to manage asynchronous requests such
as the ones on the Airflow operators. Each object has a field
name that you
can use to poll for the latest state of the Compute Engine operation.
Modify the operators to return the
name of the Operation resource
object:
In Cloud Shell, use a text editor such as nano or vim to open the
gce_command.pyfile, this time from the
sensor/dags/gcp_custom_opsdirectory:
vi $HOME/composer-infra-python/sensor/dags/gcp_custom_ops/gce_commands.py
In the execute method of the
StopInstanceOperator, notice how the following code:
self.get_compute_api_client().instances().stop( project=self.project, zone=self.zone, instance=self.instance).execute() time.sleep(90)
has been replaced with this code:
operation = self.get_compute_api_client().instances().stop( project=self.project, zone=self.zone, instance=self.instance).execute() return operation['name']
Where:
- The first line captures the return value from the API call into the
operationvariable.
- The second line returns the operation
namefield from the
execute()method. This instruction serializes the name using pickle and pushes it into the XCom intra-task shared space. The value will later be pulled in last-in, first-out order.
If a task needs to push multiple values, it's possible to give XCom an explicit
keyby calling
xcom_push()directly instead of returning the value.
Similarly, in the execute method of the
SnapshotDiskOperator, note how the following code:
self.get_compute_api_client().disks().createSnapshot( project=self.project, zone=self.zone, disk=self.disk, body={'name': snapshot_name}).execute() time.sleep(120)
has been replaced with this code:
operation = self.get_compute_api_client().disks().createSnapshot( project=self.project, zone=self.zone, disk=self.disk, body={'name': snapshot_name}).execute() return operation['name']
There are two unrelated names in this code. The first one refers to the snapshot name, and the second is the operation name.
Finally, in the execute method of the
StartInstanceOperator, note how the following code:
self.get_compute_api_client().instances().start( project=self.project, zone=self.zone, instance=self.instance).execute() time.sleep(20)
has been replaced with this code:
operation = self.get_compute_api_client().instances().start( project=self.project, zone=self.zone, instance=self.instance).execute() return operation['name']
At this point, there should not be any calls to the
sleep()method throughout the
gce_commands.pyfile. Make sure this is true by searching the file for
sleep. Otherwise, double-check the previous steps in this section.
Since no calls to
sleep()are made from the code, the following line was removed from the imports section at the top of the file:
import time
Close the
gce_commands.pyfile.
Implement and expose the sensor
In the previous section, you modified each operator to return a Compute Engine operation name. In this section, using the operation name, you create an Airflow Sensor to poll the Compute Engine API for the completion of each operation.
In Cloud Shell, use a text editor such as nano or vim to open the
gce_commands.pyfile, making sure you use the
sensor/dags/gcp_custom_opsdirectory:
vi $HOME/composer-infra-python/sensor/dags/gcp_custom_ops/gce_commands.py
Note the following line of code at the top of the import section, just below the
from airflow.models
import BaseOperatorline:
from airflow.operators.sensors import BaseSensorOperator
All sensors are derived from the
BaseSensorOperatorclass, and must override its
poke()method.
Examine the new
OperationStatusSensorclass:
class OperationStatusSensor(BaseSensorOperator): """Waits for a Compute Engine operation to complete.""" @apply_defaults def __init__(self, project, zone, instance, prior_task_id, *args, **kwargs): self._compute = None self.project = project self.zone = zone self.instance = instance self.prior_task_id = prior_task_id super(OperationStatusSensor, self).__init__(*args, **kwargs) def get_compute_api_client(self): if self._compute is None: credentials = GoogleCredentials.get_application_default() self._compute = googleapiclient.discovery.build( 'compute', 'v1', cache_discovery=False, credentials=credentials) return self._compute def poke(self, context): operation_name = context['task_instance'].xcom_pull( task_ids=self.prior_task_id) result = self.get_compute_api_client().zoneOperations().get( project=self.project, zone=self.zone, operation=operation_name).execute() logging.info( "Task '%s' current status: '%s'", self.prior_task_id, result['status']) if result['status'] == 'DONE': return True else: logging.info("Waiting for task '%s' to complete", self.prior_task_id) return False
The class
OperationStatusSensorhas the following methods:
__init__: the class constructor. This constructor takes similar parameters to the ones for the Operators, with one exception:
prior_task_id. This parameter is the ID of the previous task.
poke: the main sensor method overridden from
BaseSensorOperator. Airflow calls this method every 60 seconds until the method returns
True. Only in that case are downstream tasks allowed to run.
You can configure the interval for these retries by passing the
poke_intervalparameter to the constructor. You can also define a
timeout. For more information, see the BaseSensorOperator API reference.
In the implementation of the preceding
pokemethod, the first line is a call to
xcom_pull(). This method obtains the most recent XCom value for the task identified by
prior_task_id. The value is the name of a Compute Engine Operation and is stored in the
operation_namevariable.
The code then executes the
zoneOperations.get()method, passing
operation_nameas a parameter to obtain the latest status for the operation. If the status is
DONE, then the
poke()method returns
True, otherwise
False. In the former case, downstream tasks will be started; in the latter case, the workflow execution remains paused and the
poke()method is called again after
poke_intervalseconds.
Close the
gce_commands.pyfile.
Update the workflow
After you create the sensor, you can add it to the workflow. In this section, you update the workflow to its final state, which includes all three operators plus sensor tasks in between. You then run and verify the updated workflow.
In Cloud Shell, use a text editor such as nano or vim to open the
backup_vm_instance.pyfile, this time from the
sensor/dagsdirectory:
vi $HOME/composer-infra-python/sensor/dags/backup_vm_instance.py
In the imports section, notice that the newly created sensor is imported below the line
from airflow operators import StartInstanceOperator:
from airflow.operators import OperationStatusSensor
Examine the lines following the
## Wait taskscomment
## Wait tasks wait_for_stop = OperationStatusSensor( project=PROJECT, zone=ZONE, instance=INSTANCE, prior_task_id='stop_instance', poke_interval=15, task_id='wait_for_stop') wait_for_snapshot = OperationStatusSensor( project=PROJECT, zone=ZONE, instance=INSTANCE, prior_task_id='snapshot_disk', poke_interval=10, task_id='wait_for_snapshot') wait_for_start = OperationStatusSensor( project=PROJECT, zone=ZONE, instance=INSTANCE, prior_task_id='start_instance', poke_interval=5, task_id='wait_for_start')
The code reuses
OperationStatusSensorto define three intermediate "wait tasks". Each of these tasks waits for the previous operation to complete. The following parameters are passed to the sensor constructor:
- The
PROJECT,
ZONE, and
INSTANCEof the WordPress instance, already defined in the file.
prior_task_id: The ID of the task that the sensor is waiting for. For example, the
wait_for_stoptask waits for the task with ID
stop_instanceto be completed.
poke_interval: The number of seconds that Airflow should wait in between retry calls to the sensor's
poke()method. In other words, the frequency to verify whether
prior_task_idis already done.
task_id: The ID of the newly created wait task.
At the bottom of the file, note how the following code:
begin >> stop_instance >> snapshot_disk >> start_instance >> end
has been replaced with this code:
begin >> stop_instance >> wait_for_stop >> snapshot_disk >> wait_for_snapshot \ >> start_instance >> wait_for_start >> end
These lines define the full backup workflow.
Close the
backup_vm_instance.pyfile.
Now you need to copy the DAG and custom operators files from the associated Cloud Storage bucket:
In Cloud Shell, get the bucket name:
BUCKET=$(gsutil ls) echo $BUCKET
You should see a single bucket with a name of the form:
gs://[REGION]-[ENVIRONMENT_NAME]-[ID]-bucket/.
Execute the following script to copy the DAG and custom operators files into the corresponding bucket directories:
gsutil cp $HOME/composer-infra-python/sensor/dags/gcp_custom_ops/gce_commands.py "$BUCKET"dags/gcp_custom_ops/gce_commands.py gsutil cp $HOME/composer-infra-python/sensor/dags/backup_vm_instance.py "$BUCKET"dags
The bucket name already includes a trailing slash, hence the double quotes around the
$BUCKETvariable
Upload the updated workflow to Airflow:
In the Cloud Console, go to the Cloud Composer page.
Go to the Cloud Composer page
In the Airflow column, click the Airflow web server link to show the Airflow main page.
Wait for two or three minutes until Airflow automatically updates the workflow. You might observe the DAG table becoming empty momentarily. Reload the page a few times until the Links section appears consistently.
Make sure no errors are shown, and in the Links section, click Tree View.
On the left, the workflow is represented as a bottom-up tree. On the right, a graph of the task runs for different dates. A green square means a successful run for that specific task and date. A white square means a task that has never been run. Because you updated the DAG with new sensor tasks, all of those tasks are shown in white, while the Compute Engine tasks are shown in green.
Run the updated backup workflow:
- In the top menu, click DAGs to go back to the main page.
- In the Links column, click Trigger DAG.
- In the pop-up confirming Are you sure?, click OK. A new workflowA run starts, appearing as a light green circle in the DAG Runs column.
Under Links, click the Graph View icon to observe the workflow execution in real time.
Click the refresh button on the right side to follow the task execution. Note how the workflow stops on each of the sensor tasks to wait for the previous task to finish. The wait time is adjusted to the needs of each task instead of relying on a hard-coded sleep value.
Optionally, during the workflow, go back to the Cloud Console, select the Compute Engine menu, and click VM instances to see how the virtual machine gets stopped and restarted. You can also click Snapshots to see the new snapshot being created.
You have now run a backup workflow that creates a snapshot from a Compute Engine instance. This snapshot follows best practices and optimizes the flow with sensors.
Restoring an instance from a snapshot
Having a snapshot available is only part of the backup story. The other part is being able to restore your instance from the snapshot.
To create an instance using a snapshot:
In Cloud Shell, get a list of the available snapshots:
gcloud compute snapshots list
The output is similar to this:
NAME DISK_SIZE_GB SRC_DISK STATUS wordpress-1-vm-2018-07-18-120044 10 us-central1-c/disks/wordpress-1-vm READY wordpress-1-vm-2018-07-18-120749 10 us-central1-c/disks/wordpress-1-vm READY wordpress-1-vm-2018-07-18-125138 10 us-central1-c/disks/wordpress-1-vm READY
Select a snapshot and create a standalone boot persistent disk from it. Replace the bracketed placeholders with your own values.
gcloud compute disks create [DISK_NAME] --source-snapshot [SNAPSHOT_NAME] \ --zone=[ZONE]
Where:
DISK_NAMEis a the name of the new standalone boot persistent disk.
SNAPSHOT_NAMEis the selected snapshot from the first column of the previous output.
ZONEis the compute zone where the new disk will be created.
Create a new instance, using the boot disk. Replace
[INSTANCE_NAME]with the name of the instance you want to create.
gcloud compute instances create [INSTANCE_NAME] --disk name=[DISK_NAME],boot=yes \ --zone=ZONE --tags=wordpress-1-tcp-443,wordpress-1-tcp-80
With the two tags specified in the command, the instance is automatically allowed to receive incoming traffic on ports 443 and 80 because of the pre-existing firewall rules that were created for the initial WordPress instance.
Take note of the new instance's External IP returned by the previous command.
Verify that WordPress is running on the newly created instance. On a new browser tab, navigate to the external IP address. The WordPress default landing page is shown.
Alternatively, create an instance using a snapshot from the console:
In the Cloud Console, go to the Snapshots page:
Click the most recent snapshot.
Click Create Instance.
In the New VM Instance form, click Management, security, disks, networking, sole tenancy and then Networking.
Add
wordpress-1-tcp-443and
wordpress-1-tcp-80to the Network tags field, pressing enter after each tag. See above for an explanation of these tags.
Click Create.
A new instance based on the latest snapshot is created, and is ready to serve content.
Open the Compute Engine instances page and take note of the new instance's external IP.
Verify that WordPress is running on the newly created instance. Navigate to the external IP on a new browser tab.
For more details, see Creating an instance from a snapshot.
Clean up
- In the Cloud Console, go to the Manage resources page.
- In the project list, select the project that you want to delete, and then click Delete.
- In the dialog, type the project ID, and then click Shut down to delete the project.
What's next
- Read about best practices for enterprise organizations.
- Read about designing and implementing a disaster recovery plan.
- Read more about Cloud Composer concepts.
- Read more about Apache Airflow.
- Explore reference architectures, diagrams, tutorials, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. | https://cloud.google.com/architecture/automating-infrastructure-using-cloud-composer?hl=ru&skip_cache=true | CC-MAIN-2021-39 | en | refinedweb |
Examples of transformations between JSON and objects and collections
- 2020-05-27 04:47:39
- OfStack
Interconversion between JSON strings and java objects [json-lib]
In the development process, it is often necessary to exchange data with other systems. Data exchange formats include XML, JSON, etc. As a lightweight data format, JSON is more efficient than xml. XML needs a lot of tags, which undoubtedly occupies network traffic.
JSON can be in two formats, one for objects and one for array objects,
{"name":"JSON","address":" Xicheng district, Beijing ","age":25}//JSON The object format of the string
[{"name":"JSON","address":" Xicheng district, Beijing ","age":25}]// Data object format
As can be seen from the above two formats, the only difference between the object format and array object format is the addition of [] on the basis of the object format. As for the specific structure, it can be seen that both of them are in the form of key-value pairs separated by a comma (,) in the English state.
This format is also popular when data is transferred between the front end and the back end. The back end returns a string in json format. The front end USES the JSON.parse () method in js to parse the JSON string into an json object and then traverses it for the front end to use.
Now let's get down to business and introduce the interchanges between JSON and java objects in JAVA.
To achieve interconversion between JSON and java objects, the third party jar package is needed. Here, the jar package json-lib is used. The download address is: https: / / sourceforge net/projects/json lib/json lib need commons - beanutils - 1.8.0 comes with. jar, commons - collections - 3.2.1. jar, commons lang - 2.5 jar, commons - logging - 1.1.1. jar, ezmorph - 1.0.6. jar5 package of support, You can download it from the Internet by yourself. The download address is no longer posted here.
json-lib provides several classes to do this, for example, JSONObject, JSONArray. As you can see from the name of the class, JSONObject should be converted in object format, while JSONArray should be converted in array objects (that is, with the form []).
1. Interconversion of java ordinary objects and json strings
java object -- "" string
java ordinary objects refer to 1 java bean in java, that is, 1 entity class, such as,
package com.cn.study.day3; public class Student { // The name private String name; // age private String age; // address private String address; public String getName() { return name; } public void setName(String name) { this.name = name; } public String getAge() { return age; } public void setAge(String age) { this.age = age; } public String getAddress() { return address; } public void setAddress(String address) { this.address = address; } @Override public String toString() { return "Student [name=" + name + ", age=" + age + ", address=" + address + "]"; } }
Above is my ordinary java entity class, see how json-lib converts it to a string,
public static void convertObject() { Student stu=new Student(); stu.setName("JSON"); stu.setAge("23"); stu.setAddress(" Xicheng district, Beijing "); //1 , the use of JSONObject JSONObject json = JSONObject.fromObject(stu); //2 , the use of JSONArray JSONArray array=JSONArray.fromObject(stu); String strJson=json.toString(); String strArray=array.toString(); System.out.println("strJson:"+strJson); System.out.println("strArray:"+strArray); }
I defined an Student entity class, and then converted it to JSON string using JSONObject and JSONArray respectively. See the printed result below.
strJson:{"address":" Xicheng district, Beijing ","age":"23","name":"JSON"} strArray:[{"address":" Xicheng district, Beijing ","age":"23","name":"JSON"}]
It can be seen from the results that both methods can convert java objects into JSON strings, but the structure after conversion is different.
JSON string -- "java object"
The above explains how to convert java object into JSON string. Now let's see how to convert JSON string format into java object.
First, you need to define two strings in different formats. You need to escape the double quotes with \.
public static void jsonStrToJava(){ // Defines a string in two different formats String objectStr="{\"name\":\"JSON\",\"age\":\"24\",\"address\":\" Xicheng district, Beijing \"}"; String arrayStr="[{\"name\":\"JSON\",\"age\":\"24\",\"address\":\" Xicheng district, Beijing \"}]"; //1 , the use of JSONObject JSONObject jsonObject=JSONObject.fromObject(objectStr); Student stu=(Student)JSONObject.toBean(jsonObject, Student.class); //2 , the use of JSONArray JSONArray jsonArray=JSONArray.fromObject(arrayStr); // To obtain jsonArray The first 1 An element Object o=jsonArray.get(0); JSONObject jsonObject2=JSONObject.fromObject(o); Student stu2=(Student)JSONObject.toBean(jsonObject2, Student.class); System.out.println("stu:"+stu); System.out.println("stu2:"+stu2); }
The printed result is:
stu:Student [name=JSON, age=24, address= Xicheng district, Beijing ] stu2:Student [name=JSON, age=24, address= Xicheng district, Beijing ]
Can be seen from the above code, the use of JSONObject can easily put JSON format string into java object, but using JSONArray is not so easy, because it has "[]" sign, so we here after getting JSONArray object, take the first element is the one we need student deformation, and then use the JSONObject won easily.
2. Interconversion of list and json strings
list-- "" json string
public static void listToJSON(){ Student stu=new Student(); stu.setName("JSON"); stu.setAge("23"); stu.setAddress(" Haidian district, Beijing "); List<Student> lists=new ArrayList<Student>(); lists.add(stu); //1 , the use of JSONObject //JSONObject listObject=JSONObject.fromObject(lists); //2 , the use of JSONArray JSONArray listArray=JSONArray.fromObject(lists); //System.out.println("listObject:"+listObject.toString()); System.out.println("listArray:"+listArray.toString()); }
I've just written out the way JSONObject is used, so let's look at the results before the comments,
Exception in thread "main" net.sf.json.JSONException: 'object' is an array. Use JSONArray instead
I was told that there was an exception. I found that when using fromObject method, I would judge the parameter type first. Here, I was told that the parameter passed in is 1 array type, because ArrayList is used.
listArray:[{"address":" Haidian district, Beijing ","age":"23","name":"JSON"}]
The result is normal.
json string -- "list
As you can see from the above example, list objects can only be converted into array objects, so let's look at the following string conversion to list.
public static void jsonToList(){ String arrayStr="[{\"name\":\"JSON\",\"age\":\"24\",\"address\":\" Xicheng district, Beijing \"}]"; // into list List<Student> list2=(List<Student>)JSONArray.toList(JSONArray.fromObject(arrayStr), Student.class); for (Student stu : list2) { System.out.println(stu); } // Convert to an array Student[] ss =(Student[])JSONArray.toArray(JSONArray.fromObject(arrayStr),Student.class); for (Student student : ss) { System.out.println(student); } }
Print the result,
00
[{"name":"JSON","address":" Xicheng district, Beijing ","age":25}]// Data object format
Due to the format of the string as "[]" format, so this choice JSONArray this object, it has toArray, toList methods available, the former is converted into an array of java, or into the list java, because there are corresponding to the entity class, so when use specifies the generic type (Student. class), so that you can get into the object.
3. Interconversion of map and json strings
map-- "" json string
11
[{"name":"JSON","address":" Xicheng district, Beijing ","age":25}]// Data object format
Print the result,
22
[{"name":"JSON","address":" Xicheng district, Beijing ","age":25}]// Data object format
Two forms are printed on it.
json string -- "" map
The JSON string cannot be directly converted to the map object. You need another way to get the corresponding value of the key in map.
public static void jsonToMap(){ String strObject="{\"first\":{\"address\":\" Shanghai, China \",\"age\":\"23\",\"name\":\"JSON\"}}"; //JSONObject JSONObject jsonObject=JSONObject.fromObject(strObject); Map map=new HashMap(); map.put("first", Student.class); // Using the toBean Method, need 3 A parameter MyBean my=(MyBean)JSONObject.toBean(jsonObject, MyBean.class, map); System.out.println(my.getFirst()); }
Print the result,
44
[{"name":"JSON","address":" Xicheng district, Beijing ","age":25}]// Data object format
Here's the code for MyBean,
55
[{"name":"JSON","address":" Xicheng district, Beijing ","age":25}]// Data object format
Using the toBean() method is passed in three parameters, the first is an JSONObject object, the second is an MyBean.class, and the third is an Map object. It can be known from MyBean that there should be 1 attribute of first in this class, and its type is Student, which corresponds to the key and value types in map, that is, first corresponds to the key and value types in first. | https://ofstack.com/Java/21151/examples-of-transformations-between-json-and-objects-and-collections.html | CC-MAIN-2021-39 | en | refinedweb |
Sets the minimal score (maximal errors) allowed during an distance computation e.g. edit distance. More...
#include <seqan3/alignment/configuration/align_config_min_score.hpp>
Sets the minimal score (maximal errors) allowed during an distance computation e.g. edit distance.
This configuration can only be used for computing the edit distance. It restricts the number of substitutions, insertions, and deletions within the alignment to the given value and can thereby speed up the edit distance computation. A typical use case is to verify a candidate region during read mapping where the number of maximal errors is given beforehand. If this configuration is used for an alignment algorithm that does not compute the edit distance, a seqan3::invalid_alignment_configuration exception will be thrown.
Initialises the minimal score. | https://docs.seqan.de/seqan/3-master-user/classseqan3_1_1align__cfg_1_1min__score.html | CC-MAIN-2021-39 | en | refinedweb |
Delete By Query¶
The delete by query API allows to delete documents from one or more indices and one or more types based on a query. The query can either be provided the Query DSL. Here is an example:
import static org.elasticsearch.index.query.FilterBuilders.*; import static org.elasticsearch.index.query.QueryBuilders.*; DeleteByQueryResponse response = client.prepareDeleteByQuery("test") .setQuery(termQuery("_type", "type1")) .execute() .actionGet();
For more information on the delete by query operation, check out the delete_by_query API docs. | https://pyes.readthedocs.io/en/latest/guide/reference/java-api/delete-by-query.html | CC-MAIN-2021-39 | en | refinedweb |
As you are aware now, that constructors are called when an object is created, so static constructors are used to initialize any static data. It is called automatically before first instance of class is created.
It can only access the static member(s) of the class. Static constructor cannot be parameterized constructors, means they cannot have paramters inside it.
Also, each Class can have only one static constructor only, while we can have multiple paramterized constructors.
Syntax
public Class Class_Name{ static Class_Name(){ //do something with static members. } }
One more things to note about static constructor is that, static constructors cannot have access modifiers.
Let's take a look at an example of static constructor.
using System; public class A { static A() { Console.WriteLine("Static A Constructor"); } public static void display() { Console.WriteLine("Display the Static Method of A Class"); } } public class B { static B() { Console.WriteLine("Static B Constructor"); } public static void display() { Console.WriteLine("Display the Static Method of B Class"); } } public class StaticConstructorExample { public static void Main() { A.display(); Console.WriteLine(); B.display(); } }
Output:
Static A Constructor Display the Static Method of A Class Static B Constructor Display the Static Method of B Class
As you can see in the above example, we have created static constructor and static method "
Display()" for both classes
A and
B, when calling methods of each class, we are not creating any object as methods are also static, also notice Static Constructor is called before method is called and code inside it is executed.
We didn't needed to create object to access static constructor because a static constructor is called automatically to initialize the class before the first instance is created or any static members are referenced.
A static constructor cannot be called directly and the user has no control on when the static constructor is executed in the program. | https://qawithexperts.com/tutorial/c-sharp/30/c-sharp-static-constructor | CC-MAIN-2021-39 | en | refinedweb |
This forum has migrated to Microsoft Q&A. Visit Microsoft Q&A to post new questions.
Does Azure Storage or Azure File provides ReadWrite Stream of a blob, without downloading the blob on a local machine memory or disk. There are some operation on Streams requires Stream with CanRead() and CanWrite() as true e.g. below example on Zip file
stored as Azure blob and I would like to update a file inside a zip file. Here Package.Open() expects an ReadWrite Stream
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using System.IO.Packaging;
using Microsoft.WindowsAzure.Storage.File;
using System.IO.Compression;
using (Package package = Package.Open(zipFileStream, FileMode.Open, FileAccess.ReadWrite))
{
PackagePart part = package.GetPart(new Uri(relativeFilePath, UriKind.Relative));
using (StreamWriter writer = new StreamWriter(part.GetStream()))
{
writer.Write(updateContent);
writer.Flush();
}
}
Blob storage is optimized for storing massive amounts of unstructured data. There was no build in stream of blob to update files in zip.
I would request you to submit an idea at
Feedback.
All of the feedback you share in these forums will be monitored and reviewed by the Microsoft engineering teams responsible for building Azure. | https://social.msdn.microsoft.com/Forums/en-US/b14a9983-c293-416d-95e7-7ea1acb667c5/readwrite-stream-for-azure-storage?forum=windowsazuredata | CC-MAIN-2021-39 | en | refinedweb |
In this example we will use Micropython on an ESP32, the tool I will use is called uPyCraft which makes the task easy
You will need to download the tool first, the latest version I have seen is available from –.
Start the tool and connect your ESP32 board
Open up the Tools and set the Serial port for your ESP32 board
Open up the Tools and set the ESP32 board in the board menu
Parts List
1 x ESP32 board
1 x LED
1 x 470 ohm resistor
1 x breadboard
Connecting wire
Schematic
Code
Insert the code below into the editor and click on the DownloadRun button
import time from machine import Pin led=Pin(15,Pin.OUT) while True: led.value(1) time.sleep(0.5) led.value(0) time.sleep(0.5)
Links
Official DOIT ESP32 Development Board WiFi+Bluetooth Ultra-Low Power Consumption Dual Core | https://www.esp32learning.com/tutorials/micropython-and-esp32-blink-an-led.php | CC-MAIN-2021-39 | en | refinedweb |
Created on 2019-06-25 08:48 by Dima.Tisnek, last changed 2019-07-24 18:49 by scoder. This issue is now closed.
Example:
# mre.py
from xml.etree import ElementTree
XML = "<a>foo<!-- comment -->bar</a>"
a = ElementTree.fromstring(XML)
print(list(a.itertext()))
# Testing 3.7.3 vs. 3.8.0b1; macOS
… ~> python3.7 mre.py
['foobar']
… ~> python3.8 mre.py
['bar']
Bisecting gives me commit 43851a202c (issue36673) before which "foobar" was returned and after the commit "bar" is returned.
Just to add the Python implementation seems to return "foobar" on commenting the C accelerators imports. So I guess it's a problem with the C implementation in the commit being different from Python implementation.
I think it might be this call that strikes here:
treebuilder_flush_data() is not made for concatenating text, it simply replaces it. If both text parts come separately, and the comment between the two is discarded, then the last one overwrites the first one.
Yes that does look suspicious!
I'm working on a patch. It's not entirely trivial, so it might take a couple of days.
New changeset c6cb4cdd21c0c3a09b0617dbfaa7053d3bfa6def by Stefan Behnel in branch 'master':
bpo-37399: Correctly attach tail text to the last element/comment/pi (GH-14856)
New changeset bb697899aa65d90488af1950ac7cceeb3877d409 by Stefan Behnel in branch '3.8':
[3.8] bpo-37399: Correctly attach tail text to the last element/comment/pi (GH-14856) (GH-14936) | https://bugs.python.org/issue37399 | CC-MAIN-2021-39 | en | refinedweb |
Concerns have been raised that the DWF project is causing “confusion” in the community
The board responsible for overseeing the CVE vulnerability identification program has criticized the DWF project for publishing what it says are “unauthorized” CVE records.
The Common Vulnerabilities and Exposures (CVE) system is a widely used program for cataloging and tracking security vulnerabilities.
Due to the countless vulnerabilities found and reported every day, the Mitre Corporation’s CVE board authorizes organizations to act as CVE Numbering Authorities (CNAs) which are permitted to assign CVE numbers for bugs.
Clearing the backlog
The CVE system aims to provide organizations, researchers, and security specialists the means to track and remedy vulnerabilities, with each disclosed security bug receiving a unique identifier.
However, the CVE process has not been without criticism.
As far back as 2016, Mitre came under fire for alleged backlogs in CVE assignments, leading to concerns that the program – called by some a “cornerstone” of the industry – could become less relevant in the fight against cybersecurity threats in the future.
RELATED CNAs and CVEs – Can allowing vendors to assign their own vulnerability IDs actually hinder security?
These complaints led to the creation of the Distributed Weakness Filing (DWF) system.
Co-founded by Kurt Seifried and Josh Bressers, DWF is a community-based open source project described as an effort to “modernize and improve the security identifier ecosystem”.
The DWF project says that the point of the exercise is to address pain points in the CVE assignment process, as well as improving speed, latency, and volume with the assistance of automation.
The CVE program is overseen and operated by Mitre Corporation
Mistaken identity
There are currently 169 registered CNAs worldwide. The DWF was previously a CNA but no longer acts in this capacity.
Additionally, Seifried resigned from his position as a CVE board member in January 2021 due to “a lack of innovation and forward movement”.
DWF has previously said that in order to prevent overlaps between itself and the CVE project, the numbers is assigns to vulnerabilities started in the 1000000+ range in the past – and as legacy CVE assignments top out at roughly 17,000 CVEs per year, clashes and confusion should not occur.
If existing CNAs began publishing 100,000 CVEs or more on a frequent basis, for example, the DWF said it would move to the 2000000+ range to maintain “significant buffer space”.
“This is trivially achieved as the CVE identifier includes the year allowing changes to be made trivially every year,” Seifried says. “This process can of course be repeated assuming legacy CVE assignments grow rapidly.”
However, this does not appear to have placated the CVE board, which includes representatives from numerous cybersecurity vendors, academic, researchers, government departments and agencies, and other prominent security experts.
Confusion in the community
On May 27, the organization said the DWF project is not following established CVE program rules, and therefore, any CVE assignments issued by it are not valid and will not be included in the main CVE list.
The organization claims the DWF has been causing “confusion” in the community by “infringing on the CVE namespace by issuing IDs using the CVE Program syntax in the CVE-2021-xxxxxxx (million) range”.
“The CVE Board wants to make this clear to community stakeholders to eliminate the confusion caused by the unauthorized use of the CVE namespace,” it said.
The issue has not been resolved, and on April 2 a further message from the CVE board, directly addressed to the DWF’s co-founders, was published online.
RECOMMENDED Google Chrome Web Store is ranking suspicious web extensions above popular plugins
The organization claims that the DWF has begun “attempting” to issue CVE IDs via its GitHub repository, and that at least eight records have been pushed. But as the project is not a named or approved CNA, issuing CVE IDs is causing “confusion in the CVE contributor and user communities”.
Furthermore, the group says that this activity – no matter the numbering order used – could “undermine public trust in the entire CVE system”.
“This erosion of trust degrades the CVE community’s ability to provide a free public resource to track vulnerabilities and reduce cybersecurity risk,” the statement reads.
To push the matter from the realm of confusion into the legal, the CVE board also says that issuing unauthorized CVE IDs is a “misappropriation” of the CVE brand, unfair competition, and may allegedly be an abuse of a “registered trademark of the Mitre Corporation”.
The CVE board has invited DWF to re-apply for CNA status, but in the meantime asks that DWF should cease issuing CVE IDs and “rename all current and future IDs that DWF issues”.
The organization told The Daily Swig that the CVE board has been in “direct communication with DWF as well as the broader community regarding DWF’s activities”.
Mitre declined to comment further. Seifried pointed us to the DWF press pack, which was updated at the time of enquiry.
INTERVIEW ‘Being serious about security is a must’ – Apache Software Foundation custodians on fulfilling its founding mission | https://portswigger.net/daily-swig/cve-board-slams-distributed-weakness-filing-project-for-publishing-unauthorized-cve-records | CC-MAIN-2021-39 | en | refinedweb |
I’ve been having an issue with this project where it appears I am stuck in a loop where the code prints out the else statement ("Invalid Move. Try Again) even when I intentionally put in inputs that meet the criteria of the if or elif statements that proceed it.
From what I can see, my code is identical to the solution video. Am I missing something???
from stack import Stack print("\nLet's play Towers of Hanoi!!") #Create the Stacks stacks = [] left_stack = Stack('Left') middle_stack = Stack('Middle') right_stack = Stack('Right') stacks += [left_stack, middle_stack,(i) num_optimal_moves = (2 ** num_disks) - 1 print("\nThe fastest you can solve this game is in {0} moves".format(num_optimal_moves)) #Get User Input def get_input(): choices = [stack.get_name()[0] for stack in stacks] while True: for i in range(len(stacks)): name = stacks[i].get_name() letter = choices[i] print('Enter {0} for {0} moves, and the optimal number of moves is {1}".format(num_user_moves, num_optimal_moves)) | https://discuss.codecademy.com/t/towers-of-hanoi-loop/440135 | CC-MAIN-2019-43 | en | refinedweb |
Creating Document-Centric Applications in Windows Forms, Part 3
Chris Sells
Microsoft Corporation
November 12, 2003
Summary: In this final installment, Chris Sells refactors his original application and splits the application-neutral functionality out into a component that allows for both ease of maintenance and reuse of the document-management functionality in other applications. (12 printed pages)
Download the wfdocs3src.msi sample file.
Recall from the first two parts of this article that we've been building an application for calculating average and annualized rates of return, shown in Figure 1 as an SDI application and in Figure 2 as an MDI application.
Figure 1. The SDI version of the RatesOfReturn application
Figure 2. The MDI version of the RatesOfReturn application
As I mentioned in part 1, building the main functionality of the application was mostly a matter of making use of the DataSet and Windows Forms designer, with the code boiling down into a single, small event handler method. However, saving the user's data to a file and allowing them to open it again, something supported thoroughly in the Microsoft Foundation Classes (MFC), isn't directly supported in .NET. So, I spent the last two articles of my column building this functionality, first for SDI applications, then for MDI application, throwing in some basic shell integration along the way. What I ended up with is a bunch of code that is chiefly concerned with the standard document-management behavior that users are accustomed to seeing in file-based applications. Because this code isn't specific to just this application, but should be packaged to be reusable across applications, it really belongs in a component.
Components
A component is a .NET class that integrates with a design-time environment (like Visual Studio® .NET). A component can show up on the Toolbox along with controls and can be dropped onto any design surface. Dropping a component onto a design surface makes it available to set the property or handle the events in the Designer, just like a control. Figure 3 shows the difference between a hosted control and a hosted component.
Figure 3. Where components and controls are hosted on a form
What makes components useful is that while they're non-visual at run time, they can be manipulated by the developer at design time. For example, dropping a PrintDocument component onto a form allows you to use the Property Browser to set the DocumentName property and handle the PrintPage event, which will cause the Designer to generate code like the following in the form's InitializeComponent method:
void InitializeComponent() { this.components = new System.ComponentModel.Container(); this.printDocument1 = new System.Drawing.Printing.PrintDocument(); ... this.printDocument1.DocumentName = "Rates of Return"; this.printDocument1.PrintPage += new PrintPageEventHandler(this.printDocument1_PrintPage); ... }
Using a pre-built component is a productivity boost. Likewise, building your own code into a component allows you give yourself the same productivity boost the next time you or someone you love needs to reuse your code. This is exactly the right way for us to gather together the application-neutral portions of the document-management code we've been building. And, because our component bundles up file operations in the same way that the PrintDocument component bundles up print operations, we'll call our component "FileDocument."
The FileDocument Component
For maximum reuse, you'll want to build your components into assemblies that are separate from any one application that makes use of them. Visual Studio .NET facilitates this with the Windows Control Library project template, which creates a DLL assembly that we can reference in our application projects. This project template generates a class file that isn't much use for our needs and can be deleted. Providing a much more helpful start on the road to the FileDocument component is adding a Component from the Project menu, which produces a class that derives from the System.ComponentModel.Component class, which is all we need to integrate with the Visual Studio .NET Designers.
After a component assembly is compiled, it can be added to the Toolbox by right-clicking on the Toolbox, choosing Add/Remove Items (or Customize Toolbox in Visual Studio .NET 2002), browsing to the assembly in the file system and making sure that the checkbox next to the component that you'd like to pull in is checked, as shown in Figure 4.
Figure 4. Adding a component to the Toolbox
Once it's there, you can drag it from the Toolbox, drop it onto a form, and use the Property Browser to set the properties as appropriate, as shown in Figure 5.
Figure 5. Managing a FileDocument component in the Property Browser
Figure 5 shows the completed FileDocument component after I migrated the code out of the form and into the component. Now let's see how to put the component to use.
Using the FileDocument Component
Before dropping the FileDocument component onto the main form in the SDI version of the RatesOfReturn application, I first stripped it back to the application-specific code, which consisted of a bunch of Designer-generated code and my single event handler:
public class RatesOfReturnForm : Form { ... void dataView1_ListChanged(object s, ListChangedEventArgs e) {...} }
With the application-neutral code out of there, I dropped an instance of the FileDocument component onto the main form to implement the dirty bit and file name management.
Dirty Bit Management
The file name is managed as part of the implementation of the File menu items, but first the document needs to know when the data has changed so that it can prompt the user about saving at various times (that is, creating a new document, opening an existing one, and closing the window). For this to happen properly, we need to track when the data has changed in our data set and set the document's dirty bit appropriately. We can do this in the data set's ListChanged event:
public class RatesOfReturnForm : Form { ... void dataView1_ListChanged(object s, ListChangedEventArgs e) { ... // Update the dirty bit fileDocument1.Dirty = true; ... } }
Since the FileDocument component knows about the hosting form (using a component trick discussed in the writings listed in the References section), it can update the caption text appropriately to reflect the dirty bit.
File Management
Next, I went back to the Property Browser to set the FileDocument's DefaultExt, DefaultExtDescription, and RegisterDefaultExtension properties. This information is used to set the file dialog properties appropriately for my custom file extension, but also to register it with the shell so that double-clicking on one of the application's custom files causes the application to be launched by the shell. Since a double-click launches a new instance of my application, passing the name of the file to open to Main through the string array arguments, I still need the code in Main that opens an initial file if one is passed in:
static void Main(string[] args) { // Load main form, taking command line into account RatesOfReturnForm form = new RatesOfReturnForm(); if( args.Length == 1 ) { form.OpenDocument(Path.GetFullPath(args[0])); } Application.Run(form); }
The main form still needs to expose a public OpenDocument method as it did before, but it can just pass the file name along to the FileDocument component:
public class RatesOfReturnForm : Form { ... // For opening document from command line arguments public bool OpenDocument(string fileName) { return fileDocument1.Open(fileName); } ... }
When the FileDocument object is first created or it's reused through the New method (as the implementation of the File->New menu item), it fires the NewDocument event, which we can use to set the initial seed data on our form:
public class RatesOfReturnForm : Form { ... void fileDocument1_NewDocument(object sender, EventArgs e) { // Clear existing data ClearDataSet(); // Set initial document data and state this.periodReturnsSet1.PeriodReturn.AddPeriodReturnRow( "start", 0M, 1000M); } ... }
If a file is passed through the command line or we're implementing the File->Open menu item, we'll get the ReadDocument event:); } ... }
Notice that the FileDocument component's ReadDocument event passes along an object of the custom type SerializeDocumentEventArgs, which contains the file name of the document to be read and a stream already opened on that file. This is where our FileDocument component shines. All we need to do is ask the FileDocument to Open and it checks the dirty bit to see if the current document needs to be saved first, prompts the user, saves the document as necessary, uses the DefaultExt to show the file open dialog, gets the file name, updates the hosting form's caption with the new file name, and even puts the newly opened file into the Start->Documents menu. The FileDocument component will ask us to do the small application-specific part (reading the data from the stream) by firing the ReadDocument event at the right time in the process.
In the same way, implementing the File->Save family of menu items is a matter of handling the WriteDocument event:
public class RatesOfReturnForm : Form { ... void fileDocument1_WriteDocument( object sender, SerializeDocumentEventArgs e) { // Serialize object to text format IFormatter formatter = new SoapFormatter(); formatter.Serialize(e.Stream, this.periodReturnsSet1); } ... }
Just like Open, the FileDocument component handles all of the chores of the Save family of operations, including the slightly different semantics of Save, Save As, and Save Copy As. The component also makes sure to change the current file and dirty bit (or not) as appropriate.
Handling the Menu Items
The NewDocument, ReadDocument, and WriteDocument events are called as part of the implementation of the File menu items. You can implement those menu items by handling the menu items in your form and calling the corresponding FileDocument methods:
public class RatesOfReturnForm : Form { ... void fileNewMenuItem_Click(object sender, EventArgs e) { fileDocument1.New(); } void fileOpenMenuItem_Click(object sender, EventArgs e) { fileDocument1.Open(); } void fileSaveMenuItem_Click(object sender, EventArgs e) { fileDocument1.Save(); } void fileSaveAsMenuItem_Click(object sender, EventArgs e) { fileDocument1.SaveAs(); } void fileSaveCopyAsMenuItem_Click(object sender, EventArgs e) { fileDocument1.SaveCopyAs(); } ... }
However, because this code will be the same for most applications, there's no reason for everyone to write it when we've got the magic of components combined with the smarts of Visual Studio .NET. By exposing a MenuItem property for each of the File menu items that the FileDocument component supports, we can simply select the appropriate menu item in the Property Browser from a drop-down list and let the FileDocument component itself handle the menu items, as shown in Figure 6.
Figure 6. Letting the FileDocument handle the File menu items
Notice that File->Exit isn't on this list. That's up to the form to implement:
public class RatesOfReturnForm : Form { ... void fileExitMenuItem_Click(object sender, EventArgs e) { // Let FileDocument component decide if this is OK this.Close(); } ... }
All the main form has to do to implement File->Exit is to do what it normally would in any application—close itself. Since the FileDocument knows which form is hosting it, it can handle the main form's Closing event and let it close or not based on the dirty bit and what the user has to say about whether they want to save their data or not. You don't need to write any special code to make this happen.
Simplified Serialization
In the spirit of further code reduction, let's take another quick look at the ReadDocument and WriteDocument event handlers:); } void fileDocument1_WriteDocument( object sender, SerializeDocumentEventArgs e) { // Serialize object to text format IFormatter formatter = new SoapFormatter(); formatter.Serialize(e.Stream, this.periodReturnsSet1); } ... }
You'll notice that in addition to deserializing the data from the stream in the ReadDocument event handler, we need to do some special things to clear the existing data set and merge the new data in. This is necessary to preserve the data binding settings. However, it's often the case that we don't need to do anything to an object except deserialize it, just like we don't have to do anything special in our case when we're serializing. Towards that end, the FileDocument component exposes the Data and Formatter properties to be used if one or both of the ReadDocument and WriteDocument events aren't implemented.
For our application, this means we need to set the Data property of the FileDocument to the typed data set that we've been serializing by hand and make sure that we're using the same formatter between the FileDocument and the ReadDocument event handlers:
public class RatesOfReturnForm : Form { ... public RatesOfReturnForm() { // Required for Windows Form Designer support InitializeComponent(); // Set up the data and the formatter // so that the FileDocument can write the data // by itself (we still need to read, though) fileDocument1.Data = this.periodReturnsSet1; fileDocument1.Formatter = new SoapFormatter(); } void fileDocument1_ReadDocument( object sender, SerializeDocumentEventArgs e) { // Deserialize object from text format // using the FileDocument's formatter IFormatter formatter = this.fileDocument1.Formatter; PeriodReturnsSet ds = (PeriodReturnsSet)formatter.Deserialize(e.Stream); // Clear existing data ClearDataSet(); // Merge in new data, keeping data bindings intact this.periodReturnsSet1.Merge(ds); } // Let FileDocument component handle WriteDocument itself ... }
The flexibility to handle the serialization events or not allows you to handle a full range of cases, ranging from where the data is completely managed by the "view" of the data (for example, a text editor that manages all of its data in an instance of the TextBox control) to where the data is completely separate from the view a la the classic MFC document-view scenario. In fact, ex-MFC programmers that are so inclined could derive a custom document type from the FileDocument component, put the custom data into this new document type and handle the serialization events itself, providing a good start on duplicating the MFC Document/View model of programming. Once the basics are in place, all kinds of things are possible.
MDI and the FileDocument Component
The MDI usage of the FileDocument component is just as easy as the SDI usage. An MDI child form implements the File->Save family of menu items like the SDI main form does, with the addition of the File->Close menu item, which can also be handled directly by the FileDocument component through a menu property.
The MDI parent form creates new instances of the MDI child form to implement File->New and File->Open, as it did in part 2 of this series:(); } ... }
Likewise, File->Exit is implemented in the MDI parent as it was in the SDI main form by closing the form and letting the FileDocument component judge whether the MDI children can be closed based on the dirty bit and the user's input.
Genghis
As handy as the FileDocument is, it's not quite done yet. For example, I haven't found a model that I like for multiple views on the same data. Likewise, I'm not handling the case where an application understands several file extensions. I find these rare enough in my own world that I haven't needed this support, but other folks, especially those bringing their MFC applications forward, might need it. Towards that end, the FileDocument component is part of the Genghis shared source project (as listed in the References section). You can always get the latest version of the FileDocument from the Genghis project, which may have this functionality added by some kind soul in the future. Even better, you may decide to join the project and be that kind soul.
Where Are We?
I started this series very happy to use the Windows Forms and Visual Studio .NET support for data binding, letting it generate most of the code to provide the main functionality. I then progressed to writing all kinds of code that wasn't specific to my application, but that I needed to get a full-featured application. From there, I progressed to refactoring my application, splitting the application-neutral functionality out into a component so that I could both cleanup my custom code for ease of maintenance and reuse the document-management functionality in other applications. This is a model that I favor heavily. I find that while refactoring in this manner takes some extra time up front, it always give me better, clearer, more maintainable, more robust, and more reusable code. Further, this increases the overall quality of my applications and ultimately saves me time. If that's not magic, I don't know what is.
References
There are a lot of component-specific features that I don't discuss in this article, like hiding some of the properties, and the need for the ISupportInitialize interface and the implementation of the HostForm property. You can learn more about these and other nifty component tricks in the following sources:
- Windows Forms Programming in C#, August, 2003, Addison-Wesley
- Building Windows Forms Controls and Components with Rich Design-Time Features, Part 1, Michael Weinhardt and Chris Sells, 4/2003
- Building Windows Forms Controls and Components with Rich Design-Time Features, Part 2, Michael Weinhardt and Chris Sells, 5/2003
- The Genghis shared source library. | https://docs.microsoft.com/en-us/previous-versions/dotnet/articles/ms951301(v=msdn.10)?redirectedfrom=MSDN | CC-MAIN-2019-43 | en | refinedweb |
Introduction to Qt Quick.
Contents
- 1 Overview
- 2 A short introduction to QML
- 3 Qt Creator
- 4 Getting started++..
QML language. Quick 1.1.. {} ] }t..
import Qt 4.7 6:. Area 7::
import QtQuick 1.1" } }
import QtQuick 1.1 Column { spacing: 10 Button { text: "Apple" } Button { text: "Orange" } Button { text: "Pear" } Button { text: "Grape" } } snippet and screen shot.. Snippet 14 cap that makes this file an accessible component.)
import QtQuick 1.1 14: Defining a List Model in QML
Snippet 15 expand it. Figure 4 shows a typical result.
Figure 4: Implementing an over-sized MouseArea
Integration with C++ applications
Qt Quick comes with its own runtime view.,. 19:.
QDeclarativeContext *context = …; context->setContextProperty("frameColor", QColor(Qt::red));.
Snippet 19DataSet); QDeclarativeComponent component(&engine); component.setData("import Qt 4.7\nListView { model: myModel }", QUrl()); component.create(context);
Snippet 20: interface.
To expose a QAbstractItemModel to QML a context property is used:
QAbstractItemModel *model = …; context->setContextProperty("dataModel", model);
Tech Note: Within Qt,.
class CallableClass : public QObject { -> QML handler
Qt C++ signals can be handled by JavaScript executing in a QML context. For instance, the CallableClass class from the previous example also declares a signal, cppSignal.
class CallableClass : public QObject { Q_OBJECT … signals: void; };
Snippet 21: | https://wiki.qt.io/Introduction_to_Qt_Quick | CC-MAIN-2019-43 | en | refinedweb |
These are chat archives for dry-rb/chat
dry-types, is there an analogue to the Virtus "private setter" idiom, so that one can define an "attribute" with a default (fixed) value?
def my_attr; "static value"; end?
def to_h; super.merge(my_attr: “hi”); endfor now?
Dry::Typesdefinition for a UUID, which would let me write clearer code than
attribute :identifier, Types::Strict::String.default { UUID.generate }. constrained(format: /\A\h{8}(-\h{4}){3}\-\h{12}\z/ )
module Types UUID = Types::Strict::String.defaults { UUID.generate }.constrained(…) end
#callinterface, perhaps an example of actually switching out a dependency to show that as long as we adhere to the same interface the only code we need to change is registering the object?
register(:user_validator) { Users::Validator.new } register(:user_validator) { Some::Other::Validator.new }
#call(hash)we're golden
ResultObject #call(hash)
Any implementation that wants to behave as low-level “article validator” or “article persister” module must now adhere to the interface expectations set here in this high-level module.explains the problem away, but might not be immediately obvious to less experienced programmers
modulecould confuse people there
For example, we might decide that our original validator implementation is no longer what we need, and we now want to validate each article by submitting it to a web service for bad pun detection. We can create a
BadPunValidatorthat responds to
#call, register it with the container as
validate_article, and now we have a completely different validation approach that continues to work with the high-level code.
Our higher-level modules are now more reusable because they’re no longer couple=>
Our higher-level modules are now more reusable because they’re no longer coupled
dependency inversion principleto
I have this module
module UniquePredicate
include Dry::Logic::Predicates
predicate(:unique?) do |value,input|
find_by(input => value)
end
end
and this validation
require 'json'
require 'dry-validation'
include UniquePredicate
UserSchema = Dry::Validation.JSON do
configure { config.messages_file = 'config/validations/errors.yml' }
key(:username).required(:unique?)
key(:name).required
configure do
config.predicates = UniquePredicate
end
end
require 'dry-validation' User = Struct.new(:username, :name) class UserRepository USERS = [] def create(attributes) USERS << User.new(*attributes.values_at(:username, :name)) end def find_by_username(username) USERS.find { |user| user.username == username } end end UserSchema = Dry::Validation.JSON do configure do option :finder config.messages_file = 'config/validations/errors.yml' def unique?(username) finder.call(username).nil? end end required(:username).required(:unique?) required(:name).required end user_repo = UserRepository.new user_repo.create(username: 'john', name: 'John') UserSchema.with(finder: user_repo.method(:find_by_username)).call( username: 'john', name: 'John' ) UserSchema.with(finder: user_repo.method(:find_by_username)).call( username: 'jill', name: 'Jill' )
=> #<Dry::Validation::Result output={:username=>"john", :name=>"John"} messages={:username=>["must be unique"]}> => #<Dry::Validation::Result output={:username=>"jill", :name=>"Jill"} messages={}>
Types::Form::Stringbut that doesn’t appear to exist anymore | https://gitter.im/dry-rb/chat/archives/2016/05/09?at=57306114c2a86dcf791976b9 | CC-MAIN-2019-43 | en | refinedweb |
Introduction.
This is a tutorial for searching a element in array in Java. The program is given below that takes the array from user and a number to be found as input and search it and print the position of the element in array. Go enjoy the program. Lets begin…
Program for Searching a Element in Array.
//import Scanner as we require it. import java.util.Scanner; // the name of our class its public public class SearchArray { //void main public static void main (String[] args) { //declare int int a[] = new int[10],no; //declare scanner object. Scanner input = new Scanner(System.in); //a loop to enter elements of array. for(int i=0;i<10;i++) { System.out.println("Enter number:"); a[i] = input.nextInt(); } //input a number to find. System.out.println("Enter the number to be searched:"); no = input.nextInt(); //find the number. int i; for(i=0;i<10;i++) { if(a[i]==no) break; } //print the output. if(i==10) System.out.println("No. not found"); else System.out.println("No. is at "+(i+1)+" Position"); } }
Output
Enter number: 1
Enter number: 2
Enter number: 3
Enter number: 4
Enter number: 5
Enter number: 6
Enter number: 7
Enter number: 8
Enter number: 9
Enter number: 10
Enter the number to be searched: 9
No. is at 9 Position
How does it work
- You enter the array and number to be found.
- The number is searched.
- The result is printed.
Extending it
The program cannot be extended.
Explanation.
- Import the Scanner.
- Declare the class as public
- Add the void main function
- Add system.out.println() function with the message to enter number.
- Declare input as Scanner.
- Take the array and and number and save it in variables.
- Add a loop and search the array.
- Print the result.
At the end.
You learnt creating the Java program for Searching a Element in Array. So now enjoy the program.
Please comment on the post and share it. | https://techtopz.com/java-programming-searching-an-element-in-array/ | CC-MAIN-2019-43 | en | refinedweb |
change the Icon or the Title property of the master detail page whenever the user has some communication to read. When I change the Icon or the Title the code run smoothly, but to have the changes rendered in the view I need to open and then Close the Menu.. Here is the code that I'm using...
Firstly the menuView, AKA the master page for the master detail page
public class MenuView : ContentPage
{
public Action<MenuItem> MenuItemSelected;
private MenuViewModel _viewModel;
public MenuView()
{
NotificationsHandler.UpdateAppIconBadge ();
Title = "Notifications"+"("+ AppContext.NotificationNumber+")";
_viewModel = new MenuViewModel (this)
BindingContext = _viewModel;
var header = SetupHeader();
var menuItems = SetupMenuItems();
var listView = SetupMenuList(menuItems);
Content = new StackLayout {
Spacing = 1,
Children = {
header,
listView
}
};
}
}
And here is the ViewModel for it:
public class MenuViewModel : ViewModel
{
MenuView _view;
public MenuViewModel (MenuView view)
{
_view = view;
_view.Title = "Notifiche"+"("+ AppContext.NotificationNumber+")";
MessagingCenter.Subscribe<CommunicationHelper> (this, Messages.NotificationNumber, (sender) => {
_view.Title = "Notifiche"+"("+ AppContext.NotificationNumber+")";
});
}
}
As you can see I'm using the MessagingCenter to trigger the event... In my case the event is triggered whenever I receive a RemoteNotification
As I said the problem is that the Title (the result is the same if I try to change the Icon) does not change when I change the property.. But only after I open and close the menu. Even using the Binding the result is the same.
@Davide
I have tried to reproduce this issue but not able to reproduce this issue.
Could you please provide us a sample project? so that I can reproduce this issue at my end efficiently.
Thanks.
Created attachment 11905 [details]
Example of code replicating the error
The error is not replicable on emulator.. Only on device..
@Parmendra Just to be clear, the code that I provided does not work in any ambient, device or emulator but both have different behaviour. In the emulator the Icon is not changed at all, never. Meanwhile in a device the Icon changes when opening or closing the Menu.
@Davide
I have checked this issue with attached sample project in comment #2 and I am getting blank screen on both emulator and device.
Screencast:
Please check the screencast and let me know if I have missed anything.
Thanks.
@Parmendra The behaviour that I want:
the icon on top left (the 3 gray bar ) after clicking on the button should become the image NotificationBadge.png. But this does not happen.
What happens instead for iOS:
In iOS in the device if you click the button change icon nothing will happen, but if you open then close the menu the icon will change becoming the NotificationBadge.png.
On Android nothing at all happens.
Any news?
Thanks @Davide
I have rechecked this issue and observed that after clicking on the button should not become
Please ignore comment #7
Thanks @Davide
I have checked this issue and observed that after clicking on the button it doesn't changed
Should be fixed in 1.5.0-pre1
I have checked this issue with Xamarin.Forms 1.5.0-pre1 and its working fine at my end.
Hence closing this issue.
Yeah one thing, why the image is blue? Can't it be of the native color?
Also there is a little problem in how it is solved, it doesn't work in my solution... I reproduced the error case, do you read me here and I can post it here or do I need to open a new bug?
@Davide: The reported issue has been FIXED.
Please file a new bug for the behavior you mentioned in comment #11 and comment #12.
Thanks.
@Davide is blue in iOS? That's the default tint color, you can change it by setting BarTextColor on your NavigationPage | https://xamarin.github.io/bugzilla-archives/31/31664/bug.html | CC-MAIN-2019-43 | en | refinedweb |
Current Version:
Linux Kernel - 3.80
Synopsis
#include <linux/hdreg.h> /* for HDIO_GETGEO */ #include <linux/fs.h> /* for BLKGETSIZE and BLKRRPART */
Configuration
SCSI disks have a major device number of 8, and a minor device number of the form (16 * drive_number) + partition_number, where drive_number is the number of the physical drive in order of detection, and partition_number is as follows:
partition 0 is the whole drive
partitions 1-4 are the DOS "primary" partitions
partitions 5-8 are the DOS "extended" (or "logical") partitions
- HDIO_GETGEO
-read of the SCSI disk partition tables. No parameter is needed.
The SCSI ioctl(2) operations are also supported. If the ioctl(2) parameter is required, and it is NULL, then ioctl(2) will fail with the error EINVAL.
Files
/dev/sd[a-h][0-8]: individual block partitions
Colophon
License & Copyright
sd.4 Copyright 1992 Rickard E. Faith ([email protected] | https://community.spiceworks.com/linux/man/4/sd | CC-MAIN-2019-43 | en | refinedweb |
1.037 2019-07-23 Released-By: PERLANCAR; Urgency: medium - Add to builtin skip list: Shipment-3.02.tar.gz (segfaults Compiler::Lexer 0.22). 1.036 2019-07-11 Released-By: PERLANCAR; Urgency: medium -. - scripts: - [ux] related-mods: Instead of specifying min_score, let users just specify number of result they want; hide score-related fields by default in result unless user specifies --with-scores. 1.011 2016-10-14 Released-By: PERLANCAR - authors-by-rdep-count: fix SQL again, derp (didn't really count the number of dists). 1.010 2016-10-14 Released-By: PERLANCAR - authors-by-rdep-count: fix SQL which caused duplicated author rows, include author name in the result, only count latest distributions, add options --exclude-same-author. - [doc] Add Synopsis and Description for lcpanm. 1.009 2016-10-09 Released-By: PERLANCAR -] mods: Add shortcut options -x (for --query-type=exact-name), -n (for --query-type=name), -N (for --namespace). - [ux] authors, dists, rels: Also add shortcut options -x & -n. - [ux] [completion] mods, dists, authors, rels: Do completion on query when query type is exact name. - [ux] [completion] Don't do completion when word is empty, to avoid slow response. 0.81 2016-03-02 Released-By: PERLANCAR - doc: Encoding fixes (use pod2man -u option). -. - mods: Add options --include-core, --include-noncore. -] - deps: Assume 'perl' is a core dependency instead of non-core. 0.79 2016-02-19 Released-By: PERLANCAR - Add subcommand: related-mods. - [Optimization] mentions: No need to include both clause when type=known-module or type=unknown-module. -. - dists: Forgot to update field name. - 10 to support subroutine indexing and force reindexing of content/script/mention. [Enhancements] - New subcommands: sub, subs-by-count, script2author, mod2author, dist2author. 0.77 2016-02-17 Released-By: PERLANCAR - No functional changes. - [Bugfix] mods: fix Rinci argument specification for 'sort'. 0.76 2016-02-16 Released-By: PERLANCAR - [Bugfix] dists: release name was shown instead of size in rel_size result field. 0.75 2016-02-16 Released-By: PERLANCAR - dists: Add options --sort, --rel-mtime-newer-than. 0.74 2016-02-16 Released-By: PERLANCAR - Add subcommand: stats-last-index-time. This is a faster version of 'stats' for App::lcpan::Call by only returning the result that is needed. - stats: Add total_filesize. - doc: Add option --html to open browser and show the HTML documentation. 0.73 2016-02-14 Released-By: PERLANCAR - doc: Search .pod using content path to be able to show POD when there is no package declaration in the .pod. 0.72 2016-02-14 Released-By: PERLANCAR - [Bugfix] doc: Fix SQL error when name contains /\.(pod|pm)$/. - mods: Add result field rel_mtime. -. - script2dist: Add option --all. - script2rel: Accept multiple arguments. - dists: Add options --sort, --has-multiple-rels. - authors-by-*: Add percentage result field. -] deps: Change default --include-core to 1. - copy_*, extract_*: Check that release file exists. 0.65 2016-01-23 Released-By: PERLANCAR - [Bugfix] deps: Dependency to unindexed modules was not shown. - Use authors/00whois.xml (produced by e.g. OrePAN) when authors/01mailrc.txt.gz does not exist. - [doc] Update mini CPAN current size. - mods: (PERLANCAR) (It's *this* Christmas!) - Replace --exact-match with the more general --query-type to be able to search certain fields only or exact match on certain fields only. - - stats: Add num_authors_with_releases. - [dist] Bump version of IPC::System::Options to fixed 0.22 version. 0.55 2015-10-20 Released-By: PERLANCAR - - dists: Add filtering options --has-{makefilepl,buildpl,metayml,metajson}. 0.34 2015-05-15 Released-By: PERLANCAR - mod2dist: Accept multiple modules. - - rdeps: Implement recursive (-l/-R). - deps: Ordering when in -l/-R mode. - deps: Optimize query (avoid retrieving module ID's twice when in recursive mode). - [Bugfix]. - mods-by-rdep-count: Add filter options --phase & --rel. 0.22 2015-04-15 Released-By: PERLANCAR -. - update: - mods-from-same-dist: Accept multiple modules. 0.17 2015-04-14 Released-By: PERLANCAR - Add command: mods-from-same-dist. - command:. | https://metacpan.org/changes/release/PERLANCAR/App-lcpan-1.037 | CC-MAIN-2019-43 | en | refinedweb |
- 02 questions a bout imageList
- Profiling & performance testing
- Unselecting all rows in a grid
- Web Services Help: Can you choose a Web Service at run time?
- Spawning dialog boxes from dlls
- Optional parameters
- DataGrid selected rows
- Dhcp Server API
- Process
- How do you convert a VB.net Proj to C#?
- working with reusable business objects
- message boxes
- COM Object setting a property problem
- tabs or spaces probleme...
- Converting string or int to an Enum
- xsl transform
- HELP ME (Images)
- What does m_ represent?
- Q: Ordering/Sorting [Category] text in a PropertyGrid
- ExecuteReader() v. ExecuteNonQuery()
- Development Practice: multiple assemblies
- Maximized windows form inside parent form
- Can't view DataGrid in design mode
- create a non-security group in AD wont work!
- removing Tab
- KeyDown event
- using inside or outside my namespace
- change fore color of single cell in a datagrid
- data concurrency
- MDI dbt
- SQL provider
- Reflection emit consturctor problem.
- SocketException
- Is it a bug? Event handler: PictureBox on Form.
- Best way to store Key and IV
- DataGrid: How to expand columnwidth to fit column contents
- vbTab?
- struct to byte[] conversion
- Conversion of byte[]
- Calling unmanaged C++ DLLs
- MouseDown Event
- using process class
- ComboBox DrawItem
- DHTML editing control C# problem
- Passing Parameters to Console Application
- Error when comparing a date in an access database
- Basic ASP.NET problems
- MeasureString
- Drag-N-Drop from MS Outlook
- Attaching a process Programatically
- Passing NULL value to a DLL
- UdpClient throws Socket Exception on subsequent Send calls
- Com Variant Marshalling Question
- Socket.Select List Limit
- ComboBox - change textbox control
- COM interop problem.
- creating color object
- How do I get info regarding the calling method
- C# enum
- VS.NET repositions code
- string 2 date conversion
- Handle to a window
- Flash and Alpha in windows forms
- Excel RowCount
- BIG PROBLEM! Debugger hanging on sql server code debugging
- HSV to RGB
- Data Validation
- Function name
- System.Runtime.InteropServices.SEHException
- urgetnt
- HELP ! URGENT
- New GREEEATE Tool for compononet developers
- InputPanel
- using System.Management
- PC-PJ2-X3 mediums
- How test if System.Object's are same type and equal?
- name.exe.config is getting killed by 2003?
- How do you prevent garbage collection?
- Why do I keep getting a warning that a reference cannot be overwritten when I compile?
- New .NET related news portal
- add reference programmatically
- How best implement ASP.NET Forms Authentication?
- Dialing number and sending text message?
- statically link assemblies
- treeView afterLabelEdit deadly embrace
- SMTP- Email to Fax
- Checkbox Problem
- Listbox handle - crash...
- dataset update through webservice
- How to instantiate an object using a string in C#
- ?? Array of Pen
- DeviceIoControl to compress files
- how to use Frames with C#
- Web user control - Server.execute
- DataGrid Retrieval?
- Listbox problems
- Combobox populating Value ??????????
- Captionless window
- what way to get path of gacutil?
- Magic Library Docking problem
- Property Assesor
- a question about communication between C# and C++
- Why in the world...
- RichTextBox.CanUndo()
- Application Path of Parent EXE
- How to get Application Path?
- Class Design Question
- How do I determine the exe filename at runtime?
- Suspended process creation from c# app?
- Converting Regular numbers to Roman Numerals
- Using CreateWindowEx API to create a RichTextBox
- Thread
- WMI Events Help
- Simple question : how to strip carriage returns from a string?
- static & abstract
- csc /lib compiler option (VS.NET 2003)
- Thread Exception
- How to disable a form being resized
- Inherited form with DLLImport
- Inheritance sort question
- [DataGrid:HyperLinkColumn] How to make HyperLinkColumn with two datafields?
- DataGrid and the "Enter" Key pressed...
- Binding Data at Design Time
- instance reference; qualify it with a type name instead
- Fetching the name of a class
- OdbcParameter problem
- Retrieving clicked text with .net?? is it possible??
- design-time errors - user control
- C#( Is it possible to add the Enum in dynamically?)
- Get description of the computer on the network
- Repair Database Access
- C++ and C# communication
- line feed and tab
- batch job for mailing out hundreds of mails....
- Grid data group
- C# terminology questions
- Help with Windows DataGrid and DataBinding (Urgent)
- User Controls functions
- myPen.Width does not take 2 param
- How to create smaller image from a bigger image - please help !
- datetime culture problems
- Run Office Macros by Using Automation from Visual C# .NET
- computer name
- How to run message pump in .NET?
- VB InputBox
- how to use C# dll files in the C++.net??
- Event Sink
- How to create a Pluggin
- Querry Problem
- problem with C# lock and windows2000 callback
- ASCII encryption
- Deployment - Pre load
- BindingContext between Query Table
- Check if client Socket is Disconnected
- zoom in PictureBox
- string to Point
- Persisting Classes
- Tmer
- Customized CheckBox
- clone
- NAT
- FormTitle
- dragurl
- Displaying images from a database
- Please help. How to create a small image from a big image - Any Suggestions Please ?
- C#
- Screen scraping data from windows app - help needed
- String search in ComboBox
- byte[] ba.GetHashCode() -- Curious result!-)
- how to use the 'description' attribute in c#
- predefined preprocessor macros
- HELP : PropertyGrid and Objects
- 'object' does not contain a definition for 'MyArrayList'
- What's all this WndProc stuff?
- Get shell icon from file extension
- System.Net.CredentialCache
- how to use C# dll file in the c++ 6.0?
- XML2HTML with multiple xslt template files!
- Get current user name from system
- Threading question
- Drawing an arrow
- Events in dynamic loaded assembly
- CRC64
- How to transform a Struct into a byte array
- COM failure (Interop PIA)
- Computational Cluster with C# and .Net
- SQLConnection - probably simple
- static class
- serious encoding problem
- Call a DLL from C# with structure of callback routine
- Unsure about destructor & IDisposable for C#
- Interface with indexer
- How can i change System Date?
- Asc in C#?
- Data marshaling between unmanaged C++ & C#
- Data grid new column's default value
- Visual Studio resx files
- need urgent help !
- Activator.CreateInstance problem
- all key events not registering
- Index out of range
- So many comparison methods,But why
- How can I get the checkbox value in treeview.
- How come DataGrid does not accept NumPad minus "-"
- how to shrink a GraphicsPath object.
- Problems with function arguments - need help!
- Reference Types Performance MS Peeps please sound off!
- How to Clip a portion of an Image
- programmatically creating powerpoint macro
- Sharing Violation on XSL Transform Output File
- How to specify the dimensions of a panel
- problem with C# lock and windows2000 callback
- Remoting Bitmaps and Sound files
- Focus on a custom listview
- Delete of file in use
- closing webform
- C# windows programming
- Windows service
- Download file from web page.
- (proverbially) Throwing Myself to the C# Wolves
- ctx sensitive help in C#
- Grokking Active Threads
- C# Interfaces in C++?
- Lists
- WEB SERVICE HELP
- Customized CheckBox
- C# and ASP.Net beginner guides/sites?
- Changing line style makes line drawing very slow
- IDBDataInitialize.GetInitializationString
- How to throw an executable without throwing a dos windows
- Help with Media Encoder 9, examples don't compile
- How to define STATIC event handle by CodeDom
- Persistence layer for C# .Net
- Command TimeOut Data Access Application Block
- Closing a Windows Form application
- Memory/performance efficient collection with fixed size and supporting several types?
- Covert to typeB from typeA
- delegates and type safety !
- Repeat DataGrid?
- How to retrive target file from a shorcut icon using C#
- fLICKERING
- FlushFinalBlock() method was called twice on a CryptoStream
- How to get file Information
- Disable Ctl+Alt+Del C#
- Fractions
- Implement IList Interface
- Retrieving user from process list
- construct HTMLDocument
- Casting as Base Class on Deserialize ?
- Log4Net
- Help about Exchangen
- Drawling custom listview column headings
- OleDbCommand Problem
- XML Error help
- How To Freez DataGrid Columns
- Accessing a Listbox from another Class?
- xpath query with namespace
- New edition of Troelsen's book
- Capture the HTTP request header?
- Am I doing something wrong? (Variables)
- Windows messages between a console application and unmanaged DLL
- automatic refresh when page processes event or method
- an unmanaged app with C#
- Object To Variant
- Can't read XML document and parse into dataset
- Add code to a specific event?
- add a custom control icon to the tool box?
- String parameter in WHERE clause not providing exact matches.
- Initialising variables inside or outside contructor
- Declare a user32.dll function to accept an Array
- passing an object from one application to another..
- AutoScrollPosition, bug ?
- Conditional Compilation/Conditional Constants
- Databinding data to an asp:repeater
- System.IO Copy() and Move() question
- Override vs Events
- Custom ComboBox -- Easy or Not?
- Microsoft.Office.Interop.Outlook - Registering Eventhandlers
- OleDbCommand Problem
- removing special characters
- Click event seems to fire multiple times
- C# Client to acess a WebService in Java
- Use other class member function in same namespace
- MouseWheel & ScrollBars
- out parameter for arrays, copy function
- CeSetUserNotificationEx troubles
- XSLTransform class cannot handle QNames
- data binding in textbox
- changing datagrid column width dynamically
- Databinding for datasets (more than one table)
- assembly loading problem
- Using a check box in a datagrid
- Inheritance design question
- Nested buildfiles with Nant.
- override ++ operater and Compiler problem when casting?
- textboxs
- Java vs. .Net support for Oracle
- MVP....
- Help with references in web project (Web Form Designer failed to load)
- Bind web forms datagrid to Dataset??
- Windows service onStart event
- Embedding fonts into a Crystal PDF export
- Send output using Console.Write.
- GetReferenceAssemblies() Issue
- Owner drawn tray context menu
- DataGridTableStyle in ASP.NET?
- File Name(Word)
- nmake
- language pack - how to 'activate' it in an application
- configuration file is missing
- Name of File(Word)
- string class - memory reallocation
- Update Cookie
- System.InvalidCastException: Specified cast is not valid
- Windows service
- ReadFileEx callback doesn't work - simple question
- SelectedValue in listbox
- Button Border
- [BUG]VS.NET 2003/C# locks up and locks up windows shell when entering breakpoint
- regd font
- TabbedGroups in Magic Library
- WebBrowser interop - legal to distribute wrapper classes?
- FontDialog
- Getting VScrollBar properties from within an OnPaint method
- conevrt dbf file to sql server 2000
- How to select root node in TreeView?
- PRB: Enter and leave events fire only once.
- PRB: Exception can only be caught once???
- Plugin Application Help
- web browser control customization help
- c# web server with persistent data
- Resolve Hostname not working...
- Book recommendation
- Web Service debugging ?
- Hashtable.Item property
- ASP.NET user roles
- .NET and COM
- Exception Handling and Garbage Collection
- Globalizaton:
- Enum.GetNames
- non-overridable operators...
- How can I register a dll (COM) developed by C#? Desperately asking for help
- Page Filter in C#
- How to check if a Mutex is already locked.
- converting dates
- Hiding Files ?
- want to read a text file in cab file....
- DataColumnStyle question
- Help! Debugger reporting System.IO.FIleLoadException.
- "ArrayList does not contain a definition for Item" - Tell me I'm being stupid
- Form minimize
- Connecting to Oracle8i with ODBC?
- PDF sdk......
- Performance Counters
- Exposing form controls to other classes
- Programming MMC using C# question?
- Not Random....
- load safe
- AxImp and OCX
- Get member name
- Image Resource
- Dispose GDI+ object
- question about web.config contents
- debugging web application from Internet Explorer
- c# program file association
- MS Access database Help
- <example> tag doesn't work
- client hosted remote objects
- XML comment problem with [ClassInterfaceAttribute(...
- Why isn't the If's statement, after the expression, being executed?
- X509Certificate
- Custom Collection Confusion
- adding a reference path manually
- Displaying Contents of ArrayList
- C#(Quit the application) - shift+Ctrl+Q
- C++ Wmi App to C# Wmi App - System.Mgmt Access
- Real Full Screen of the VS IDE
- 5 seconds to load program
- Random...
- Algorithm for building category and LDAP membership
- Keyboard input
- Deserialization
- dll problem... please help.....
- Rendering HTML without using MSHTML/IE
- Linked servers in SQL server 2000
- Get the name of the calling function of an executing function
- flash player
- Memory usage. ( H E L P ! ! )
- special treenode text?
- How to move to next node in TreeView?
- Grabbing tables and databases from SQL2000 using C#
- Spreadsheet Component in Academic Version
- Filtering a Custom Collection of business Objects - C#
- CheckBoxList Question - C#
- Did this not make sense ?
- Using decimal type for money
- doc'ing intended const-qualification of references
- string to System.DateTime conversion
- ConfigurationSettings.AppSettings.Get("value")
- Form submit error
- Access serial port on a client via web?
- Open console from windows app
- Performance comparision of C# and Java
- .NET: garbage collection for data type returrned from COM Interop
- Control for InputBox in C#
- Reading Excel using Odbc
- Globalization and Localization Resources anyone?
- Combination or Treeview & ListView - your opinions
- Debugging
- .DataFiles and .ReportFileName with Crystal Report 9
- distributed app book?
- inherit DataColumnCollection
- Comp.std.csharp newsgroup
- threadpool
- Default stored proc parameters?
- Exchange Event Sink
- Can C# count?
- moving web application to IIS web server on PC without .Net
- Strategy in developing Modular application
- Preferred coding style
- Dynamically Resize Text
- WebDAV .NET
- giving a user ALL_ACCESS to a file/directory
- ASP.NET with Outlook
- Dynamic Contorls & Classes
- Regex for HTML
- 2nd Request on error "Unable to Start Debugging on the Web Server."
- Managed code with C# and VC++
- saving Oulook.MailItem objects
- How to 3-D a Pie Chart?
- Lost Post
- dll files
- scrollbar invalidation on change
- Using SendInput to emulate *double* click
- Expand datagrid
- Accessing a file on the CD?
- How to load the assembly at runtime in ASP.NET application.
- Recognising a Tab click
- Windows media station control
- timer question
- ScrollWindowEx
- Help: How do I load a raster font?
- WebControls, FreeTextBox & Dynamic Creation
- hex byte array to integer
- Wierd Windows Service Errors
- Unknown connection option in connection string: provider.
- How to position the scrollbar at the last ligne writen in the listView
- Class accessing form objects
- MDI form & Keydown event problem
- UtilityLibrary ReBar problem
- MouseEventArgs
- Preventing Key/Mouse Events from Interruping Processing
- How to call PHP code
- how to get the variable from page_load
- Datagrid help...
- GDI+ : Simulate AutoRedraw
- Random Filenames
- COMS Virtual Cable
- GDI+ : Simulate AutoRedraw
- Bad design of C# collections framework. Where is the "Set" collection?
- Setting up Text Boxes
- about C# and web probramming
- GraphicsUnit.Point and GraphicsUnit.Display
- Reload query Not the total dataset
- Outlook Express - style tree datagrid
- COM+ bulit on c#
- Compression Library
- Visual Inheritance Using Winforms
- .NET runtime disk footprint
- programmatically select a listview item...???
- Multithreading, DataTables, and Garbage Collection
- MFC => C# Problem, Dynamic Views
- Casting : example in VB.NET
- How to change working dir in a NT service app?
- Casting to a derived TreeNode object (from a TreeViewEventArgs Node)
- Submit a HTML Form programmatically
- Book on Forms
- Splitting a string
- inheriting from a base class and using an interface
- Handling errors from .Net components in VB6 code
- Disable hints for a module
- licensing Icons for commercial use
- How to send data to a physical address (SOCKET)
- How to get all child components from base class
- Regex help
- Creating Collections in C#
- Non-blocking socket question
- Problem: Class Library
- System Right Click.
- <code> xml documentation not working
- [Richtextbox].Rtf.Replace();
- DragDropEffects
- Attribute programming
- Microsoft WebBrowser control
- Drag and Drop in TreeView
- Index Server
- DirectorySearcher & sorting
- .Net component and locked classid
- SerializationException
- storing an xml element
- howto: Save "Embedded Object" from clipboard to file??
- App with tray icon
- debugging a C++ dll in a c# program
- Make a Beep
- HTML control
- Preview control
- Setup & Deployment projects - .Net Framework detection and config file management - how to?
- Object identity for Hashtables
- Reporting 'Drilldowns'
- international font selection
- Dataset and table names.....
- C# version of the Java JDK GraphLayout applet
- Equals and "==" operator
- Special Comment
- Collection property
- Unloading web application instance w/o trashing aspnet_wp process
- how do I....
- Regex Question
- How to connect strings in HTML?
- VisibleClipBounds property | https://bytes.com/sitemap/f-326-p-174.html | CC-MAIN-2019-43 | en | refinedweb |
Created on 09-22-2017 09:03 PM - edited 08-17-2019 10:54 AM
We will assume you have the following:
Here's the finished product:
This NiFi flow accomplishes the following:
Here's an example of our input data:
MSH|^~\&|XXXXXX||HealthOrg01||||ORU^R01|Q1111111111111111111|P|2.3|<cr>PID|||000000001||SMITH^JOHN||19700101|M||||||||||999999999999|123456789|<cr>PD1||||1234567890^LAST^FIRST^M^^^^^NPI|<cr>OBR|1|341856649^HNAM_ORDERID|000000000000000000|648088^Basic Metabolic Panel|||20150101000100|||||||||1620^Johnson^John^R||||||20150101000100|||M|||||||||||20150101000100|<cr>OBX|1|NM|GLU^Glucose Lvl|159|mg/dL|65-99^65^99|H|||F|||20150101000100|
And the output data after being written to HDFS:
{
"OBX_1": { "UserDefinedAccessChecks": "20150101000100", "ObservationIdentifier": { "Text": "Glucose Lvl", "Identifier": "GLU" }, "ReferencesRange": "H", "Units": { "NameOfCodingSystem": "99", "Identifier": "65-99", "Text": "65" }, "ObservationSubID": "159", "NatureOfAbnormalTest": "F", "SetIDOBX": "1", "ValueType": "NM", "ObservationValue": "mg\/dL" }, "OBR_1": { "OrderingProvider": { "FamilyName": "Johnson", "IDNumber": "1620", "GivenName": "John", "MiddleInitialOrName": "R" }, "UniversalServiceIdentifier": { "Text": "Basic Metabolic Panel", "Identifier": "648088" }, "FillerOrderNumber": { "EntityIdentifier": "000000000000000000" }, "PlacerOrderNumber": { "NamespaceID": "HNAM_ORDERID", "EntityIdentifier": "341856649" }, "ResultStatus": "M", "ObservationDateTime": "20150101000100", "ScheduledDateTime": "20150101000100", "SetIDObservationRequest": "1", "ResultsRptStatusChngDateTime": "20150101000100" }, "MSH": { "MessageControlID": "Q1111111111111111111", "SendingApplication": { "NamespaceID": "XXXXXX" }, "ReceivingApplication": { "NamespaceID": "HealthOrg01" }, "ProcessingID": { "ProcessingID": "P" }, "MessageType": { "MessageType": "ORU", "TriggerEvent": "R01" }, "EncodingCharacters": "^~\&", "VersionID": "2.3", "FieldSeparator": "|" }, "uuid": "de394ca2-cbaf-4703-9e25-4ea280a8c691", "PID": { "SSNNumberPatient": "123456789", "PatientAccountNumber": { "ID": "999999999999" }, "DateOfBirth": "19700101", "Sex": "M", "PatientName": { "GivenName": "JOHN", "FamilyName": "SMITH" }, "PatientIDInternalID": { "ID": "000000001" } }, "path": ".\/", "PD1": { "PatientPrimaryCareProviderNameIDNo": { "IDNumber": "1234567890", "FamilyName": "LAST", "GivenName": "FIRST", "AssigningAuthority": "NPI", "MiddleInitialOrName": "M" } }, "filename": "279877223850444", "kafka": { "partition": "0", "offset": "221", "topic": "test" } }
To read from Kafka, we will need to first create a topic. Execute the following to create a topic:
./kafka-topics.sh --create --zookeeper <ZOOKEEPER_HOSTNAME>:2181 --replication-factor 1 --partitions 1 --topic test
You should have a topic named test available to you.
We can now create the ConsumeKafka processor with this configuration:
You can test that this topic is functioning correctly by using the command-line kafka-console-producer and kafka-console-consumer tools in two different command prompts.
The text data we are using had to be compressed into one line since we are using Kafka to read it (Kafka needs each entry as a single line without any carriage returns), so we now need to de-compress our FlowFile into the appropriate number of lines. We will be replacing all '<cr>' strings with a carriage return ('\r', 0x0D). Use the following configuration on the ReplaceText processor to do so:
We will use the built-in ExtractHL7Attributes processor to transform our formatted text into "Attributes" that are native to NiFi and understood by it.
We can now do a simple conversion of those attributes into JSON. Note that our data is destined for the flowfile-content, so we will now overwrite the formatted data that we used to extract our attributes from. Up until now, we had both the content and attributes which are duplicates of the content.
To format the JSON, we will use a Jolt Shift specification. This specification will fully depend on the data you are using and expecting. The sample spec below is based on my sample input data.
{ "OBX_1.UserDefinedAccessChecks": "OBX_1.UserDefinedAccessChecks", "OBR_1.OrderingProvider.FamilyName": "OBR_1.OrderingProvider.FamilyName", "MSH.MessageControlID": "MSH.MessageControlID", "OBX_1.ObservationIdentifier.Text": "OBX_1.ObservationIdentifier.Text", "MSH.SendingApplication.NamespaceID": "MSH.SendingApplication.NamespaceID", "OBR_1.UniversalServiceIdentifier.Text": "OBR_1.UniversalServiceIdentifier.Text", "MSH.ReceivingApplication.NamespaceID": "MSH.ReceivingApplication.NamespaceID", "MSH.ProcessingID.ProcessingID": "MSH.ProcessingID.ProcessingID", "uuid": "uuid", "PID.SSNNumberPatient": "PID.SSNNumberPatient", "OBR_1.FillerOrderNumber.EntityIdentifier": "OBR_1.FillerOrderNumber.EntityIdentifier", "path": "path", "PID.PatientAccountNumber.ID": "PID.PatientAccountNumber.ID", "PID.DateOfBirth": "PID.DateOfBirth", "PD1.PatientPrimaryCareProviderNameIDNo.IDNumber": "PD1.PatientPrimaryCareProviderNameIDNo.IDNumber", "PID.Sex": "PID.Sex", "MSH.MessageType.MessageType": "MSH.MessageType.MessageType", "OBX_1.ReferencesRange": "OBX_1.ReferencesRange", "OBR_1.OrderingProvider.IDNumber": "OBR_1.OrderingProvider.IDNumber", "PD1.PatientPrimaryCareProviderNameIDNo.FamilyName": "PD1.PatientPrimaryCareProviderNameIDNo.FamilyName", "OBX_1.Units.NameOfCodingSystem": "OBX_1.Units.NameOfCodingSystem", "OBX_1.Units.Identifier": "OBX_1.Units.Identifier", "filename": "filename", "PID.PatientName.GivenName": "PID.PatientName.GivenName", "OBX_1.ObservationSubID": "OBX_1.ObservationSubID", "PD1.PatientPrimaryCareProviderNameIDNo.GivenName": "PD1.PatientPrimaryCareProviderNameIDNo.GivenName", "OBR_1.PlacerOrderNumber.NamespaceID": "OBR_1.PlacerOrderNumber.NamespaceID", "MSH.MessageType.TriggerEvent": "MSH.MessageType.TriggerEvent", "PD1.PatientPrimaryCareProviderNameIDNo.AssigningAuthority": "PD1.PatientPrimaryCareProviderNameIDNo.AssigningAuthority", "OBR_1.ResultStatus": "OBR_1.ResultStatus", "PID.PatientName.FamilyName": "PID.PatientName.FamilyName", "MSH.EncodingCharacters": "MSH.EncodingCharacters", "MSH.VersionID": "MSH.VersionID", "kafka.partition": "kafka.partition", "OBR_1.UniversalServiceIdentifier.Identifier": "OBR_1.UniversalServiceIdentifier.Identifier", "OBR_1.ObservationDateTime": "OBR_1.ObservationDateTime", "OBR_1.ScheduledDateTime": "OBR_1.ScheduledDateTime", "OBX_1.ObservationIdentifier.Identifier": "OBX_1.ObservationIdentifier.Identifier", "OBR_1.OrderingProvider.GivenName": "OBR_1.OrderingProvider.GivenName", "OBR_1.SetIDObservationRequest": "OBR_1.SetIDObservationRequest", "OBR_1.ResultsRptStatusChngDateTime": "OBR_1.ResultsRptStatusChngDateTime", "OBR_1.PlacerOrderNumber.EntityIdentifier": "OBR_1.PlacerOrderNumber.EntityIdentifier", "OBX_1.NatureOfAbnormalTest": "OBX_1.NatureOfAbnormalTest", "OBX_1.SetIDOBX": "OBX_1.SetIDOBX", "MSH.FieldSeparator": "MSH.FieldSeparator", "PD1.PatientPrimaryCareProviderNameIDNo.MiddleInitialOrName": "PD1.PatientPrimaryCareProviderNameIDNo.MiddleInitialOrName", "OBX_1.Units.Text": "OBX_1.Units.Text", "OBX_1.ValueType": "OBX_1.ValueType", "kafka.offset": "kafka.offset", "PID.PatientIDInternalID.ID": "PID.PatientIDInternalID.ID", "kafka.topic": "kafka.topic", "OBX_1.ObservationValue": "OBX_1.ObservationValue", "OBR_1.OrderingProvider.MiddleInitialOrName": "OBR_1.OrderingProvider.MiddleInitialOrName" }
Here's the associated NiFi configuration:
Finally, we will persist this JSON-transformed data to HDFS for further analysis and storage. Configure your PutHDFS processor as follows
Note that the 'Hadoop Configuration Resources' is a required field for your connection from NiFi to HDFS to work properly, so be sure to fill that in with wherever your core-site.xml and hdfs-site.xml HDFS configuration files are on disk.
Be sure to create the directory /tmp/tst/helloworld in hdfs and change ownership to the nifi user:
hdfs dfs -mkdir -p /tmp/tst/helloworld hdfs dfs -chown -R nifi /tmp/tst
Now you should be able to connect all of your processors, start them, and see your data move through.
To send the sample data through our new flow, do the following:
1. SSH into a kafka broker in your cluster and cd to the binaries folder - in my environment, that is located at /usr/hdp/current/kafka-broker/bin.
2. Create a file in your home directory and paste the contents of your sample input data (from the Overview section above). This file should be exactly 1 line, no more.
3. Feed your sample data into the kafka producer with the following command:
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list <KAFKA_BROKER_HOSTNAME>:6667 --topic test < hl7data.txt
If all goes well, when you NiFi flow refreshes you should see it go all the way through. At this point you can check HDFS with the following command:
hdfs dfs -ls /tmp/tst/helloworld
You should see a file there. You can read it with the following:
hdfs dfs -cat /tmp/tst/helloworld/<FILENAME>
Thanks for reading!
Full NiFi template: hl7flow-template.xml
As I am new to Kafka I followed the steps for installing Kafka. Also tested Kafka using the Kafka Producer and Consumer bat file.
As a next step instead of Kafka Consumer console, I used NiFi ConsumeKafka Processor as a Consumer. Whatever data I send from Kafka Producer console I am able to receive it in ConsumeKafka Processor which I tested by sending the data through PutFile Processor. So as per my understanding the Topic name mentioned in the Producer console application and the ConsumeKafka Processor matches the topic name. Hence the Processor is able to read the message from the stream.
In my scenario I want to receive the HL7 message from a HL7simulator or through some asset which send the HL7 message to the configured IP:Port. I tried receiving HL7 message through ListenTCP or GetTCP with both of these Processor. But I am unable to receive the HL7 message in NiFi.
So I tried to follow the steps mentioned here (I thought If I use consumeKafka I may able to receive the HL7 message). But here the problem is I don't have a configuration section where I can mention topic. I can only set the IP, Port and Message in the simulator app.
Any help on receiving HL7 message into NiFi will be really helpful.
Thanks..
Nice article - very well illustrated and explained. Wanted to try this out without Kafka and HDF and locally within NiFi. So modified the flow with GenerateFlowFile and PutFile processors. Worked great! Attached is the template - .hl7-processingexample.xml
@Dhamotharan P Why did the PutTCP and ListenTCP not work. Please see the attached template for working the same example using TCP connections. tcp-hl7-example.xml
@Muki Soomar Thanks for the template it will help me during my implementation. But I am stuck with the first step that is listening for the message over ListenTCP Processor. In the template which you shared It has generate flow file which send HL7 message and it flow through PutTCP. This work for me. But instead of generate flow file I tried using different HL7 simulators. But I am not able to receive in ListenTCP Processor. Now I am in the process of creating a custom processor on top of ListenTCP which will receive the HL7 message and also send back the acknowledgement.
Because HL7 uses MLLP which is a protocol above TCP. I wonder if there is a plan to add MLLP support to NiFi.
Hi @Muki Soomar Thanks for extending your help on this. I have created the custom processor to solve the above issue. But the answer for your question and the issue which I currently stuck with during HL7 message processing are posted in the link.... Please have a look on that and let me know your inputs on that. Thanks again.. | https://community.cloudera.com/t5/Community-Articles/NiFi-in-Healthcare-Ingesting-HL7-Data-in-NiFi/tac-p/246984?attachment-id=2710 | CC-MAIN-2019-43 | en | refinedweb |
So we know that matplotlib is awesome for generating graphs and figures.
But what if we wanted to display a simple RGB image? Can we do that with matplotlib?
Of course!
This blog post will show you how to display a Matplotlib RGB image in only a few lines of code…as well as clear up any caveats that you may run into when using OpenCV and matplotlib together.
Looking for the source code to this post?
Jump right to the downloads section.
Tutorial: How to Display a Matplotlib RGB Image
Alright, let’s not waste any time. Let’s jump into some code:
The first thing we are going to do is import our
matplotlib package. Then we’re going to import the
image sub-package of
matplotlib, aliasing it as
mpimg for convenience. This sub-package handles matplotlib’s image manipulations.
A simple call to the
imread method loads our image as a multi-dimensional NumPy array (one for each Red, Green, and Blue component, respectively) and
imshow displays our image to our screen.
We can see our image below:
That’s a good start, but how about getting rid of the numbered axes?
By calling
plt.axis("off") we can remove our numbered axes.
Executing our code we end up with:
Nothing to it! You can now see that the numbered axes are gone.
The OpenCV Caveat
But of course, we use OpenCV a lot on this blog.
So let’s load up an image using OpenCV and display it with matplotlib:
Again, the code is simple.
But the results aren’t as expected:
Uh-oh. That’s not good.
The colors of our image are clearly wrong!
Why is this?
The answer lies as a caveat with OpenCV.
OpenCV represents RGB images as multi-dimensional NumPy arrays…but in reverse order!
This means that images are actually represented in BGR order rather than RGB!
There’s an easy fix though.
All we need to do is convert the image from BGR to RGB:
Running our script we can see that the colors of our image are now correct:
Figure 4: When using OpenCV and displaying an image using matplotlib, be sure to call cv2.cvtColor first.
As I said, there’s nothing to displaying matplotlib RGB images!
Summary
In this blog post I showed you how to display matplotlib RGB images.
We made use of
matplotlib,
pyplot and
mpimg!
Hello, very nice tutorial. Actually when I tried to right-click the first image and save it, it can only be saved as *.jpg format, but isn’t it a *png format?
Hey, download the source code to the blog post to grab the original .png image used in the example. I used .jpg in my screenshots just so save bandwidth (since .jpg files are usually smaller than .png files).
Brilliant! I’d been using the GTK framework instead of QT so I couldn’t see the x,y position of the mouse pointer using the conventional cv2.imshow method. Using matplotlib is so much more useful ! 🙂
Indeed, matplotlib is quite useful! I’m glad to see that the post helped Akshay!
The difference in color representation between open cv and mathplot was exactly what I was looking for. Thanks a ton 🙂
Happy to help Ajay!
this is what I was looking for t. thank you!!!
Thanks Adrian,
I thought it’s because of ‘cmap’ option in imshow.
It seems, that my issue comes from the combination of matplotlib-1.5.1 and qt-5.7.0. See also .
Hello Adrian,
I am having 3 numpy (R,G,B)arrays which I am stacking together to plot an RGB image ,so I used “k=np.dstack([R,G,B])”. Now, when I am plotting the image using “plt.imshow(k)”. The RGB image is not displayed correctly.
I have followed many posts on stackoverflow related to color normalization but still I am not able to display the image correctly. Kindly help.
Hi Shubham — what do you mean by “the RGB image is not displayed correctly”? What is “incorrect” about it?
If I am visualising each array(channel either R,G or B) through plt.imshow(), I can see the image clearly but once I use np.dstack and then visualise the RGB image, it is just mixed up and looks like a hotch potch of pixels.Do I need to normalize in plt.imshow(). I have done that but still not able to get the image perfectly.
Kindly specify if there is another way to display the image.
It sounds like you’re not properly stacking your channels. Try this:
Hello Adrian, Is there a way to save the turned pale (like “Figure 3”) imagefile?
I need to get a “wrong” imagefile.
(Does “cv2.imwrite” automatically determines BGR or RGB? )
The
cv2.imwritedoes not automatically determine BGR versus RGB. If you have an RGB image and want to write it back to disk, convert to BGR and then call
cv2.imwrite.
Hello,
A simplier way is just to do:
plt.imshow(img[:,:,::-1])
🙂
Yes, that also works as well.
Even simpler:
plt.imshow(img[…,::-1])
🙂
hi,
I’m facing the same problem with my code.
my code does a non linear transformation on the image and then i’m plotting the image using matplotlip only and the colors of the image are not the same as the original.
not using cv2 at all.
any suggestions?
RGB images are typically have values in the range [0, 255]. If you’re applying a non-linear transform to the image I assume this shifts your pixels outside this range. I would suggest scaling the pixels back in to the appropriate range before displaying them.
i got an error like this
error: (-215) depth == 0 || depth == 2 || depth == 5 in function cvtColor
how to solve that? thanks
Double-check your input path to “cv2.imread”. The path is invalid, causing “cv2.imread” to return “None”. You can read more about NoneType errors (and how to fix them) in this guide.
Hi Adrian,
Just to share that, if one is using virtualenv settings, the mathplot library raises the issue on MacOX:
RuntimeError: Python is not installed as a framework.
Adding the following 2 lines did the trick for me to have it run without problem, no matter the version of mathplot:
import matplotlib as mpl
mpl.use(‘TkAgg’)
Cheers 😉
Awesome, thanks for sharing! 🙂
thank you, article did help
You are welcome!
Exactly what I was looking for ! 😍
I’m glad it helped you! | https://www.pyimagesearch.com/2014/11/03/display-matplotlib-rgb-image/ | CC-MAIN-2019-43 | en | refinedweb |
Technical Support
On-Line Manuals
RL-ARM User's Guide (MDK v4)
The U8 type is defined in rtl.h. It specifies the unsigned 8-bit type
used by the real-time kernel routines. The U8 type is defined
as:
typedef unsigned char U8;
and is used as shown in the following example:
#include <rtl.h>
U8 u. | http://www.keil.com/support/man/docs/rlarm/rlarm_lib_u8.htm | CC-MAIN-2019-43 | en | refinedweb |
- .Net...to install or not?!?
- Creating PDF from CR
- Problem with new web application project in C#
- Active Directory Authetication
- .NET Framework 2.0 on VS.NET 2003
- global object variable
- textbox formating with date,string,numeric and other data types
- DataModules in Delphi, what to use in VS for design time.
- Custom control initialization
- Error while saving attachment
- ImageManipulation bmp to tiff (bitmap to tiff)
- jScript call from the code-behind (aspx.cs)
- Populate 2nd Dropdown without page repost ?
- Collecting database data ?
- illink.exe is missing msvcr70d.dll
- PagedDataSource with Typed Dataset
- Transition from VB6.0 to VB .NET
- .Net DDK
- Simple Datagrid question...
- How To Step through aspx code
- SQL Problem
- Ebay - Get .NET consultancy and help with Tsuanami relief
- How to run a script during a VS.NET build?
- Managed and Unmanaged Code in one DLL?
- ASP.NET problem - Creating dataset
- Performance measuring tool
- File.Copy
- Shadow of a closed form
- Ghosts being accessed
- .Net Application and SQL DTS Packages
- Ocx using dao gives error upon exit when used in .NET app
- Split String by number of characters
- Need Help
- Application Block strange behavior
- WEbservice question
- Using VS.NET for legacy development?
- Items in ComboBox are not Visible
- Validating Tabstrip with Multipage Control
- Array of Class
- How do I change the binary edit window font?
- Timeout exceptions during execution of stored procedures
- MessageQueue in c#
- URL
- How event Handle
- TypeConverter How does it works
- TypeConverter.CreateInstance How does it works
- Serialization questions...
- .net office
- how to prepare patch
- Items in Combi Box are not visible
- how do you specify the .NET runtime version for controls embedded in web browsers?
- How Many Planning To Rewrite Their Webs To Support Firefox?
- ShowInTaskBar change raises user control load event
- distorted fonts and incorrectly sized windows
- threads and delegates
- Anyone seen a packet capture library?
- VS.NET 2003 - F1 help, doesn't display 1st time, but does 2nd time?
- test
- How to update displayed content of TEXTAREA with multiline text
- Hiding a TabPage???
- how to Monitor existance of files ????
- File locked problem in c#
- Visual Studio 2005 Beta - VB dataset question
- ArrayList.Contains?
- Get Calling appliucation
- Systems Integration question
- ? How to strong name a 3rd Party Assembly ?
- Mail Merge
- Calling DLL written in C++
- Microsoft Outlook 11.0 Object Library
- insert a unc path to db
- Function-call in Response.Write
- Better name than dot net
- SQl Question with VB .NET
- how to passing value back to .asp file
- threads and blocked form
- A pal needs help on this site
- Adding custom attributes to elements
- Adding an onclick event for a button
- Read source code of a web page
- Visual Basic .NET 2003 does not start installation on Windows XP h
- Pocket PC Shell API functions
- Asynchronous event handling datagrid question
- What's a Good Book for Beginners?
- Dynamic MenuItems *weird* problem
- Calling DLLs
- Controlling DataGrid Input
- Printing Source Code
- .NET/IIS search for documents
- Starting process without Process.Start
- Invalid Report Data Source!
- Best Practice for Many Side's Data Etnry !
- Invalid file dsn '' o_O
- File open fails if the file is being used by another process
- HttpWebRequest and WebRequest
- MIcrosoft Outlook 11.0 Object Library
- Load Balancing Question
- what does serialize mean
- Export Simple Text
- TypeConverter.CreateInstance How does it works
- WMEncoder Multiple Instances
- Cannot find assembly ABCSalary, Version=1.0.0.1, ...
- File opens in the Solution Explorer itself
- getting Internal Error 2349 when trying to install
- Unable to run ASP.NET app on 2003
- ftp question using article id 832679
- NUnit redirecting output
- Accessing session information from another project in a solution
- Reflection / Dependencies / Remoting
- Find logged in user
- NUnit redirecting output
- Streamed XML not opening in Word...
- Construct OdbcConnection get unhandle error in Windows 2000
- Ad Rotator and XML
- Assembly.GetEntryAssembly
- Cookie Handling: Sending saved cookies in httprequest
- VB 6 and VB.net simultaneously
- How I can save a RichTextBox.Rtf property as a simple html-page ?
- Sdifer - folder synchronazation and snapshoting library
- Reading Image Properties without loading as Image
- Using C#.NET to fax and send mail via Exchange
- Let a user write an expression as a string and return a value
- How to check email and save attachements using vb.net app
- How to test if excel is installed?
- Management of letters from VB
- Assign default property value
- connecting to SQL
- DirectorySearcher question...
- .net passport
- Need advice on separating content from look and feel
- .net installers
- Questions about deployment project. Thanks!
- how Can i convert this from C# to VB
- User Sessions
- basic question
- child/parent forms
- singleton object per request
- serial number
- convert a weakly named assembly to a strongly named
- IE6 and the .NET CLR
- Maybe what I want to do isn't possible...
- Whidbey Stablility
- Question: hosting smart client inside IE
- DALC vs DAO
- module not found when Accessing Custom DLL's from WebService
- regular expression help
- .NET enables IIS to run JSP?
- MD5 encode diferent from Php
- Outlook runtime error
- Determining which encoding the browser used for a url
- CryptographicException, The parameter is incorrect.
- Exception Application Block - Custom publisher
- Graphics in email
- Spell Checking
- Omit dataset name from xml string
- Check emails and download attachments from vb.net application
- File or assembly, or dependencies not found error
- a common problem in VC
- Utility app to run nightly - which project type?
- ASP.NET Firefox Rendering Anomalies Documented?
- cache image in window form?
- Save web form.
- Visual Basic and visual studio
- Migration of VB 6 MS datareports to 2003 Net Crystal reports
- MSI / How to "Lock Down" Install Directory ?
- how to config window service' s description
- how to set description of window service
- Migration - XLA to .NET
- Update a database
- "A severe error occurred on the current command ..." - is this a bug?
- Passing Parameters From C++ to VB.Net
- "Problems during load"
- printing only part of a webform
- Reference Variable Question
- How to set focus on a label after a button clicked (web applicatio
- obfuscated dll - Wise Installer
- I'll try a different NG
- Method Not Found
- Apend Row in VB.NET Datagrid
- How to pass & return string parameters to Delphi-dll?
- .NET from a VB6 developers perspective...
- Opening a web project in Visual Studio .Net
- TypeConverter in a form property
- High Level View of a .Net Application
- Any *TOP* software developed on .NET (like Office, DB engine etc.)
- Class = Class (operator overriding)
- Creating batch process
- W2K3 IIS 6.0 ASP.NET Requests Per Second Limits?
- .Net Setup package bug
- Problem Whenever Rebuilding The Solution
- Application error when .NET application closed
- run application twice using c# code
- Legal requirements to distribute dotnetfx.exe?
- do i need it?
- CSC.EXE Problem when /lib reference path have space
- Regular expression that doesn't recognize newline
- Transfer between Win/Web forms, generally
- Setting access permissions of web sites in IIS 6.0 from .Net appli
- There is no source code available for the current location
- New Form of Astro Turfing
- Asynchronous Threading Issue
- Assembly attribute issue
- Open an Existing Project (Remove a project from the project list)
- Running C# Application on Citrix
- Accessing a custom dll in asp.net
- Adjust .NET Security Levels to Mass of machines
- Should Be Simple - Break VB.Net when a msgbox is on the screen
- how do you specify the .NET version for controls embedded in web browsers?
- Error 401: Unauthorized
- what's the relationship between emailing using SMTP in IIS and using your own email server?
- how to SAve the EventLog
- Autos panel in VS - is this available as seperate control?
- COM obj Registered?
- Image refresh
- set focus
- Debug uiEditors or typeconverter
- Select row of Datagrid, and disable single cell selection.
- Shadow of a closed form
- Automatically detect proxy settings
- How does one report a potential .NET Framework v1.1 SP1 Bug to Microsoft?
- Terminal Server CAL's in Windows 2003 Enterprise Server
- How to set default printer?
- my macro won't run!
- how to make a decimal a power of ten?
- How to prevent auto-creation of control resx file?
- Change windows user account password
- (.NET) Registering assembly while deploying
- Casting error in late-binding
- TCP .net server and VB6 client problem
- Vb.net Telnet VT100
- List groups that a user belong using AD
- inheriting/reusing common code
- Two event fired in Filesystemwatch
- How to find out if OS is Windows XP?
- Make a CheckBox readOnly or change enabled color
- Multiple Downloads in ASP.Net
- ASP.NET - Making connection from the visual studio designerer
- Visual Studio .NET update problem
- Proper use of inner exceptions
- Automating Outlook and FTP using vb.net
- enumerate files
- Cannot terminate the application
- using a datagrid in asp.net
- Inheriting from 2 Interfaces
- new to .net need help
- debugging issues
- updating blobs
- MailMerging w/ Word 2000
- Resize image from SQL server and display in asp:image
- How do you detect when a file is finished being created
- GUI Design
- Runtime debugging error: 2nd post
- Application Server Design for .Net
- Special characters/symbols in asp.net
- Error creating window handle.
- Dategrid PushButton Validation
- Running a Function in design time
- HttpwebRequest problem.
- Urgent assistance needed please with ASP.NET
- RemovePreviousVersion does not appear to work!
- w32_WNetGetUser not work Win98
- Drag and Drop: How to determine what type of object is dropped
- move one of the child of mdi to the front end
- DTS in dotnet windows applicaiton
- how to use trace?
- How to programmatically create a partition?
- Refresh in Web
- Literal Text control and IE - disappearing text bug
- NET Framework 2.0 and BSD
- Sending e-mail programaticaly
- new MCP / MCAD for whidbey??
- Todd's Blog: Grading Bill Gates
- Magic Instantiation! Most Annoying! :-(
- Entry in DataGrid
- Look for a really good Charting package
- The lost ENTER key...
- Where to put my code?
- output html form aspx
- Scalability of image folder
- How to diable the file upload input after
- How can I execute a command in the command prompt using Visual Bas
- Object call in Frames (Beginnerquestion)
- Runtime Debugging Services error
- Viewing an ASPX page
- DNS.GetHostByAddress Question
- Need YOur advise
- How to Load an Xml C# String into a DataSet
- Enter to Tab through fields
- export to msword - html
- VS 20005 beta question ???
- CDO.Person problem
- DataGridRadioButton
- .net doing BHO's Choice of platform question - Maybe experts
- regenerate resource file
- How to Dynamically Call Various Routines in VB.NET?
- C# SMTP Component
- Asynchronous Sockets and High Async IO Thread Count
- DataGrid, Footer, and Total
- Heap corruption in .Net 2003 expression evaluator Add-In
- Good machine configuration for .NET?
- winform - open up a NEW default browser window
- Help with error retrieving XML
- Textbox...Multi-Line VS a Memo Field
- Invisible data
- reference to handle / handle to reference
- .NET Project File - SubType setting
- Operator Overload question for VB 2005
- How Can I get the Processor Id from my computer
- How to keep a form being visible all the time?
- Data Grid Issue
- Passing a set to SQL Server
- Test if field is const w/ FxCop Introspection
- Large solution
- Application Screen Refreshing Issues
- Cost to upgrade to .net from VB6
- Crystal report Problem
- what does Inherits means
- kill a process with insufficient user permissions
- callback or event?
- Command-line tool to edit .resources?
- Javascript before all postback
- Garbage Collection
- Preventing print screen
- Process.StartInfo.EnvironmentVariables problem when there are twoenv variables that differ only in letter case
- Hashtables
- control property categoryattribute
- File language property
- create excel object from binary data
- Changing printer page layout (to landscape) from script
- Interop versions
- Session Variables --> SessionServer --> Availablity
- Combobox in a Datalist
- Development application of the year
- Label & Code Behind
- Configuration for DLL
- How can I tell if a worksheet exists in a workbook using VB.NET ?
- Same namespace structure across multiple languages?
- MailMessage with a large body
- VB.NET Question
- rendering an instance of an *.aspx object
- WindowsIdentity._GetCurrentToken() Access is denied
- Problem with Extended attributes on NTFS
- FxCop
- Keyboard Scan Code Capture?
- Does the 'PublicKeyToken' value in the web.config <section> depend on the version of the .NET framework?
- Overriding Render Method and saving the resulting html
- window.onbeforeunload and postback
- How to access a custom dll file
- How to access a custom dll file in .Net
- Error reading in XML data
- CrystalDecisions.shared error
- DataGrid
- Very simple question
- regular expersion. replace all except those that matches the patte
- How can i achive this...............?
- how to open new window with params
- Need Data collection method advise ??
- Random Compiler Errors
- Architecture for a distributed app
- DateTimePicker Problems
- Scheduled Job
- Appending HTML documents to a Crystal Report
- listview in VB 6.0
- generate random w/ weighting
- callback function problem
- Planning a project
- Anyone using .Net language to write shareware
- a pre-beginner's question: what is the pros and cons of .net, compared to ++
- VB.Net to COM Marshalling
- IDictionary
- SessionServer -Use It?
- World Community Grid
- sqldataadapter
- VS IDE hangs at 100% CPU utilization when project opens
- CheckedListBox sorting differently per user?
- modify the path environment variable
- Object Reference not set to an instance of an object
- excell and dotnet
- HelpMeDataView
- SEVER EXPLORER in VB
- HelpMe
- DataViewHelp
- display a webpage in a windows form
- Managed Code alternative for MSXML2.XMLHTTP
- VB.NET Class Properties
- Passing a Recordset from VB6 to VB.Net (QueryInterface failed.)
- Real time display of database data
- Windows XP IIS ASP Help
- IE cached credentials?
- Using a VB Application with Different Office Versions
- need help with
- Session timeout problem...
- VB.Net and Email
- FTP upload fails
- ComboBox.DataSource = {DataView}; But after is still = Nothing
- include pages inside aspx pages are not displayed on iis 5.1(win xp pro)
- Dynamic data Exchange
- unspecified error dataset disappear from designer
- Program handles Gui event while staying in WebServiceProxy.Invoke() when debugging
- ODBC
- Passing Data Between Applications
- MS Bluetooth stack
- How to handle the email sending when no internet connection
- Help Me - LookupAccountSid
- Shortcut
- Load Testing web applications for user not having VS .NET
- Randomize array items
- The remote server returned an error: (401) Unauthorized.
- install visual basic .net std. 2003 on WinXP SP2 PRO
- GUID
- DB2 IBM OleDB Provider
- VS IDE Blues
- Team System Dev/Test Chats, Wed & Thurs
- documenting functions
- Build Process & Strong Name Question
- How do I add carriage returns to an email in C#?
- tool that measures website security?
- Drag & Drop: A note
- Verify Email Address
- another Object reference not set to an instance of an object. prob
- Dynamically adding toolbar buttons has bad side effect
- Making dervied class from base class
- Getting started with .NET
- .NET 2.0 support of bitmaps in menu items
- Visual Studio .Net 2002 database connection error from server expl
- Visual Studio .Net 2002 database connection error from server expl
- How do I display text from simple .txt file?
- XP Service Pack 2 and Outlook.
- Data Connection to SQL Server
- Accessing Paradox Tables From C# Using ODBC
- Very Large Blob
- reading strings from OutputDebugString from Kernel messages
- distorted fonts and incorrectly sized windows
- Date Routines in C-Sharp
- Making a generic Parameter class
- sample test question for MCAD.
- I need to clear temporary the event sinks from another event and then restore them, but I don't know which methods signed up for that event
- Time Part of DateTime Data Type
- Datagrid from vb6 is not upgraded
- Dot NET application not able to show message box on .NET framework
- Dev Direct January News Round Up
- OLE Document Reader crashes office applications
- Forms keypress event not fired ???
- access table cell by name not by index?
- How do i change the back color of messagebox
- Bala
- How To Implement Com+ or DCOM in .Net
- SMBus Access
- Search record in datagrid ?
- HTML Parser
- Web Method Access Denied.
- Help with executing access query
- .Net Setup package bug
- Transparency in a Group Box???
- Permanent session
- DataSet Cloning Problem
- System.Data.OracleClient
- C# to C++ problems: array of Buttons
- Using MySql Database with .NET ADO objects
- Building a UI into discrete components
- Three tables to one
- mouse capture
- Passing an ADODB.Recordset from VB6 to VB.Net
- TreeView Drag & Drop operation very uncooperative...
- MessageBox does not display 'tex't unless I add extra code.
- Sending faxes using CDO
- .NET SecurityPolicy
- keyboard shortcut for the events window
- Architectural feedback
- Syntax/Mean of the following
- .NET commicate to FireWire (IEEE1394)?
- Reporting services external image
- System.Environment.UserDomainName Win98
- Presentation Layer and application separation
- Can't anybody help with this?
- Usercontrol Help
- Commenting technique in C# equivalent in VB.NET
- Best way to secure cookie session
- OnTextChanged
- Crystal Report cannot show the Pie Chart ?
- Cascading MDI Chid forms
- horizontal GUI in C#. Is it possible?
- drop Microsoft Access XP table with Visual Basic
- A hard one.... - preventing Reboot
- ASP.NET 2.0
- HTTPS - Window. code not working anymore
- how to assign more than one primary key
- Thinstall
- Interpretation of registry log of tweakui produced registry alteration
- An IDE in every application
- Java stopped working in the web browser?
- Net Framework Installers
- Comparison of two integers
- TreeView Mouse event behaviour puzzles
- vb.net dns and nslookup
- Webserver Control vs HTML server Control
- Crsytal Report and ASP.net
- Help creating a Dialog Application
- Spread trading application
- HELP - enumerating local NICS
- ASP.NET team programming via the Internet
- threed thread
- treed thread
- Printing Labels
- Printing
- Reboot and Shutdown Windows XP using VB.NEt
- Reboot and Shutdown Windows XP using Vb.Net
- .NET Application Deployment
- Problem in Window Service running EXE
- Win32 app. or .net App.?
- Visual C++ 7 or .NET
- This is exactly what I mean.....
- How to enable to output private member to XML document?
- PowerPoint automation C#
- Adding ActiveXControl to a user control
- Databound Dropdown List to a binary field
- Accessing File Summary Tab Data
- How to enumerate all Enums in a Class
- How to convert colors to HTML
- How to best determine any/all Logged On Users from a VB .NET Servi
- datagrid, dataadapters, and setdatabinding
- Mapping oledb data to odbc target
- Pocket PC 2003 newsgroup
- Help on Setup and Deployment Projects
- MS Office and .NET framework
- Performance: VC++ 33% slower then Builder 5 on LineTo() API call??
- WebRequest, 500 Internal Server Error, acessing response?
- Dataset.GetXML issue
- Access the Active Directory using .NET
- Access Active Directory using .NET/ADO.NET
- whats the average age of programmers on here
- Any math wizards out there (VB.NET)???
- Develop using FAT32?
- Access denied while trying to change password in Active Directory
- Access denied while trying to change password in Active Directory
- Detect Remote Socket Failure
- Visual studio .Net License
- Working Directory for Web Apps
- Deploy .Net library ????
- Directory picker control in .net
- System DLLs runtime checking
- Installer problem
- Why the impersonation work in one case and not the other?
- Property Grid & Database Schema
- LoadFromSQLServer won't work......
- SQL Server Query Analyzer faster than ADO.Net SQL Data Provider
- "Process cannot access file" problem | https://bytes.com/sitemap/f-312-p-185.html | CC-MAIN-2019-43 | en | refinedweb |
HTTPS support for jupyterhub deployment on k8s
Problem to solveProblem to solve
Jupyterhub deployment does not provide https.
Further detailsFurther details
(Include use cases, benefits, and/or goals)
ProposalProposal
JupyterHub deployment from GitLab managed apps on k8s to support https out of the box.
This can be acomplished using cert manager:
There are a few parts to this:
1. Install
cert-manager on the cluster
Original docs:
I installed it using Helm. Since we use mutual auth for now I had to hack together a rails script to generate a helm client key. Obviously when we implement this feature to deploy
cert-manager via GitLab you won't need to do this. But for the record here is the script to be run from the rails console:
helm = Clusters::Applications::Helm.last; nil File.open('/tmp/ca_cert.pem', 'w') { |f| f.write(helm.ca_cert) }; nil client_cert = helm.issue_client_cert; nil File.open('/tmp/key.pem', 'w') { |f| f.write(client_cert.key_string) }; nil File.open('/tmp/cert.pem', 'w') { |f| f.write(client_cert.cert_string) }; nil
Now once we have our helm certs in place we can install
cert-manager like so:
helm init --client-only --tiller-namespace gitlab-managed-apps helm install stable/cert-manager --name cert-manager --namespace gitlab-managed-apps --tiller-namespace gitlab-managed-apps --tls --tls-ca-cert /tmp/ca_cert.pem --tls-cert /tmp/cert.pem --tls-key /tmp/key.pem
2. Configure the
ClusterIssuer
Original docs:
kubectl create -f - <<EOF apiVersion: certmanager.k8s.io/v1alpha1 kind: ClusterIssuer metadata: name: letsencrypt-prod spec: acme: server: email: [email protected] privateKeySecretRef: name: letsencrypt-prod http01: {} EOF
3. Configure the
ingress-shim
Original docs:
helm upgrade cert-manager stable/cert-manager \ --namespace gitlab-managed-apps --tiller-namespace gitlab-managed-apps \ --tls --tls-ca-cert /tmp/ca_cert.pem --tls-cert /tmp/cert.pem --tls-key /tmp/key.pem \ --set ingressShim.defaultIssuerName=letsencrypt-prod --set ingressShim.defaultIssuerKind=ClusterIssuer
NOTE After following all these steps the
cert-manager should create a
Certificate for you and this should automatically be picked up by the running ingress and your Auto DevOps deployed app should be reachable by
http:// URL.
NOTE I did run into an issue where the HTTPS URL was still constantly returning
default backend - 404. In the end I figured out by looking through
kubectl logs deployments/cert-manager cert-manager --namespace gitlab-managed-apps -f that it was caused by the domain name being longer than 64 bytes. So the only way I was able to get a shorter domain name was by using a shorter group and project name as the domain name is generated by the Auto DevOps pipeline (see)
What does success look like, and how can we measure that?What does success look like, and how can we measure that?
When jupyterhub is deployed, https is default.
Links / referencesLinks / references
MR | https://gitlab.com/gitlab-org/gitlab-foss/issues/52753 | CC-MAIN-2019-43 | en | refinedweb |
Inheritance in java (IS-A relationship) is referred to the ability where child objects inherit or acquire all the properties and behaviors from parent object. In object oriented programming, inheritance is used to promote the code re-usability.
In this Java tutorial, we will learn about inheritance types supported in Java and how inheritance is achieved in Java applications.
Table of Contents 1. What is inheritance 2. Types of Inheritance in Java - 2.1. Single Inheritance - 2.2. Multilevel Inheritance - 2.3. Hierarchical Inheritance - 2.4. Multiple inheritance 3. Accessing Inherited Super Class Members - 3.1. Constructors - 3.2. Fields - 3.3. Methods 4. Summary
1. What is inheritance in Java
As said before, inheritance is all about inheriting the common state and behavior of parent class (super class) by it’s derived class (sub class or child class). A sub class can inherit all non-private members from super class, by default.
In java, extends keyword is used for inheritance between classes. let’s see a quick inheritance example.
1.1. Java inheritance example
Let’s say we have
Employee class. Employee class has all common attributes and methods which all employees must have within organization. There can be other specialized employees as well e.g.
Manager. Managers are regular employees of organization but, additionally, they have few more attributes over other employees e.g. they have reportees or subordinates.
Let’s design above classes.
public class Employee {; } @Override public String toString() { return "Employee [id=" + id + ", firstName=" + firstName + ", lastName=" + lastName + "]"; } }
import java.util.List; public class Manager extends Employee { private List<Employee> subordinates; public List<Employee> getSubordinates() { return subordinates; } public void setSubordinates(List<Employee> subordinates) { this.subordinates = subordinates; } @Override public String toString() { return "Manager [subordinates=" + subordinates + ", details=" + super.toString() + "]"; } }
In above implementation, employees have common attributes like
id,
firstName and
lastName; while manager has it’s specialized
subordinates attribute only. To inherit all non-private members from
Employee class (in this case getter and setter methods),
Manager extends Employee is used.
Let’s see how it works?
public class Main { public static void main(String[] args) { Manager mgr = new Manager(); mgr.setId(1L); mgr.setFirstName("Lokesh"); mgr.setLastName("Gupta"); System.out.println(mgr); } }
Program Output.
Manager [subordinates=null, details=Employee [id=1, firstName=Lokesh, lastName=Gupta]]
Clearly,
Manager class is able to use members of
Employee class. This very behavior is called inheritance. Simple, isn’t it?
Now consider if we do not use inheritance. Then we would have defined id, firstName and lastName in both classes. It would have caused code duplication which always create problems in code maintenance.
2. Types of inheritance in Java
In Java, inheritance can be one of four types – depending on classes hierarchy. Let’s learn about all four types of inheritances.
2.1. Single inheritance
This one is simple. There is one Parent class and one Child class. One child class extends one parent class. It’s single inheritance. The above example code (employee and manager) is example of single inheritance.
2.2. Multi-level inheritance
In multilevel inheritance, there will be inheritance between more than three classes in such a way that a child class will act as parent class for another child class. Let’s understand with a diagram.
In above example, Class
B extends class
A, so class
B is child class of class
A. But
C extends
B, so
B is parent class of
C. So
B is parent class as well as child class also.
2.3. Hierarchical inheritance
In hierarchical inheritance, there is one super class and more than one sub classes extend the super class.
These subclasses
B,
C,
D will share the common members inherited from
A, but they will not be aware of members from each other.
2.4. Multiple inheritance
In multiple inheritance, a class can inherit the behavior from more than one parent classes as well. Let’s understand with diagram.
In diagram,
D is extending class
A and
B, both. In this way,
D can inherit the non-private members of both the classes.
BUT, in java, you cannot use
extends keyword with two classes. So, how multiple inheritance will work?
Till JDK 1.7, multiple inheritance was not possible in java. But from JDK 1.8 onwards, multiple inheritance is possible via use of interfaces with default methods.
3. Accessing inherited parent class members
Now we know that using four types of inheritance mechanisms, we can access non-private members of parent classes. Let’s see how individual member can be accessed.
3.1. Parent class constructors
Constructors of super class can be called via
super keyword. There are only two rules:
super()call must be made from child class constructor.
super()call must be first statement inside constructor.
public class Manager extends Employee { public Manager() { //This must be first statement inside constructor super(); //Other code after super class } }
3.2. Parent class fields
In java, non-private member fields can be inherited in child class. You can access them using dot operator e.g.
manager.id. Here
id attribute is inherited from parent class
Employee.
You need to be careful when dealing with fields with same name in parent and child class. Remember that java fields cannot be overridden. Having same name field will hide the field from parent class – while accessing via child class.
In this case, attribute accessed will be decided based on the class of reference type.
ReferenceClass variable = new ActualClass();
In above case, member field will be accessed from
ReferenceClass. e.g.
//Parent class public class Employee { public Long id = 10L; } //Child class public class Manager extends Employee { public Long id = 20L; //same name field } public class Main { public static void main(String[] args) { Employee manager = new Manager(); System.out.println(manager.id); //Reference of type Employee Manager mgr = new Manager(); System.out.println(mgr.id); //Reference of type Manager } } Output: 10 20
3.3. Parent class methods
Opposite to field access, method access uses the type of actual object created in runtime.
java]ReferenceClass variable = new ActualClass();[/java]
In above case, member method will be accessed from
ActualClass. e.g.
public class Employee { private Long id = 10L; public Long getId() { return id; } } public class Manager extends Employee { private Long id = 20L; public Long getId() { return id; } } public class Main { public static void main(String[] args) { Employee employee = new Employee(); //Actual object is Employee Type System.out.println(employee.getId()); Employee manager = new Manager(); //Actual object is Manager Type System.out.println(manager.getId()); Manager mgr = new Manager(); //Actual object is Manager Type System.out.println(mgr.getId()); } } Output: 10 20 20
4. Summary
Let’s summarize what we learned about java inheritance:
- Inheritance is also known IS-A relationship.
- It provides child class the ability to inherit non-private members of parent class.
- In java, inheritance is achieved via
extendskeyword.
- From Java 8 onward, you can use interfaces with default methods to achieve multiple inheritance.
- Member fields are accessed from reference type class.
- Member methods are accessed from actual instance types.
Drop me any question, you might have, in comments section.
Happy Learning !!
Feedback, Discussion and Comments
Prashant Raghav
Hi lokesh,
In Java can I do
Child_class_object.baseclassmethod(),
Directly | https://howtodoinjava.com/oops/java-inheritance/ | CC-MAIN-2019-43 | en | refinedweb |
UnicodeDecodeError in matplotlib if 'python' set instead of 'sage' in Notebook [closed]
Hello!
I am trying to make publication ready images for LaTeX in Sage with matplotlib (due to the complex nature of the plot) in Notebook.
I have a lot of numerical calculations (not symbolic) so I switch to 'python-mode' using drop-down list on top of the Worksheet (so it looks like 'File...', 'Action...', 'Data...', 'python') so that my constants would read as native python's data types not Sage's symbolic objects.
Now having the switch on top of the page in 'python' mode the following code:
from matplotlib import rc rc('text', usetex=True) rc('text.latex', unicode=True) rc('text.latex', preamble='\usepackage[utf8]{inputenc}') rc('text.latex', preamble='\usepackage[russian]{babel}') font = {'family': 'serif', 'serif': ['Computer Modern Unicode']} rc('font', **font) import matplotlib.pyplot as plt plt.plot([1], [1]) plt.title(ur'Тест') plt.savefig("test.png")
yields ether UnicodeDecodeError or the image with the text corrupted:
But if I switch to 'sage' on top of the page, it works as expected:
I have no idea how that switch on top of the page affects matplotlib's output, but would really like to be able to use Cyrillic (utf8) letters and 'python' mode at the same time.
P.S. That magical switch on top of the page is really painful since I also can not save Worksheets if they contain utf-8 characters: as in this still unresolved issue :-( :-(
UPDATE: Tested the MWE above in Jupiter - seems to work properly! And due to inevitable migration to Jupiter, the problem seems not to be obsolete. | https://ask.sagemath.org/question/37364/unicodedecodeerror-in-matplotlib-if-python-set-instead-of-sage-in-notebook/ | CC-MAIN-2018-17 | en | refinedweb |
es.
Get more info about an IP address or domain name, such as organization, abuse contacts and geolocation.
One of a set of tools we are providing to everyone as a way of saying thank you for being a part of the community.
> the applet from reloading if the user hits F5. I have the Japplet
> spawn some Jframes, but if the user hits F5 then my JFrames will
> disappear. :( I want to prevent this.
You cannot prevent it. F5 loads the whole page. I was thinking that you could send a specific request to the second frame but this is different to hitting F5.
> but if the user hits F5 then my JFrames will disappear.
Why doesn't it reappear again?
JVM generally use applet caches to avoid reload of applet code.
When you hit F5, in fact the applet is stopped and restarted again (stop() and start() methods).
I don't know if it's possible, but you can try to manage the page reloading in the applet like:
public class myJApplet extends JApplet
{
private boolean started=false,initialized=
public void init()
{
if (!initialized)
{
initialized=true;
// ...
}
}
public void destroy()
{
// destroy nothing
}
public void start()
{
if (!started)
{
started=true;
// ...
}
}
public void stop()
{
// don't stop the applet
}
}
It should work if the same applet instance is used when you reload the page. The applet may be hidden during reloading. But reloading should be quicker if you avoid destroying the applet resources and reloading them again.
since you mantioned JApplet, I assume you are using Sun's java plugin;
at least for later versions (1.4+) , when you refresh - your applet gets a new class loader, so even static fields in your classes are lost -
so storing state in static variables will not work.
you need to somehow store your state persistently when your applet goes does, and reload it when it starts.
if your applet is sigined, you can store state on the disk, if its not - you can store state on the server using some serverside component, its a bit complicated. | https://www.experts-exchange.com/questions/21111023/Japplet-reloading-from-refresh.html | CC-MAIN-2018-17 | en | refinedweb |
Application Security Testing: An Integral Part of DevOps
Watch→
Answer: You can determine if a file exists by calling the exists() method in the java.io.File class. A File instance represents a file on your local file system and allows you to perform operations on a file such as rename or delete. The exists() method will return true if a file exists, and false if it does not. However, if a file is contained in a directory in which you do not have read permission, exists() will return false even if the file exists. In addition, if you want to write to a file that you know exists, you can test if it is writable by invoking canWrite().
The following example program shows how to use the File class to determine if a file exists and is writable. It takes a list of files as command line arguments and for each file it prints out if the file exists as well as whether or not it is writable.
import java.io.*;
public final class FileExists {
public static final void main(String args[]) {
int filename;
if(args.length < 1) {
System.err.println("Usage: FileExists filename1 filename2 ...");
System.exit(1);
}
for(filename = 0; filename < args.length; filename++) {
File file;
file = new File(args[filename]);
if(file.exists()) {
System.out.print(args[filename] + " exists and is ");
if(file.canWrite())
System.out.println("writable.");
else
System.out.println("not writable.");
} else
System.out.println(args[filename] + " does not exist.");
}
}
}
Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled.
Your name/nickname
Your email
WebSite
Subject
(Maximum characters: 1200). You have 1200 characters left. | http://www.devx.com/tips/Tip/24111 | CC-MAIN-2018-17 | en | refinedweb |
Package cloudsql
Overview ▹
Overview ▾
Package cloudsql exposes access to Google Cloud SQL databases.
This package does not work in App Engine "flexible environment".
This package is intended for MySQL drivers to make App Engine-specific connections. Applications should use this package through database/sql: Select a pure Go MySQL driver that supports this package, and use sql.Open with protocol "cloudsql" and an address of the Cloud SQL instance.
A Go MySQL driver that has been tested to work well with Cloud SQL is the go-sql-driver:
import "database/sql" import _ "github.com/go-sql-driver/mysql" db, err := sql.Open("mysql", "user@cloudsql(project-id:instance-name)/dbname")
Another driver that works well with Cloud SQL is the mymysql driver:
import "database/sql" import _ "github.com/ziutek/mymysql/godrv" db, err := sql.Open("mymysql", "cloudsql:instance-name*dbname/user/password")
Using either of these drivers, you can perform a standard SQL query. This example assumes there is a table named 'users' with columns 'first_name' and 'last_name':
rows, err := db.Query("SELECT first_name, last_name FROM users") if err != nil { log.Errorf(ctx, "db.Query: %v", err) } defer rows.Close() for rows.Next() { var firstName string var lastName string if err := rows.Scan(&firstName, &lastName); err != nil { log.Errorf(ctx, "rows.Scan: %v", err) continue } log.Infof(ctx, "First: %v - Last: %v", firstName, lastName) } if err := rows.Err(); err != nil { log.Errorf(ctx, "Row error: %v", err) }
Index ▹
Index ▾
Package files
cloudsql.go cloudsql_vm ¶
func Dial(instance string) (net.Conn, error)
Dial connects to the named Cloud SQL instance. | http://docs.activestate.com/activego/1.8/pkg/google.golang.org/appengine/cloudsql/ | CC-MAIN-2018-17 | en | refinedweb |
It's not the same without you
Join the community to find out what other Atlassian users are discussing, debating and creating.
Hi,
I set up a "send custom email script" post function via script runner and I would like to add the sender's email address in the signature of the email.
the sender refers to a user picker custom field. I created a scripted field to be able to get the email address to display it in the email.
In the scripted field, I wrote the following script :
import com.atlassian.jira.user.ApplicationUser
return issue.getEmailAddress("Change Request Owner Assignee")
I know something's missing but I did not manage to figure out the problem. i tried with String X= fieldX.getEmailAddress but in vain.
Any ideas?
Thanks a lot
getEmailAddress wants you to give it a user. You appear to be feeding it a string which looks like the name of a field.
You need to do something more like getEmailAddress($issue.getCustomFieldValue(fieldManager.getCustomFieldByName("CR owner assignee")))
Although there's better ways to do it in a script, this at least shows you why your current code does not work.
Hi,
Thanks for your answer. I tryed your code but unfortunately it still does not work...
Mel
If you want them in the signature then what Nic said
Hi mel
In the "CC issue fields" field you can add the id of your user picker custom field, you can find it <baseUrl>/rest/api/2/field, it should be something like customfield_xxxx
(hit preview to make sure that it returns the emails). No need to copy the emails to scripted fields.
Hi Thanos,
Thanks for your answer. yes I know that we can have the email cc but it's a "whim" I wanted to add in the email body like signatures in Outlook you. | https://community.atlassian.com/t5/Jira-Core-questions/how-to-get-a-user-custom-field-email-address-through-script/qaq-p/10172 | CC-MAIN-2018-17 | en | refinedweb |
Feedback
Getting Started
Discussions
Site operation discussions
Recent Posts
(new topic)
Departments
Courses
Research Papers
Design Docs
Quotations
Genealogical Diagrams
Archives
I'm looking for example type systems that can type list structure.
For a simple example... (Sorry, I think in code)
// Map two elts at a time, not two lists
def map2(alist:List, f(*,* -> 'u) -> 'u List) // f(*,*) is not good
def mklis(nl: 'u List, rest: List)
match rest
| a :: b :: r -> mklis(f(a,b) :: nl, r) // types of a and b?
| else -> reverse(nl)
in mklis(nil, alist);
def plist2alist(plist:List -> <List>List)
map2(plist, fn(a,b) a :: b :: nil);
plist2alist('(A, 1, B, 2, C, 3))
=> ((A,1),(B,2),(C,3))
It would be very nice to type plist's internal structure,
thus allowing for typing the map2 function, plist2alist()'s
resulting internal structure, etc.
I can sort of imagine some kind of regex typing construct,
but I have no clear ideas on this. Any languages out there
do a good job of typing repeating internal patterned structure
like this?
If this is impossible for any theoretical reason, I'd love
to know that too :-)
Many thanks.
Scott
Various systems have been proposed to type XML data, which is a very similar problem. The first example that comes to mind is XDuce, although I'm sure there are many others. XDuce at least does indeed use a notion of "regular expression types" as you suggest. I would look in this area if I were you.
you might want to bug cdiggins one of the posters around here and author of the Cat programming language, his whole deal is having a typed stack, which seems like a very similar problem.
I mucked around a bit. Sorry if the syntax seems funky, but the idea should come across. Also, if one can slip a constructor into argument list of ConsA, then ConsB could be moved into the definition of List2. This sort of achieves what we might want for a Lisp style property list, but it's not really a list anymore, using a new set of constructors.
class 'h 't ConsB(hd: 'h, tl: 't) ;
class 'a 'b List2 =
ConsA(hd: 'a, tl:ConsB('b, 'a 'b List2))
| Nil2
;
As it stands, it seems to me just an odd way to write ConsA('a, 'b, 'a 'b List2) A pattern match in the first case would look something like.
| ConsA(k,ConsB(v,r)) -> // Fool with k,v,r values
or this in the second case of a two headed Cons.
| ConsA(k,v,r) -> // Fool with k,v,r values
This gives us something like a well typed property list, but again, it's not really a list - we're not here imposing type structure on a series (repeating or not repeating) of Cons cells. I mean, they are Cons cells, in the ConsA/ConsB case, in the sense that they have head/tail - but they are different types (different constructors).
I can sort of imagine something like the above definition, but somehow just specifying additional constraints on the good old "regular" List and cons cells. Perhaps with would be a kind of subtyping or "inheritance", as in oop? Dunno.
The regular expression idea from XML is also intriguing. Seems like sort of the same problem - imposing typed structure on aggregates that use the same underlying compositional machinery.
You might want to take a look at nested data types.
data Twist a b = Nil | Cons a (Twist b a) deriving Show
map2 _ Nil = Nil
map2 f (Cons x (Cons y rest)) = Cons (f x y) (map2 f rest)
main = do let pl = (Cons "a" (Cons 1 (Cons "b" (Cons 2 (Cons "c" (Cons 3 Nil))))))
print (map2 (,) pl)
print (map2 (flip replicate) pl) | http://lambda-the-ultimate.org/node/2875 | CC-MAIN-2018-17 | en | refinedweb |
try_files is your friend, here you can do the order you want to try files
and finaly have the proxypass upstream.
I think you have to specify the Accept header with the media type you want
in addition to the Content-Type header that states what is the content type
of your request, NOT the content type of the response which indeed is set
by the Accept header
So use the Accept header instead of Content-Type header
There are some caveats in doing so for code readability but a possible
solution is as follow.
Define the serialization method. If you need to work with different clients
I suggest JSON.
Create a decorator and put it between your function and the route
@route(...)
@expandargs
def foo(id, bar, baz):
...
In the decorator use request.json() (automatically decodes the payload if
it's JSON) to expand the args and then you'll call the wrapped function
with original args and the new, say, **expandedargs (note the double
asterisks to explode the keywords).
Problems arise when mixing positional and keyword args.
You have a bunch of options including manually invoking the express
(connect, really) middleware functions yourself (really, go read the source
code. They are just functions and there is no deep magic to confuse you).
So:
function defaultContentTypeMiddleware (req, res, next) {
req.headers['content-type'] = req.headers['content-type'] ||
'application/json';
}
app.use(defaultContentTypeMiddleware);
app.use(express.bodyParser());
This is code needed on DOCUMENT_ROOT/.htaccess on new.com:
Options +FollowSymLinks -MultiViews
# Turn mod_rewrite on
RewriteEngine On
RewriteBase /
RewriteCond %{HTTP_HOST} ^(www.)?new.com$ [NC]
RewriteRule ^$ [R=301,L]
This is code needed on DOCUMENT_ROOT/.htaccess on old.com:
Options +FollowSymLinks -MultiViews
# Turn mod_rewrite on
RewriteEngine On
RewriteBase /
RewriteCond %{HTTP_HOST} ^(www.)?old.com$ [NC]
RewriteRule ^$ [R=301,L]
RewriteCond %{HTTP_HOST} ^(www.)?old.com$ [NC]
RewriteRule ^(.+)$ [R=301,L]
RewriteCond %{HTTP_HOST} ^([^.]+).old.com$ [NC]
RewriteRule ^$ [R=301,L]
RewriteCond %{HTTP_HOST} ^([^.]+).old.com$ [NC]
RewriteRule ^(.+)$ [R=301,L]
The easiest way would be to override the route for "/" to point to your
custom controller. Make decision there and either perform a redirect,
transfer the request or return varying results.
It could also be done on a lower level but this is way more complicated
(using custom route implementation, route handler etc. - similar to what
Orchard.Alias module does). Extending Orchard.Alias to take into account
custom logic in addition to or replacing the current simple path-matching
logic would be a way to go then.
Your server returns an XML content but says it returns HTML content
(content type is text/html according to the error message) and thus the
parsing failed. You need to make sure your server returns something like
text/xml and also that you have the correct converters in you rest template
object.
Edit: Try to add this message converter. Put it first (before
StringHttpMessageConverter and SourceHttpMessageConverter)
Jaxb2RootElementHttpMessageConverter jaxbMessageConverter = new
Jaxb2RootElementHttpMessageConverter();
List<MediaType> mediaTypes = new ArrayList<MediaType>();
mediaTypes.add(MediaType.TEXT_HTML);
jaxbMessageConverter.setSupportedMediaTypes(mediaTypes);
messageConverters .add(jaxbMessageConverter);.
contentType is the type of data you're sending, so application/json;
charset=utf-8 is a common one, as is application/x-www-form-urlencoded;
charset=UTF-8, which is the default.
dataType is what you're expecting back from the server: json, html, text,
etc. jQuery will use this to figure out how to populate the success
function's parameter.
If you're posting something like:
{"name":"John Doe"}
and expecting back:
{"success":true}
Then you should have:
var data = {"name":"John Doe"}
$.ajax({
dataType : "json",
contentType: "application/json; charset=utf-8",
data : JSON.stringify(data),
success : function(result) {
alert(result.success); // result is an object which is created from
the returned JSON
},
});
If you're expecting the following:
<div>S
Have,
Ok this is a bit too specific redirect but as far as I understand is you
don't need any thing more than this
location /local/airports/(.*)-2 {
return 301;
}
I think that your url is corrupted, you missed / and ?:
url:
'?'+"token="+token+"&account="+account+"&version=1.0&method=put",
Moreover you shuldn't use global urls (with http), because they are blocked
by browser...
url:
"?token="+token+"&account="+account+"&version=1.0&method=put",
The Accept header tells the server what your client wants in the response.
The Content-Type header tells the server what the client sends in the
request. So the two are not the same.
If the server only accepts application/json, you must send a request that
specifies the request content:
Content-Type: application/json
That's why your edited code works.
Edit
In your first code you use WebTarget.request(MediaType...
acceptedResponseTypes). The parameters of this method
define the accepted response media types.
You are using Innvocation.Builder.accept(MediaType... mediaTypes) on the
result of this method call. But accept() adds no new header, it is
unnecessary in your first code.
You never specify the content type of your request. Since the server
expects a Content-Type header, it
I use Firefox to send MPE requests via FormData regularly, and was doing so
without issue in 21 for some time. It has to be some plugin you have
installed, I would think. Disable all Firefox extensions and try again.
Assuming you have a class like this to represent the payload,
class Payload
{
public Dictionary<string, string> data { get; set; }
public string consolidationKey { get; set;}
public long expiresAfter { get; set; }
}
you can use HttpClient, like this.
string url = "";
var client = new HttpClient();
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
"Bearer", "token");
client.DefaultRequestHeaders.Accept.Add(
new MediaTypeWithQualityHeaderValue("application/json"));
client.DefaultRequestHeaders.Add("X-Amzn-Type-Version",
"[email protected]");
client.Default
I found workaround:
I hope somebody merge this fix to angularjs master.
You can bypass formatters entirely by reading the content yourself. Here's
an example:
public async Task Post()
{
string content = await Request.Content.ReadAsStringAsync();
// Store away the content
}
This doesn't require you to use or define any formatters at all..
First, create the following class:
public class ButtonAttribute : ActionMethodSelectorAttribute
{
public string ButtonName { get; set; }
public override bool IsValidForRequest(
ControllerContext controllerContext,
System.Reflection.MethodInfo methodInfo)
{
return controllerContext.Controller.ValueProvider
.GetValue(ButtonName) != null;
}
public ButtonAttribute(string buttonName)
{
ButtonName = buttonName;
}
}
Then, in the view, add the name attribute to your buttons:
<form action="~/Sample/Post" method="post">
<input type="text" name="x" value="test" />
<input type="number" name="y" value="123" />
<input type="submit" value="First button" name="submit1" />
<input type="su
Just looking at this quickly it seems like this line of code is wrong.
var result = client.PostAsJsonAsync<Dictionary<string,
Object>>(url, data).Result;
based on fact that you're saying amazon wants the post json to look like
this.
{"data":{"message":"value1","title":"value2"},"consolidationKey":"Some
Key","expiresAfter":86400}
It seems like you should be passing your var content to the post rather
than var data. I.e.
var result = client.PostAsJsonAsync<Dictionary<string,
Object>>(url, content).Result;
That should probably be:
data = HTTParty.post(url, :body => request, :headers =>
{"Content-Type" => "application/xml"})
Don't worry about content-length, that's HTTParty's job.
your code looks correct.
requests.post('', files={'name': 'John Doe'})
... and should send a 'multipart/form-data' Post.
and indeed, I get something like this posted:
Accept-Encoding: gzip, deflate, compress
Connection: close
Accept: */*
Content-Length: 188
Content-Type: multipart/form-data;
boundary=032a1ab685934650abbe059cb45d6ff3
User-Agent: python-requests/1.2.3 CPython/2.7.4 Linux/3.8.0-27-generic
--032a1ab685934650abbe059cb45d6ff3
Content-Disposition: form-data; name="name"; filename="name"
Content-Type: application/octet-stream
John Doe
--032a1ab685934650abbe059cb45d6ff3--
I have no idea why you'd get that weird Content-Type header:
Content-Type: application/x-pandoplugin
I would begin by removing Pando Web Plugin from your machine completely,
and then try
Access Control Origin things.
Setting of Content-Type header in the JavaScript was not allowed by
browser. The solution is to modify the response headers (example in Java):
HttpServletResponse hresp = (HttpServletResponse) resp;
hresp.addHeader("Access-Control-Allow-Origin", "*");
hresp.addHeader("Access-Control-Allow-Headers",
"X-Requested-With,Content-Type");
It is not jquery-specific, but it applies to every AJAX request as well
(Dojo or plain javascript).
As per the HTTP spec, you can send any content type you like in an HTTP
response as long as you provide the appropriate Content-type header.
The main benefit of JSON and XML over a plain query string is that they
support hierarchies and complex data structures, e.g.:
{"cars":[{"manufacturer":"Ford"}, {"manufacturer":"GM"}]}
or
<cars>
<car>
<manufacturer>Ford</manufacturer>
</car>
<car>
<manufacturer>GM</manufacturer>
</car>
</cars>
These kinds of structures are usually very useful for webservices, and
can't really be achieved with a plain query string.
$.ajax uses XDomainRequest in order to allow AJAX applications to make safe
cross-origin requests
In Internet Explorer 8, the XDomainRequest object was introduced. This
object allows AJAX applications to make safe cross-origin requests
directly by ensuring that HTTP Responses can only be read by the
current page if the data source indicates that the response is public
but unfortunately,
ap
Your message converter does not work with the native request but with a
HttpInputMessage parameter. That is a Spring class.
The inputMessage.getBody() is were your problem arises. By default, a
ServletServerHttpRequest (another Spring class) is used which has something
like this in its getBody() method:
public InputStream getBody() throws IOException {
if (isFormSubmittal(this.servletRequest)) {
return getFormBody(this.servletRequest);
}
else {
return this.servletRequest.getInputStream();
}
}
which delegates to a private implementation like this:
private InputStream getFormBody(HttpServletRequest request) throws
IOException {
ByteArrayOutputStream bos = new ByteArrayOutputStream();
Writer writer = new OutputStreamWriter(bos, FORM_CHARSET);
You should register each nib file you need in viewDidLoad, something like
this (substituting the correct names for the nib file and the identifier):
[self.collectionView registerNib:[UINib nibWithNibName:@"RDCell"
bundle:nil] forCellWithReuseIdentifier:@"FirstType"];
Then, in itemForRowAtIndexPath, test for type and return the correct type
of cell:
- (UICollectionViewCell *)collectionView:(UICollectionView
*)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath {
if (type = @"firstType") {
FirstCell *cell = (FirstCell *) [collectionView
dequeueReusableCellWithReuseIdentifier:@"FirstType"
forIndexPath:indexPath];
return cell;
}else{
SecondCell *cell = (SecondCell *) [collectionView
dequeueReusableCellWithReuseIdentifier:@"Sec
You can make course id and assign it to the userid so when some one will
login based on course they have chosen you can show them specific part of
site by checking course id from users table for that user.
try following instead:
<select id="selectField1" style="padding-left: 20px;width:150px">
<option value="option1">Cat</option>
<option value="option2">Dog</option>
<option value="option3">Lion</option>
<select id="selectField2" style="padding-left: 20px;width:150px">
<option value="option1">Cat</option>
<option value="option2">Dog</option>
<option value="option3">Lion</option>
<div id="option1" class="block">Felis catus</div>
<div id="option2" class="block">Canis lupus familiaris</div>
<div id="option3" class="block">Panthera leo</div>
$('.block').hide();
$('#selectField1,#selectField2').
You can get the post data via request.form.keys()[0] if content type is
application/x-www-form-urlencoded.
request.form is a multidict, whose keys contain the parsed post data.
Update
Your plnkr worked for me... sort of. I get the following response:
{"results":"Sorry, 'json' key expected in post data. For example { "json":
"{...}" }. Please check the Blitline examples."}
According to the docs:
A job is a collection of 1 or more functions to be performed on an
image. Data submitted to the job api must have a key of "json" and a
value that is a string. The string must contain properly formatted
JSON.
You should be submitting your POST in a format like this:
angular.module('myApp', ['blitline'])
.config(['$blitlineGlobalProvider', function($blitlineGlobalProvider) {
$blitlineGlobalProvider.options({
json: '{"application_id": "YOUR_ID","version": 2,"src":
"","functions": [{"name":
"resize_to_fit","
I tried something similar some time ago. I used JQuery to parse text as
Json or html. In case of HTML I appended it to directly to the DOM.
check parseHTML()
Have a look at this:
How can I make angular.js post data as form data instead of a request
payload?
Alternatively, you could do the following:
$http.post('file.php',{
'val': val
}).success(function(data){
console.log(data);
});
PHP
$post = json_decode(file_get_contents('php://input'));
$val = print_r($post->val,true);
What you describe is a standard master-detail interface. I guess the detail
view controller has a property defined in the .h file where you can set the
new 'detail item'. If not, you need to add one (and you must currently have
one in the init method or something).
Implement the setter method for that property, update the detail item and
then reloadData.
This way you're just updating the existing view controller rather than
trying to create a new one each time.
I'd recommend you create a new iPad (or universal) project in Xcode, select
the master-detail project template and have a look at the code it contains.
Either set the height of the cell and and set overflow:hidden;, which is
kinda ugly, or split up the long comment and put the extra into a hidden
span and use jquery to show hide it
fiddle
HTML
<table>
<tr>
<td valign="top" width="100px" >comment 1:</td>
<td class="smallComment">Blah blah blah blah blah blah blah
blah blah b
Find the tpl.php file that prints that panel layout from the panels module
(it can be found under "panels/plugins/layouts/YOUR_LAYOUT" folder) and
copy it in your theme folder.
In the tpl.php file add the php code the same way you edit drupal themes.
I'm assuming you are binding your TreeView to a list of items. If so, are
or can the first and second tier of items be of different data types?
Then, you can do a HierarchicalDataTemplate for your first tier type and a
DataTemplate for your second tier type as such:
<HierarchicalDataTemplate DataType="{x:Type local:FirstTierType}"
ItemsSource="{Binding Items}">
<StackPanel Orientation="Horizontal">
<TextBlock Text="{Binding Name}" />
</StackPanel>
</HierarchicalDataTemplate>
<DataTemplate DataType="{x:Type local:SecondTierType}">
<StackPanel Orientation="Horizontal">
<TextBlock Text="{Binding Name}" />
<StackPanel.ContextMenu>
<ContextMenu>
<MenuItem Header="wh
Posting a JSON object is quite easy in Angular. All you need to do is the
following:
Create a Javascript Object
I'll use your exact properties from your code.
var postObject = new Object();
postObject.userId = "testAgent2";
postObject.token = "testAgent2";
postObject.terminalInfo = "test2";
postObject.forceLogin = "false";
Post the object to the API
To post an object to an API you merely need a simple $http.post function.
See below:
$http.post("/path/to/api/", postObject).success(function(data){
//Callback function here.
//"data" is the response from the server.
});
Since JSON is the default method of posting to an API, there's no need to
reset that. See this link on $http shortcuts for more information.
With regards to your code specifically, try changing your save metho | http://www.w3hello.com/questions/Change-Nginx-redirection-rules-based-on-content-type-of-request | CC-MAIN-2018-17 | en | refinedweb |
OptionPane and ArraySize not known
Maureen Charlton
Ranch Hand
Joined: Oct 04, 2004
Posts: 218
posted
Oct 26, 2004 17:52:00
0
If I was to use
JOptionPane
to get user input, with the opportunity to type exit when the user has finished entering their data, And I wanted to store the data they had input in an array would I have to use a dynamic array? (As I would need the array to grow in size to accommodate the number of user input items to initialise the size of the array).
Or is their an alternative?
/*File name: GetDynamicArrayPerfect.java Requirement: Create an MS-DOS program to request, and then display, student names at the end. */ import javax.swing.JOptionPane; import java.util.ArrayList; class DynamicArray { //Member section //Private members String temp; //Public members //An array to hold the data for student name public static String StudentName [ ]; //Constructor section public DynamicArray( ) { //Array will grow as necessary StudentName = new String[1]; }//ends constructor DynamicArray //Method section //Get the value from the specified position in the array //Since all array position are initially zero, when the //specified position lies outside the //actual physical size of the data array, //a value of 0 is returned. public String get(int position) { if (position >= StudentName.length) { temp=StudentName[0]; } else { temp=StudentName[position]; } return temp; }//ends method get //Store the value in the specified position in the array //The data array will increase in size to include //this position, if necessary public void put(int position, String value) { if (position >= StudentName.length) { //The specified position is outside the actual size //of the data array. //Double the size, or if that still does not include //the specified position, set //the new size to 2*position. int newSize = 2* StudentName.length; if (position >=newSize) { newSize = 2 * position; } String newStudentName[ ] = new String[newSize]; System.arraycopy (StudentName, 0, newStudentName, 0, StudentName.length); StudentName =newStudentName; //The following line is for testing purposes only. System.out.println("Test Size of dynamic array increasing: "+newSize); } StudentName[position]= value; System.out.println("\nTest Name in array and position: " +value+"\t" +position); }//ends method put }//ends class DynamicArray public class GetDynamicArrayPerfect { //This is the main program to obtain student details using //Java's JOptionPane class and call the showInputDialog method. public static void main(String [ ] args) { DynamicArray StudentName;//To hold the input Names int numCt = 0;//No. of student names stored String Name;//One of the names input StudentName = new DynamicArray( ); //GET DATA AND STORE IT IN ARRAY for (int lp = 0; lp<=numCt; lp++) { Name= JOptionPane.showInputDialog("Please enter Student Name: \n Enter 'Exit' to end program"); //EXIT LOOP IF "Exit" HAS BEEN ENTERED if (Name.compareToIgnoreCase("Exit")==0) break; else { //Store student details in the dynamic array StudentName.put(numCt, Name); numCt++; }//end if else }//end for statement to get data and store it in array //PRINT OUT HEADING TO SCREEN System.out.println("\nStudent Name:\tPos:\tEnrolled Course:\n=============\t====\t================"); //PRINT OUT TO SCREEN, ALL STUDENT DETAILS ENTERED for (int lp = 0; lp<numCt; lp++) { System.out.println(StudentName.get(lp)+ "\t\t"+lp ); }//end for statement System.out.println("\nThe number of Student Details entered =: "+numCt ); System.exit(0); //this is needed when using JOptionPane }//end main program }//end GetDynamicArray class
Layne Lund
Ranch Hand
Joined: Dec 06, 2001
Posts: 3061
posted
Oct 26, 2004 18:02:00
0
If you want to implement this yourself, it looks like you are on the right track. In that case, you don't need "import java.util.ArrayList". Alternatively, you can use
ArrayList
which already does this magic for you without any effort on your part.
Layne
Java API Documentation
The Java Tutorial
I agree. Here's the link:
subject: JOptionPane and ArraySize not known
Similar Threads
Hashing Storage
Method Hash
Arrays: Calling input from other methods and classes
Method: Return String Array position
Exception in thread "main" java.lang.Stack
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/397535/java/java/JOptionPane-ArraySize | CC-MAIN-2015-40 | en | refinedweb |
Post your Comment
Hibernate min() Function
Hibernate min() Function
This section contains an example of the HQL min() function.
min() function in HQL returns the smallest value from the
selected... keywords as the SQL to be written with the min()
function.
for example : min
JPA Min Function
JPA Min Function
In this section, you will learn about the min function
of JPA... result.
JPA Min Function:
Query query=em.createQuery(
"
Hibernate Min() Function (Aggregate Functions)
Hibernate Min() Function (Aggregate Functions)
... the Min()
function. Hibernate supports multiple aggregate functions. When... criteria.
Following is a aggregate function (min() function
how to calculate max and min - RUP
function max and min in java.
public class MathMaxMin {
public static...));
// Method for the min number among the pair of number
System.out.println("Method...
Thanks
Hi friend,
In Math class having two function max
Min Value
Min Value How to access min value of column from database table... min(id) from QuestionModel result_info";
Query query = session.createQuery(selectQuery);
System.out.println("Min Value : " +query.list().get(0
MySQL Aggregate Function
|
+----------+
The 'MIN' function is used to find... MySQL Aggregate Function
This example illustrates how use the aggregate function
min project with vb - WebSevices
min project with vb i have do the min project with VB Frantend and oracle backend
JavaScript min method
JavaScript min method
..., secondValue, thirdValue,
..........nValue);
This min() method can be used...; it calls the function findMinimum() as
we have defined in the JavaScript of our code
Using Min, Max and Count on HQL.
Using Min, Max and Count on HQL. How to use Min, Max and Count on HQL
Hibernate criteria query using Min().
Hibernate criteria query using Min(). How to find minimum value in Hibernate criteria query using Projection?
You can find out minimum...);
}
}
}
Output:
Hibernate: select min(this_.salary) as y0
function
function difference between function overloading and operator overloading
Mysql Min Max
Mysql Min Max
Mysql Min Max is useful to find the minimum and maximum records from a table.
Understand with Example
The Tutorial illustrate an example from Mysql Min Max
Java bigdecimal min example
Java bigdecimal min example
Example below demonstrates the working of bigdecimal
class min() method. Java min method analysis the bigdecimal objects value and returns
SQL Aggregate Functions List
;
SQL Aggregate Functions List describe you the Aggregate Function List
Queries. The Aggregate Function include the average, count, min, max, sum etc...; count (id) Function is a aggregate function that return the sum
number of records
Hibernate Avg() Function (Aggregate Functions)
Hibernate Avg() Function (Aggregate Functions)
... to use the avg()
function. Hibernate supports multiple aggregate functions. When...(...),
sum(...), min(...), max(...) , count(*), count(...), count(distinct
Hibernate Max() Function (Aggregate Functions)
Hibernate Max() Function (Aggregate Functions... to use the Max()
function. Hibernate supports multiple aggregate functions. When...(...),
sum(...), min(...), max(...) , count(*), count(...), count(distinct
hibernate criteria Max Min Average Result Example
hibernate criteria Max Min Average Result Example
In this Example, We... the Projections class max, min ,avg methods.
Here is the simple Example code files...");
}
if (i == 1) {
System.out.print("Min Salary
Post your Comment | http://roseindia.net/discussion/17861-JPA-Min-Function.html | CC-MAIN-2015-40 | en | refinedweb |
This scenario provides the basic configuration and steps necessary to implement fabric monitoring in a private cloud based on Microsoft Windows Server 2012 and Microsoft System Center 2012.
This scenario uses the following System Center components in addition to Windows Server 2012.
The scenario assumes that these components are already installed and configured and working properly.
It is beyond the scope of this scenario to provide basic deployment and configuration information for these components.
You can refer to the individual documentation for each component for this information.
Fabric monitoring is one feature of
Fabric Management. It refers to monitoring those resources that support the private cloud to ensure that they are available and providing adequate performance.
In keeping with the goals of a private cloud, the monitoring needs to be as automated as possible and provide the ability to remediate issues in addition to simply detecting and reporting them.
The fabric of a private cloud includes the infrastructure required to deliver cloud services.
This includes physical components such as the host servers, storage devices, network devices, and components of the facility such as power.
It also includes virtualized components such as virtual machines, virtual disks, and virtual network components.
Operations Manager is the primary component providing monitoring services in a System Center 2012 environment, and it is the central component of this solution.
Service Manager provides complete incident management and correlation of incidents to the configuration management database. By integrating the two components, alerts in Operations Manager can automatically create incidents in Service Manager where they
can be managed by support personnel.
In addition to detecting and managing issues, an automated monitoring solution should include the ability to remediate detected issues.
Orchestrator provides runbooks that can perform such remediation and that can be launched from Service Manager either automatically or by an operator.
The following table summarizes the function of each component in this solution., architected, and managed based on requirements defined by a private organization.
The scenario assumes that you have installed and configured the System Components according to their individual requirements.
It does not assume that you have configured integration between them as this configuration is included as part of the scenario.
Further information on the integration between the different System Center components is available in the
System Center 2012 Integration Guide hosted on the Microsoft TechNet wiki at .
Implementing monitoring for the physical resources of the cloud primarily includes implementing standard features of Operations Manager.
This includes deploying agents and management packs for the physical computers and devices and discovering network components.
The first task in implementing monitoring for servers and blade systems is
deploying Operations Manager agents to the physical host computers.
The agent is typically included in the standard server build configuration so that monitoring can be performed immediately upon deployment of a new physical server.
Configuring Active Directory so that the agent can query for a management group assignment can assist in the agent locating the correct management group to perform initial monitoring.
When the agents are deployed, then you need to
install management packs to provide required monitoring. Monitoring the operating system and services such as Hyper-V can be performed by installing standard management packs including the following:
Monitoring of unique aspects of the physical computer must be performed by management packs specific to the brand of device, such as HP and Dell.
These management packs can be located in the
management pack catalog for the particular equipment that your private cloud is based on.
Storage devices are monitored by the operating system management packs for basic issues such as availability and free space.
Detailed monitoring of the physical devices though require management packs specific to the brand of device such as NetApp, HP, and EMC.
These management packs can be located in the
management pack catalog for the particular equipment that your private cloud is built on.
Monitoring of network devices is a standard feature of System Center 2012 Operations Manager, and a variety of different manufacturer devices are monitored without requiring additional
management packs. Before a network device can be monitored,
it needs to be discovered. Operations Manager allows you to explicitly discover individual devices or perform a recursive query that can run on a schedule to identify new devices as they are introduced into the environment.
Using this feature, new devices can be automatically discovered and monitored with no operator intervention.
Monitoring of virtual devices includes monitoring of the virtual machines, virtual disks, and virtual network devices.
These are provisioned and managed by System Center 2012 Virtual Machine Manager, so monitoring of VMM is the primary task required to monitor these resources.
In addition to installing the
System Center Monitoring Pack for System Center 2012 - Virtual Machine Manager in order for Operations Manager to discover and monitor VMM components, you must
configure VMM to interact with an Operations Manager management server.
VMM performs some actions using the Operations Manager SDK that are typically performed with management packs for other products, and this is configuration required beyond installing the VMM management pack.
Part of the configuration of the VMM management pack is enabling
Physical Resource Optimization (PRO). Management packs that leverage this feature are able to access data from Operations Manager to be exposed in the VMM console. They can also perform automated actions in response to particular conditions.
Vendors may provide PRO Enabled management packs for their applications or services.
While Operations Manager specializes in detecting issues and collecting operational data, Service Manager provides complete management of the lifecycle of detected incidents. By integrating the two components, resources discovered by Operations Manager
can be managed by Service manager. In addition, alerts created by Operations Manager can automatically create corresponding incidents in Service Manager and then kept in synchronization between the two tools. Operations Manager integrates with Service Manager
through two types of connectors that are both created and configured in the Service Manager console.
The Configuration Items connector imports objects from Operations Manager as Configuration Items in Service Manager.
Discoveries in Operations Manager locate resources and their properties on managed computers, and the connector allows these objects to be automatically imported into Service Manager.
Any instance of a discovered class in Operations Manager that derives from a common set of classes will be imported into Service Manager.
The only requirement is that you
import the management pack that includes the class definitions that you want to import.
The standard set of management packs are
imported using a Windows PowerShell script. Instances of classes that do not derive from this common set classes will not be imported unless you
add their class to the Allowed List.
The Alerts connector imports alerts from Operations Manager into Service Manager so they can be managed as incidents.
The incident in Service Manager remains in synchronization with the alert in Operations Manager allowing updates and resolution to be performed on either side.
In order to synchronize alerts in Operations Manager with Incidents in Service Manager, you must
configure a Connector in Service Manager and a Subscription in Operations Manager.
The subscription defines which alerts will be forwarded to Service Manager.
This may be as simple as forwarding all new alerts with a Critical severity, or you can provide more granular criteria to define only a subset of alerts that are managed as incidents.
The connector in Service Manager defines what to do with the alert once it’s forwarded.
You can assign a template to different types of alerts in order to provide such details as who to assign the incident, its severity, and what CIs are affected.
A typical strategy is to initially assign all incidents a basic template and manually provide these details as support personnel are able to review the incident.
As you gain experience with the types of alerts that you are experiencing, then more granular templates may be created in order to automate these details.
Management packs in Operations Manager specialize in detecting issues and collecting performance and operations data.
When an incident is assigned to a technician in Service Manager, an operator will be required to analyze the resulting alert and perform the steps required to correct the problem.
Once the operator validates that the problem has been resolved, they resolve the incident accordingly.
Runbooks in Orchestrator can be used to perform automated remediation for certain issues.
Using a runbook, a problem can be corrected and the resulting incident resolved with minimal operator intervention.
A well written runbook will validate that the conditions exist indicating the issue it is designed to correct, perform the required remediation steps, validate that the correction has been made, and then resolve any open alerts or incidents.
Standard runbook activities installed with Orchestrator allow you to perform basic operations with Windows Server 2012, but you must have an Integration Pack installed in order to use
activities designed to interact with another application or service. You should install at least the following Integration Packs to interact with the other System Center components.
You should also have the following integration packs installed to interact with basic services of the fabric.
You may require other Integration Packs to interact with other components and services in your environment.
You can typically identify these Integration Packs in the
Technet Library but may need to contact the vendor directly. Since the VMM management pack will allow you to interact with virtual resources such as storage and network, its Integration Pack can often be used to interact with these resources.
For those resources with no Integration Pack available, you can use the
Run .Net Script activity to run a Windows Powershell script that accesses the resource.
There is no current standard set of runbooks for remediating common issues in the cloud fabric.
You will have to create runbooks for your own environment or obtain them from other vendors.
A standard process is to periodically analyze issues that have occurred with the cloud fabric and determine whether a runbook could be created to automate their remediation.
In Orchestrator, Runbooks can be started from the
Runbook Designer or
Orchestration Console. If they are
imported into Service Manager though, they can be included in a
Runbook Automation Activity Template where they can either be launched manually by an operator or automatically as soon as an incident is created.
Importing runbooks into Service Manager has the following advantages over running them from the Runbook Designer or Orchestration Console.
In order to import runbooks into Service Manager, you must
create a runbook connector. The connector will connect to the Orchestrator Web Console server at periodic intervals and import runbooks.
To use a runbook that has been imported into Service Manager, you must create a
Runbook Automation Activity Template.
This will allow you to define settings for the runbook and to map values into any parameters required by the runbook.
The template can then be used with a Work Item such as a Service Request.
In order to associate the runbook with an incident opened by Operations Manager, you must add the runbook activity template to a
template based on the Incident class.
These are the only templates that are available to use in Alert Routing Rules in the Operations Manager connector.
Once the Incident template is created, you can
add an Alert Routing Rule providing the criteria of the alerts that the runbook should be associated with. | http://social.technet.microsoft.com/wiki/contents/articles/14920.system-center-2012-scenario-fabric-monitoring.aspx | CC-MAIN-2015-40 | en | refinedweb |
Docker, an open source project providing a way to automate the deployment of Linux applications inside portable containers, is generating a lot of excitement. The underlying approach is not new: We've been using containers for years to componentize whole systems, abstracting them from the physical platform, so you can move them from platform to platform. But Docker brings the container approach to the cloud, so you can move Linux applications from cloud to cloud.
Docker is a much lighter-weight approach to application portability than we've had before. Docker extends a common container format called Linux Containers with a Linux kernel and a high-level API that together run processes in isolation: CPU, memory, I/O, network, and so on. Docker also provides namespaces to completely isolate an application's view of the operating environment, including process trees, network, user IDs, and file systems.
[. ]
Docker's lightweight platform abstraction is much more efficient than traditional VMs for creating workload bundles that are transportable from cloud to cloud. In many cases, virtualization is too cumbersome for cloud portability.
However, it will take some time before we see developers and cloud platform providers getting good at using Docker. Thus, you may want to take a wait-and-see approach, at least this year. Chances are more technology will hit the streets using Docker, making it more bulletproof.
But as Docker matures, it'll become a key part of your cloud environment.
This article, "Docker will make cloud apps portable -- next year," originally appeared at InfoWorld.com. Read more of David Linthicum's Cloud Computing blog and track the latest developments in cloud computing at InfoWorld.com. For the latest business technology news, follow InfoWorld.com on Twitter. | http://www.infoworld.com/article/2607751/cloud-computing/docker-will-make-cloud-apps-portable----next-year.html | CC-MAIN-2015-40 | en | refinedweb |
This belongs here. So hard.
Rated 3 / 5 stars
it was meh.... but still a better ending than ME3 !
Rated 5 / 5 stars
I have never played any ms game
Rated 3.5 / 5 stars
def one of the better alt endings : )
Rated 4 / 5 stars
This is hilarious. Truly. I've heard alot about Mass Effect series, but this... might be the funniest thing to come of it, even though its a pathetic attempt at bad mouthing it. Bravo to potty mouths and brutality!
Out of the mouth of babes and potty mouths . . . | http://www.newgrounds.com/portal/view/594039/review_page/5 | CC-MAIN-2015-40 | en | refinedweb |
Known Issues for JDeveloper and ADF 11g 11.1.1.3.0 (PS2)
last updated: 26-APR-10!
Please read the installation guide for details on system requirements and specific installation instructions for various platforms.
For internal use, JDeveloper may sometimes insert emeacache.uk.oracle.com as the proxy server. This setting should only happen temporarily and only for internal users. However, there have been some reports of external customers also seeing this proxy setting which can lead to errors during deployment outside the Oracle network. The workaround is to remove (or fix) the proxy settings in Tools > Preferences, Web Browsers and Proxy.
After migrating IDE settings from a previous version of JDeveloper, the Tab Size setting in Tools > Preferences, Code Editor, Code Style is ignored.
The Windows .bat scripts for managing WLS do not work if environment variables such as JAVA_HOME or CLASSPATH contain a ')'. The former happens when e.g. the 32bit JDK is installed in its default path, e.g. "C:\Program Files (x86)\Java\jdk1.6.0_18". The latter if an application such as Apple QuickTime adds itself to the CLASSPATH environment variable.
When trying to run the integrated WLS you will get an error while starting that instance, stating that
"\Java\jdk1.6.0_18 was unexpected at this time."
The integrated WLS will not start as a result of that.
This also happens if the scripts are being run manually on command prompt.
Workaround is to install the JDK and programs such as Apple QuickTime in a path without ')'.
You can only create database objects (tables, columns, etc) in Oracle Lite. You cannot drop or update database objects,
unless the update is to create an object. This is because of current limitations in the Type 2 and Type 4 drivers for Oracle Lite.
You cannot perform any CRUD operations using SQL Worksheet for Oracle Lite databases. This is because of current limitations in the Type 2 and Type 4 drivers for Oracle Lite.
When you create a connection to Apache Derby, the default schema is not initialized for database diagrams. This means that when you try to create a database object such as a table on a database diagram for a Derby database, the table is not created.
The workaround is to:
JDeveloper does not support the following Oracle Database 11g Release 2 features:
If the object name or schema name contains the string IS or AS, the generated SQL is corrupt.
The workaround is to avoid using IS or AS in the object name or schema name, or to change the case of the string IS or AS. For example, use one of "is", "Is", or .
When testing Web services in JDeveloper, if ns1:nested2 namespace, as shown below:
displaying implied hyper links in the content view of the HTTP Analyzer you will see that on the Windows platform for every line you are down the document the links are offset by one character. This is no current workaround for this issue.
Excel validation does not get applied to the placeholder row in an ADF Table component that contains data. (bug 7509432)
At this point, an unhandled exception occurs.
Workaround:. (bug 6148067)
This behavior is expected because Oracle ADF Desktop Integration modifies an integrated Excel workbook each time a user opens it.. (bug 7511508)
Summary: Excel disables ADFdi Button components in integrated worksheets when "zoom" is set to any value other than 100%.
Workaround: Set Excel's Zoom level to 100% so that ADF Button components can function correctly in integrated Excel workbooks. (bug 8305907)
Note: Workbook developers can avoid this issue by using worksheet menu items instead of buttons.
Oracle ADF Desktop Integration does not support the conditional formatting features provided in Excel by clicking Home > Conditional Formatting.
(bug 6869548)
Workaround: underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.
The remote certificate is invalid according to the validation procedure.
Solution: View the certificate and install it.Known issues involving workbook download..
http\://xmlns.oracle.com/adf/activedata/config=oracle.adfinternal.view.faces.activedata.ActiveDataConfiguration$ActiveDataConfigCallback
ADF Faces components expect applications to use primary keys on the model which are pre-populated for new records and do not change with any record updates., the userActivity data for previous requests may be partially or sometimes completely unavailable when accessed from pages with a different URL than when it was originally saved.:
In current releases of Oracle JDeveloper 11g R1, the adf-config.properties file and the .adf\META-INF\services directory don't exist by default. Developers who wish to use active data service (ADS) in the user interface project need to create the file and the directory manually. The ads-config.properties file is needed for ADFShare MBean to parse ADS related configuration setup in the adf-config.xml file.
To create the ads-config.properties file, select the user interface project and choose File | New from the Oracle JDeveloper menu. In the General category, select File and click OK. In the Create File dialog, enter adf-config.properties as the file name. In the Directory field, browse to the .adf\META-INF\ directory and append services to the path to create the folder that will contain the adf-config.properties file. For example, the directory path in the dialog might look like this C:\JDeveloper\mywork\MyApplication\.adf\META-INF\services. Click OK to close the dialog. In the Application Navigator, expand the ADF META-INF node and the new services node, and then double-click the adf-config.properties file. In the editor, add the following line to the contents of the file and save it.
/Add the value as a single string with no line breaks.
http\://xmlns.oracle.com/adf/activedata/config=oracle.adfinternal.view.faces.activedata.ActiveDataConfiguration$ActiveDataConfigCallback
When you want to access an ADF Business Components web service in a consuming application (the client application) or when you want to test the service using a Java test client in the same application as the service, you must ensure that the correct libraries appear on the project's classpath. In the Application Navigator, double-click the project and in the Project Properties dialog, select Libraries and Classpath and confirm the following libraries appear in this order:
Note that JAX-WS Client library must appear last in the list to ensure Oracle's implementation of javax.xml.ws.spi.Provider is found. If Oracle's implementation is not used (defaults to Sun's implementation), a runtime service exception error will result.
Web applications using getUserPrincipalName() from ApplicationModuleImpl or SessionImpl may get 'anonymous' after user already logged in.
Workaround:
1) Use ADFContext.getCurrent().getSecurityContext().getUserName() instead.
2) If issue is on history column only, then entities with history column kinds Created By and Modified By should override getHistoryContextForAttribute() in the entity impl class as follow:
If entity has ‘Created By” or ‘Modified By” history columns, then override getHistoryContextForAttribute(AttributeDefImpl attr) in the entity impl class that has history column(s) as follows:> < . . . >
When you create a mobile browser JSF page (by checking "Render for Mobile" check box when creating JSF page), the default component set selected in the Component Palette is typically not "Trinidad". It may be JSF or even ADF Faces. Do not use JSF or ADF Faces, but always select "Trinidad" or "Data Visualization Core" component set.
A Developer Preview version of Oracle ADF Mobile client is available as a separate download from the JDeveloper Update Center (Help > Check for Updates). For a list of known issues for ADF Mobile Client, please refer to the ADF Mobile Client Release Notes. | http://www.oracle.com/technetwork/developer-tools/jdev/knownissues-086971.html | CC-MAIN-2015-40 | en | refinedweb |
NAME
login, logout - write utmp and wtmp entries
SYNOPSIS
#include <utmp.h> void login(const struct utmp *ut); int logout(const char *ut_line); Link with -lutil.
DESCRIPTION tty, and stores the corresponding pathname minus a possible leading /dev/ into this field, and then writes the struct to the utmp file. On the other hand, if no tty. | http://manpages.ubuntu.com/manpages/precise/man3/logout.3.html | CC-MAIN-2015-40 | en | refinedweb |
Overview
MongoDB is a NoSQL document storage database that has quickly become one of the default databases that developers use when creating new applications. The ease of use of the database system and the JSON document storage implementation are just a few of the reasons that developers are flocking to this NoSQL database. With a plethora of drivers available for most programming languages, getting up and running with this database is often a trivial task. Is this blog post, we will show the steps for deploying a MongoDB instance using the popular MongoLab database hosting provider and then integrating that database with a PHP application that is deployed on OpenShift.
MongoLab offers free MongoDB instances for developers to try their service in order to get a good understanding of the platform in which they host your database. Signing up is quick and easy as you only need to provide an email address and password. In order to sign up, head over to the MongoLab website and click the sign up button.
After you have created an account, you will be presented with a screen to create your first database.
In the above image, I created a database with the name of mydb and selected Amazon EC2 as my hosting provider. I chose EC2 East as that is where OpenShift Online gears are located by default. This will provide the lowest latency possible between my application and the database.
I also created a database user for my application to use.
After the database has been created, you can view information about the database as well as perform administration tasks such as user management and backup scheduling by using their web console. You will also be provided with the connection URL for your database. Make a note of the connection URL as we will be using it later in the blog post.
Testing the connection
Now that you have the connection URL for your MongoDB database that is hosted by MongoLab, we can verify that everything is working correctly by connection to the remote database from our local machine.
Note: In order to test the connection from your local machine, you will need to the mongo client installed locally. If you do not have this installed, you can skip this step.
Creating an OpenShift Online account
Step 1: Create an OpenShift Account
If you don’t already have an OpenShift account, head on over to the website and signup.: Create an OpenShift application
Now that we have an OpenShift account and the client tools installed, lets get started building an application to use our MongoDB instance. The first thing we need to do is create a gear that will hold our application code and database. I will be using the command line tools that we installed in step 2, but you can perform the same action via the web console or using our IDE integration.
$ rhc app create mymongoapp php-5.3
This will create an application container for us, called a gear, and setup all of the required SELinux policies and cgroup configuration. OpenShift will also setup a private git repository for you and propagate your DNS out world wide. This whole process should take about 30 seconds.
Once the application has been has been created, you can verify that it is working properly by loading the following URL in your web browser:
http://{yourAppName}-{yourNamespace}.rhcloud.com
For example, if you named your application mymongoapp, like the example above and if your namespace or domain is mycoolapps, the url would be:
Adding PHP and the CodeIgniter Framework
In order to modify the existing PHP application that was created by default with support for CodeIgniter, you can issue the following command:
cd mymongoapp git remote add upstream -m master git://github.com/openshift/CodeIgniterQuickStart.git git pull -s recursive -X theirs upstream master
Note: You can also download and push the CI framework yourself but I have provided the above git repository as a convenience.
Adding MongoDB support to CodeIgniter
Now that we have an application created, mongodb added, and CodeIgniter ready to go, the next step is to add mongodb support to the framework. By default, OpenShift already includes a php mongodb driver so you don’t have to worry about that aspect.
For this blog post, I am going to use a mongodb library written by Alex Bilbie, which is available on github.
Getting up and running with this library is pretty straightforward. Simply add the file Mongo_db.php to your /application/libraries folder and the file mongodb.php to your /application/config folder.
Once you have copied over those files, edit the mongodb.php file located in the mymongoapp/php/application/config directory and make the following changes using the information provided to you by MongoLab:
$config['mongo_host'] = $config['mongo_port'] = $config['mongo_db'] = $config['mongo_user'] = $config['mongo_pass'] =
Once you have finished adding the files and modifying the configuration file, add the files to git and commit the changes.
$ cd mymongoapp $ git add . $ git commit -am "Adding mongodb library" $ git push
Creating a Model, View, and Controller
If you are new to programming you may not have even heard of the MVC design pattern. MVC stands for Model-View-Controller and is a design pattern whose fundamental principle is that developers should separate the presentation from the business logic of an application. Many frameworks employ MVC at the core of the framework and CodeIgniter is no exception.
Create the Model
Models are PHP classes that are designed to work with information in your database. For example, let’s say you want to use CodeIgniter to create a simple user account and then search for users at a later point. Create a file under the mymongoapp/php/application/models directory and name the file usermodel.php. The source code for that file should look like this:
<?php class UserModel extends CI_Model { function __construct() { parent::__construct(); } function getUser($username) { $this->load->library('mongo_db'); $user = $this->mongo_db->get_where('users', array('username' => $username)); return $user; } function createUser($username, $password) { $this->load->library('mongo_db'); $user = array('username' => $username, 'password' => $password); $this->mongo_db->insert('users', $user); } } ?>
Let’s examine the createUser function that we defined above. The first thing we decide is that any person calling this API method will be expected to provide a username and a password for the user account that is to be created. Note that since this is just a simple example that we are not performing any validation on the createUser function. Typically you would do some routine checks to ensure the user doesn’t already have an account or to ensure the username and password conform to your password length standards.
The first code snippet in the function loads the mongodb library that we included in our application above. This call will search for a php library with the specified name in the libraries directory which is located under the application directory.
We then create a user array that will hold the data we want to insert into our mongodb database. In this case, we are just using the parameters that were passed in to the function call.
The next statement is where we actually insert the data into our database:
$this->mongo_db->insert('users', $user);
This code says to use the mongo_db library that we loaded and perform an insert on the users collection with the $user array acting as the data. Keep in mind that the users collection doesn’t have to exist for this call to work correctly. If the collection is not already there, mongodb will happily create the collection for you and then insert the data.
Create the Controller
If you are not familiar with CodeIgniter, the controller classes live under the php/application/controllers directory. For this example, we are going to create a new controller called MongoTest.php with the following contents:
<?php class MongoTest extends CI_Controller { public function index() { $this->load->model('usermodel'); $result = $this->usermodel->createUser("myuser", "mypassword"); $this->load->view('mongodbtest_view'); } } ?>
The index function is the default function that will get called when a HTTP request is initiated for this URL. The first thing we do is load the usermodel that we created in the previous step. This will allow us to call directly into functions that we have specified in that model class.
We then call createUser and pass in a username of myuser and a password of mypassword. This will in turn call the model creatUser function that we created in the previous step and perform the insert into the database.
The last code snippet tells the framework which view display to the user. In the case we are going to display a view that we will create in the next step.
Create the view
All view and presentation logic should be located under the mymongoapp/php/application/view directory. Create a file called mongodbtest_view.php and add the following contents:
<!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>Welcome to CodeIgniter running mongodb</title> <style type="text/css"> ::selection{ background-color: #E13300; color: white; } ::moz-selection{ background-color: #E13300; color: white; } ::webkit> <div id="container"> <h1>Welcome to CodeIgniter with MongoDB running on OpenShift</h1> <div id="body"> <p>Congratulations. You have CodeIgniter and MongoDB running on OpenShift!</p> </div> <p class="footer">Page rendered in <strong>{elapsed_time}</strong> seconds</p> </div> </body> </html>
Deploy and test your application
At this point we have created an application, added mongodb hosted at MongoLab, installed CodeIgniter, added the mongodb library to our application, and created a MVC sample application. Now comes the fun part, testing our changes. Change directories to the top level of your application, which should be the mymongoapp directory and perform the following commands:
$ git add . $ git commit -am "Created MVC application" $ git push
At this point, you should see a congratulations message on the screen.
What’s Next?
- Get your own private Platform As a Service (PaaS)
by evaluating OpenShift Enterprise
- Need Help? Ask the OpenShift Community your questions in the forums
- Showcase your awesome app in the OpenShift Developer Spotlight.
Get in the OpenShift Application Gallery today. | https://blog.openshift.com/getting-started-with-mongodb-mongolab-php-and-openshift/ | CC-MAIN-2015-40 | en | refinedweb |
Using REST with Oracle Service Bus
By james.bayer on Jul 28, 2008
Overview
Oracle Service Bus (OSB), the rebranded name for BEA's AquaLogic Service Bus, is one of the products I help customers with frequently. Recently questions from both customers and internally at Oracle have been increasing about OSB support for REST. In this entry I cover the current and future state of REST support in OSB.
Background
REST can be a loaded term. There are a couple of InfoQ articles introducing REST and REST Anti-Patterns that should be required reading for those considering a REST architecture. What does REST support really mean in the context of an Enterprise Service Bus?
A Service Bus applies mediation to both clients and providers of services. This results in a loosely coupled architecture, as well as other benefits. Being loosely coupled means that Providers and Consumers can now vary different aspects of a message exchange:
- Location of Provider/Consumer
- Transport (Http, Https, JMS, EJB, FTP, Email, etc)
- Message Format (SOAP, POX - Plain Old Xml, JSON, txt, etc)
- Invocation Style (Asynchronous, Synchronous)
- Security (Basic Auth, SSL, Web Service Security, etc)
Two aspects where Service Bus support for REST would apply in an example would be Transport and Message Format.
A Contrived But Realistic Example
Consider a contrived example where REST support in the Service Bus could easily come up. Assume an organization already has an Order SOAP based service that is mediated by the Service Bus. Currently this service is used by several applications from different business units. SOAP web services with well-known XSD contracts work great for an interchange format for existing systems. However, there is a new initiative to give management a mobile application to get order status. The sponsoring executive has heard that AJAX is important and tasked an intern to quickly build the front-end because of frustration with IT delays. The front-end is quickly mocked up using an AJAX framework and the executive tells the IT staff to deploy it immediately. The problem is that the client-side AJAX framework doesn't work with SOAP and XML, it was mocked up with JSON.
Instead of developing a second Order Service to support this new invocation style and format, OSB can be configured to mediate both the invocation style and the message format and reuse the perfectly working, tested, provisioned, etc Order Service. A new proxy could be configured to accept HTTP requests and map them to SOAP over HTTP. The response, which is normally XML described by XSD, can be converted to JSON.
OSB Capabilities
A common REST approach in practice is the use of HTTP verbs POST, GET, PUT, and DELETE to correspond to the CRUD methods of CREATE, READ, UPDATE, DELETE. So we can map those verbs to the Order Service, mapping DELETE to cancelOrder since it is against company policy to delete orders from the system.
The HTTP transport for inbound requests (a Proxy) in the Service Bus provides a mechanism to intercept invocations to a user customized URI base path. So if my server binds to the context "user_configured_path" and localhost and port 7001, then the HTTP Proxy intercepts all HTTP GET and POST calls to . This includes suffixes to that path such as .
AquaLogic Service Bus 3.0 (as well as 2.x releases of ALSB) supports GET and POST use cases, but does not have developer friendly support for HTTP verbs PUT and DELETE out of the box. For GET requests, pertinent information is available in the $inbound/ctx:transport/ctx:request/http:relative-URI variable. To parse query string information, use $inbound/ctx:transport/ctx:request/http:query-string. A common pattern for HTTP GET is to use XQuery parsing functions on that variable to extract extra path detail, but the $body variable will be empty. For POST requests, the same applies, but the $body variable may have a message payload submitted as xml, text, or some other format.
I have heard from product management that first class support for HTTP verbs such as PUT and DELETE is planned in the next release of OSB, which is scheduled sometime in the Fall of 2008. For customers that require PUT and DELETE verb support immediately, OSB does have a transport SDK, which could be used to add inbound and outbound REST support. That approach would require some development. There are other inbound options besides the transport SDK, such as a basic approach of deploying an HTTP servlet with doDelete() and doPut() implemented. This servlet could be deployed on OSB instances and forward to specific proxies to provide PUT and DELETE support. An outbound option would be to use a java call-out to invoke a REST style provider.
Returning to the example for a moment, the executive dashboard only requires order status in the first release, so only HTTP GET support is necessary. In the existing SOAP-based Order Service, the interaction looks like this:
An equivalent REST-ful URL might look like where 123 is the OrderID. Clients using this interface might expect a JSON response like this:
{
"Status":"Submitted",
"OrderID":"123",
"OrderDetailID":"345",
"CustomerID":"234"
}
Types like int are supported JSON values, although the above value types are strings as you can see by the double quotes. Let's just assume that strings are ok for now to make this easier. Here is a proxy configuration bound to the Order URI context:
Notice that the response type is "text" because we will convert the XML to JSON. I tried using the JSON java library from json.org to convert xml nodes in the response to a JSON format. One gotcha is namespace conversion. For simplicity I tested with xml nodes with no namespaces, and that worked. That code looks like this:
org.json.JSONObject j = org.json.XML.toJSONObject( xmlStringBuffer.toString() );
System.out.println( j.toString() );
There are other techniques to convert between XML and JSON that use convention to map namespaces to JSON structures, but that is for another time. Because my example used real-world xml with namespaces, I used a java callout to loop over child nodes of an XmlObject to build the JSON structure in java code. Let me know what you think and if you have considered alternative approaches. I've attached my code artifacts that are completely for illustrative purposes only and worked on my machine with ALSB 3.0 and Workshop 10.2.
For another example that works with currently released versions of Service Bus, check out "The Definitive Guide to SOA - BEA AquaLogic Service Bus" by Jeff Davies. In the book he has a Command example where a Proxy is used for a service that handles multiple types of commands. An updated version of the book should be available shortly.
Posted by Jay Blanton on August 14, 2008 at 03:04 AM PDT #
Posted by James Bayer on August 14, 2008 at 03:29 AM PDT #
Posted by Damien on October 23, 2008 at 05:57 PM PDT #
Posted by James Bayer on October 23, 2008 at 07:05 PM PDT #
Posted by Fran on February 13, 2009 at 03:05 AM PST #
Posted by James Bayer on February 19, 2009 at 05:24 AM PST #
Posted by linesh on February 23, 2009 at 05:33 PM PST #
Posted by James Bayer on February 23, 2009 at 08:43 PM PST #
Posted by Jason Kim on March 02, 2009 at 06:45 AM PST #
Posted by Emiliano Pecis on April 22, 2009 at 03:55 AM PDT #
Posted by James Bayer on April 22, 2009 at 04:02 AM PDT #
Posted by Tom Kies on June 30, 2009 at 12:40 PM PDT #
Posted by James Bayer on June 30, 2009 at 07:53 PM PDT #
Posted by Ajoy on November 08, 2009 at 12:14 PM PST #
Posted by james.bayer on November 08, 2009 at 08:51 PM PST #
Posted by Ardi on November 25, 2009 at 12:28 AM PST #
Posted by james.bayer on November 28, 2009 at 12:35 AM PST # | https://blogs.oracle.com/jamesbayer/entry/using_rest_with_oracle_service | CC-MAIN-2015-40 | en | refinedweb |
Details
- Type:
Bug
- Status: Resolved
- Priority:
Blocker
- Resolution: Fixed
- Affects Version/s: 1.1
-
- Component/s: Replication
- Labels:None
- Skill Level:Dont Know
Description.
Issue Links
- relates to
COUCHDB-477 Add database uuid's
- Open
Activity
- All
- Work Log
- History
- Activity
- Transitions
I'd rather see the replicator respect a naming field. CouchDB core places no specific significance on the replication documents, treating them as any other document in the _local/ namespace. And we've heard a number of times, especially in the last few weeks, about how config files (and specifically ones that change) are ugly.
I proposed UUIDs at the DB level a long time ago for this reason, and relatedly so that you could trigger push/pull without using HTTP at both sides and have it be the same replication (discover that you are a host via UUID). Configuration files would work to make it server level, but it's hacky. DB-level is a bad idea because sysadmins might copy couch files. Ultimately, the client should identify the replication if it can. I think that's the best solution.
Couch already has a unique identifier: its URL. I'm not sure another per-server UUID buys you much. With a second unique identifier, you can determine that (this couch has moved on the Internet || this couch has a configuration error). Maybe the first condition is more likely. Meh.
Couch already has a per-server UUID: _config/couch_httpd_auth/secret. Hashing this value can produce a public unique ID.
Normal users may not read the config, so you have to expose this value to them somehow. Is that a new global handler? Or is it added to the{"couchdb":"Welcome"}
response?
For these reasons, I'm not sure the solution is to assign a random UUID to the Couch..
@Jason
URL's are nice and all, but they're fairly unstable. What's the URL for a couch on a phone? Or after it changes networks? Or if its behind a proxy? And how would that couch figure out what's URL is?
Randall's comment about copying files is why its not viable to just generate a UUID for each database. Because its quite likely that a sysadmin would copy that file which would result in to db's having the same UUID making the UUID not so UI.
@Jason: The reason I filed this bug report is that the URL of a database in Couchbase Mobile isn't unique. It's barely meaningful at all; it's of the form "" where "nnnnn" is an upredictable port number assigned by the TCP stack at launch time. That URL isn't even exposed to the outside world because the CouchDB server is only listening on the loopback interface.
The state that's being represented by a replication ID is the contents of the database (at least as it was when it last synced.) So it seems the ID should be something that sticks to the database itself, not to any ephemeral manifestation like a URL.
@Paul, I overlooked or misunderstood Randal's point about duplicating the UUID. Makes sense.
@Paul and Jens, totally: URLs are unstable. And, in general, on the web, if a URL changes, that is a huge piece of information. That is a huge hint that you need to re-evaluate your assumptions. Exhibit A: the replicator starts from scratch if you give it a new URL. That is the correct general solution.
Your requirement to change the couch URL and everything is just fine and the application doesn't have to worry--I think that requirement is asking CouchDB to be a bad web citizen. In other words, IMO you have an application-level problem, not a couch problem, except that even if you could determine everything is alright, you can't specify the replication ID.
I think Damien just time-traveled again.
BTW, @Jens, you can already give Apache CouchDB a UUID for your own needs. If you have admin access, just put it anywhere, /_config/jens/uuid. If an unprivileged client must know this UUID, then change the "Welcome" mesage in _config/httpd_global_handlers/%2f. Any unprivileged client can find it there. (You could also place the UUID in the authentication realm in _config/httpd/WWW-Authenticate which actually sounds sort of appropriate.) Finally, if you are willing to run a fork (which mobile Couchbase is) then you could add any feature you need which doesn't make sense for Apache CouchDB.
link relevant older ticket
report.
Hi, Jens. Yes, I take your point to stick to the specific topic. Thanks.
To summarize my long rants: A Couch UUID already exists. A database UUID won't work. Allowing the client to provide the replication ID would be great!
The following patch solves the issue by using the same approach as the auth cookie handler - using a node uuid which is stored in .ini config.
The new replication ID generation no longer uses the hostname and port pair, but instead this uuid. It fallbacks to the current method when searching for existing checkpoints however (thanks to Randall's replication ID upgrade scheme).
Besides mobile, desktop couch (Ubuntu One) is another example where the port is dynamic. I've also seen a desktop running normal CouchDB where the ID for a replication was different the first time it was generated from subsequent times - I suspect it was an issue with inet:gethostname/0 but unable to prove it.
Personally I don't think using the .ini for a single parameter is that bad (unlike things like OAuth tokens for e.g.) like others commented here before - an alternative would be to store it in a local document of the replicator database for e.g.
I would very much like to see this issue fixed in 1.2.0.
I've picked up this task and prepared a branch with my work (1259-stable_replication_ids). This patch goes beyond filipe's original and applies to the source and target as well. If couchdb believes either is using a dynamic port (it's configurable, but defaults to any port in the 14192-65535 range), it will ask the server for its uuid (emitted in the / welcome message). If it has one, it uses that instead of the port (specifically it uses{Scheme, UserInfo, UUID, Path}
instead of the full url).
I was reading the comment, and not sure it's a probem. Irt's expected in a p2p world and master-master replication that the node at the end could change. What doesn't changes are the data inside the dbs. Imo tthis is the role of the application to handle port,ips, dns changes. Not couchdb. In short I would close this issue as a wontfix.
Benoit: I think you're misunderstanding the issue. This isn't something about P2P. It's just that if the local CouchDB is not listening on a fixed port number, then replications made by that server to/from another server aren't handled efficiently ... even though the local server's port number has nothing at all to do with the replication (since it's the one making the connections.)
In a real P2P case, this change makes even more sense, because the addresses of the servers are unimportant – as you said, it's the databases and their data that are the important thing. A UUID helps identify those.
I don't. The replication is fundamentally a peer to peer protocol. or master to master. whatever.
It is and should be expected that a node can disappear for many reasons. Relying on a server id is one of the possibilities to restart automatically a replication, other would use other parameters based on the ip and the location, etc... It is the responsibility of the application to know that. As a protocol the replication shouldn't force this way imo. As a client the replicator shouldn't too. It is already doing the best effort imo: ie restarting for the same ip, port and properties of the replication.
As a side note, this is one of the reason the 0 port could be used here instead of letting your application fix a port. 0 won't change. And you could rely with the application on a unique property in the replication properties.
> It is the responsibility of the application to know that. As a protocol the replication shouldn't force this way imo.
Then how does the application do this? I haven't seen any API for it.
Also, I don't see how this has anything to do with the case of a leaf node running a server that happens to have a dynamic port assignment. The port this node is running on has absolutely nothing to do with the replication. In the (now obsolete) case of Couchbase Mobile, the server doesn't even accept external requests, so its port number is purely an internal affair.
I still have a feeling that we're talking about completely different things. But I can't really figure out what your point is... application that handle the routing policy. A server id can be used as an adress point but some could decide to use other parameters to associate this replication id to a node.
Maybe instead of relying on a fixed node id, we could however introduce an arbitrary remote address id fixed on the node that handles the replication. This remote ID will be associated by to an host , port. The layer assigning this address id to the host/port could be switchable, so the application or user could introduce easily its own routing policy. Which could be relying on a server id or not..
> btw your example with couchbase mobile is generally solved by using the replication in pull mode only. So here it is relying on a fixed address to replicate.
sigh No, that is exactly the situation I was describing. The mobile client is the only one initiating replication; it pulls from the central (fixed-address) server, and pushes changes to it. So the mobile device's IP address and port are irrelevant, right? Except that the replication state document stored in local has an ID based on several things _including the local server's address and port number. So the effect is that, every time the app launches, all the replication state gets lost/invalidated, and it has to start over again the next time it replicates.
TouchDB doesn't have this problem because I didn't write it with this design flaw
Instead every local database has a UUID as suggested here, and that's used as part of the key.
Except if you are using either the same port on each devices (which is generally what does an application) or the ephemeral port "0" which is also the same then for reach replication. Also in that case you will also have to handle the full address change and security implications. Relying on a unique ids to continue the replication may be a design flaw or at least a non expected/wanted behaviour. What will prevents an hostile node to connect back to your node with the same id? How do you invalidate it?
I am pretty sure anyway we should let to the application or final user the choice of the routing policy. I will have a patch for that later in the day.
It seems like overkill to get the IANA to assign a fixed port number to an app that doesn't even listen on any external interfaces! The only use of that port is (was) over the loopback interface to let the application communicate with CouchDB.
Passing zero for the port in the config file didn't make the problem go away. Apparently the replicator bases the ID on the actual random port number in use, not on the fixed 0 from the config.
> What will prevents an hostile node to connect back to your node with the same id?
Hello, are you listening at all to what I'm writing? I've already said several times that the app does not accept incoming connections at all. It only makes outgoing connections to replicate.
And in general: obviously in any real P2P app there would be actual security measures in place to authenticate connections, most likely by using both server and client SSL certs and verifying their public keys. Once the connection is made, then database IDs can be used to restore the state of a replication.
Hello.... are you understanding that this isn't only about your application? Some may have different uses. And different routing policy. And this is not the role of couchdb to fix them.
If this is true, it explains why someone pointed me to this bug for the two issues I've been experiencing with replication:
1. My permanent replication sessions die almost every single day and come back after I restart CouchDB.
2. Sometimes I end up with more than one replication running for the same configured replica.
Note: My local address changes at least twice a day when I go into the office. The other end of my replication doesn't. It's hard to argue that this behavior is desirable.
Benoit, are you vetoing this change? If so, please include a reason why improving the hit rate for replication checkpoints should not be included in our next release..
@rnewson I'm confused. How does it improve the hit rate fro replications checkpoints ????
I am -1 on this patch for above reason. Which are likely the first the one i gave when saying won't fix. Changing a port or an IP isn't an innocent event. It has many implications.
And such things like using a fixed replication id remove any of its implication and will make some app work difficult. What if the node id is for ex anonymized on each restart ? I think such behaviour should be configurable.
Anyway I don't think this ticket should be a blocker. Let discuss it quitely for the 1.4 .
Benoit, the only intention of the patch is to improve the hit rate for replication checkpoint documents. If the port of a participant changes, the current replicator will replicate from update_seq 0 because it won't find the checkpoint from the previous port. With this change, the replicator can negotiate a stable value (the UUID) to use in place of the unstable value (port), and thus find a valid checkpoint document. For large databases, this can be hugely valuable. If you are saying that this breaks replication or eventual consistency, please say so and explain how. The only thing this patch should do is prevent needless and time-consuming replication when a shortcut is available. If you feel it doesn't do that, please help me to see why.
This ticket is over a year old, I do not want to bump it to 1.4 without a good reason.
So i will repeat myself here:
- changing a port is not an innocent event. With this patch the replication is just ignoring it. And i currently asking myself who take the responsibility to make sure that this replication can still happen with this environment change. Changing a port changes the condition in which your node is acting.
This patch can only work in a trusted environment. Which can be ok but *must* be optional imo.
Sorry, Benoit, I simply can't see the security problem you are trying to describe at all. Walk me through a process where a server changes ports and security is violated. As far as I can tell, the only difference this patch makes is that we can resume replication from a checkpoint where we previously couldn't. That a server changes port doesn't invalidate the checkpoint's integrity, nor do I trust the server more or less based on its numeric port value. If I want to trust the machine, I need TLS, which is completely orthogonal to this ticket.
@rnewson part of the patch that make the checkpoint locally constant is fine for me (but not relying on a remote uuid) though i don't think we need a server id for that but that can probably be changed later.
Anyway following the discussion (and that will be post 1.3) I think that what we should really do here is handling the following scenario:
1. port, ip changed, port crashed, restared -> a checkpooint is orphelin
2. node come back
3. Instead of restarting the replication , ask what to do and mark the replication task as paused
By "ask" I mean having a switchable middle ware that could handle the scenario or just let the application do what it want. Since that is a braking change I guess it could only happen on 2.0 if some want..
I think{Scheme, UserInfo, UUID, Path}
is necessary, actually, so ignore my second question. Scheme is justified as we should allow a http and https replication that is otherwise identical to run concurrently (though it's a bit silly). UserInfo is mixed in because different users might be able to write only subsets of the data, and Path ensures that checkpoints vary by database name, in the case that multiple sources replicate into the same target (confusing their checkpoints would be very wrong).
Jens, will the patch address your issue?
Overall I'm +1 on this approach for enabling faster restarts of replication – I think it's a huge win.
I don't see that the behaviour of the new patch changes the security constraints vs today, but I think I see Benoit's point. Today if a replication endpoint changes its ephemeral port # (e.g. expired DHCP lease), the replication will fail and cannot restart until it is deleted & recreated.
With the patch, the replication could restart in some situations, without requiring active intervention - that's the whole point.
So if Dr.Evil has captured the UUID, it might be possible to acquire the replication without the source endpoint being aware. I think this should be addressed post 1.3. The proposed functionality could note that securing replication requires using TLS and appropriate SSL cert checking in both directions. Which seems common sense anyway! The Dr Evil scenario however is no different under today's activity - if an IP address is hijacked and SSL is not in use, Dr Evil has your documents.
Dave, if the port of source or target changes, then, no, the replication cannot restart (because the replication task will be trying to contact a server on a port it is not listening on). If a new task is started pointing at the new port, it will simply pick up a valid checkpoint that it otherwise wouldn't.
as an alternive solution I think that we could let the user fix the replication id by reusing the current _replicator id or the one used for the _replicate API.
The only problem I see with that is when the the user will post 2 tasks doing the same thing.
I have looked at the patch but I don't really understand what it's doing, both because my Erlang is really weak and because I don't know the internals of CouchDB. So I can't really comment on the code.
It does sound like what's being suggested goes beyond what I asked for. This bug is about the local server (the one running the replication) having a different IP address or port than the last time. The suggested patches seem to also cover changes to the remote server's URL. That's an interesting issue but IMHO not the same thing.
The point of this bug is that the URL of the local server running the replication is irrelevant to the replication. If I'm opening connections to another server to replicate with it, it doesn't matter what port or IP address I am listening on, because there aren't any incoming connections happening. They don't affect the replication at all.
As for Benoit's security issues: Replication has no security. Security applies at a more fundamental level of identifying who is connecting and authenticating that principal. You absolutely cannot make security tests based on IP addresses or port numbers.
Jens, Yes, the patch goes further than the ticket, I said as much in my first comment when I took the ticket. As you note, it has no security implications.
There are three host:port values in play for any one replication task, it seems only a partial solution to fix the stability issue for one of them (though, I agree, that if we only fixed one, it would be the co-ordinating node).
It is true that the host:port of the co-ordinating node does not affect the replication per se, as long as you ignore what would go wrong if two processes were doing the same replication. This is also true of the host:port of the "source" and "target" servers too.
I am happy to solve just the initial problem identified in the ticket if that will allow Benoit to retract his veto, however I felt it important that we were all clear about the security implications here (namely, that there are none) before proceeding. If, as seems the case now, we all agree on the security aspect, I don't see the harm in all 3 participants having a stable identifier allowing replication checkpoints to be used if a machine changes name or port.
There is a potential security issue using a remote node id like in the second part of the patch. Local to local there is none.
I am + 1 for fixing the issue related to the node doing coordination.
The basic version is committed (where we use UUID instead of the local hostname:port).
Thanks Jens for filing this and the detailed description.
To be more clear, the idea would be to add a UUID to each server and use it as input to the replication id generation instead of the local port number. It would be something similar to what is done for the cookie authentication handler:
If such uuid doesn't exist in the .ini (replicator section), we generate a new uuid and save it.
Damien's thought about allowing the client to name them, sounds also very simple - perhaps using the "_replication_id" field in replication documents (which already exists and is currently automatically set by the replication manager). | https://issues.apache.org/jira/browse/COUCHDB-1259?focusedCommentId=13089973&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-40 | en | refinedweb |
The attached program is a prototype for a autodetection function to be put in the choose_medium.c file of dbootstrap. It seems to handle the "new" 2.2 IDE subsystem and SCSI devices since at least 2.0; I've tested it on m68k, alpha and ia32 platforms. Since the IDE system treats DVD-ROMs as CD-ROMs, they are properly detected (though it's possible /proc/ide/hd?/media may report "dvd-rom" or something else in 2.3+; I haven't checked). Note the use of a bitmask for the IDE detection; SCSI devices are simply counted since they are numbered 0..n at boot time. Non-IDE and non-SCSI CD-ROMs aren't handled at all. My suggested use would be for the routine to "mark" the list presented by choose_cdrom(), although (presumably) if only one matching device is detected, it could bypass the menu altogether. Anyway, it's packaged as a standalone program so people can test it and let me know if it fails to detect a CD-ROM it should, or detects something wrongly. Chris -- ============================================================================= | Chris Lawrence | Your source for almost nothing of value: | | <[email protected]> | | | | | | Grad Student, Pol. Sci. | Visit the Lurker's Guide to Babylon 5: | | University of Mississippi | <*> <*> | =============================================================================
#include <stdio.h> #include <sys/stat.h> #include <glob.h> struct cdroms_detected { int scsi_count; int ide_mask; }; static struct cdroms_detected autodetect_cdrom(void) { FILE *fp; struct stat st; glob_t gl; static struct cdroms_detected cdr = {0,0}; size_t sz=100; char *buf; buf = (char *)malloc(sz); if(!stat("/proc/ide", &st) && S_ISDIR(st.st_mode)) { if(!glob("/proc/ide/hd?/media", 0, NULL, &gl)) { int i; for( i = 0; i < gl.gl_pathc; i++ ) { if(fp = fopen(gl.gl_pathv[i], "r")) { if(getline(&buf, &sz, fp) >= 0 && !strncmp(buf, "cdrom", 5)) { printf("CD-ROM: /dev/hd%c\n", gl.gl_pathv[i][12]); cdr.ide_mask |= 1<< (gl.gl_pathv[i][12] - 'a'); } /* printf("%s: %s", gl.gl_pathv[i], buf); */ fclose(fp); } } globfree(&gl); } } if(!stat("/proc/scsi/scsi", &st) && S_ISREG(st.st_mode)) { if(fp = fopen("/proc/scsi/scsi", "r")) { while(!feof(fp) && getline(&buf, &sz, fp) >= 0) { /* fputs(buf, stdout); */ if(!strncmp(buf, " Type: CD-ROM", 16)) { printf("CD-ROM: /dev/scd%d\n", cdr.scsi_count); cdr.scsi_count++; } } } } return cdr; } int main(void) { autodetect_cdrom(); } | https://lists.debian.org/debian-boot/1999/12/msg00349.html | CC-MAIN-2015-40 | en | refinedweb |
Results 1 to 4 of 4
- Join Date
- Jul 2013
- 2
- Thanks
- 1
- Thanked 0 Times in 0 Posts
Beginner Ruby: How to actually start
Ok !
- Join Date
- Sep 2002
- Location
- Saskatoon, Saskatchewan
- 16,994
- Thanks
- 4
- Thanked 2,662 Times in 2,631 Posts
Command line execute
You can use a shebang as well, and execute the ruby script. Assuming /usr/bin/env contains the ruby:
PHP Code:
#!/usr/bin/env ruby
def PrintMaName(name)
puts "Ruby sez: hello #{name}"
end
PrintMaName("Fou-Lu")
I'll be starting to use ruby more often as well; just gotta get beyond the re-learn curve.PHP Code:
header('HTTP/1.1 420 Enhance Your Calm');
- Join Date
- Jul 2013
- 2
- Thanks
- 1
- Thanked 0 Times in 0 Posts
Not sure about how to do what you're saying.
You might have to walk me through the steps... is the usr/bin etc. in the text doc. or is it the directory for terminal?
- Join Date
- Sep 2013
- 1
- Thanks
- 0
- Thanked 0 Times in 0 Posts
You can start on
or you could read this free book online
or maybe this guide (which is pretty fun to read)
Last edited by VIPStephan; 09-02-2013 at 04:20 PM. Reason: Removed fake signature
AdSlot6 | http://www.codingforums.com/ruby-and-ruby-on-rails/298861-beginner-ruby-how-actually-start.html?s=9b100d645dc1665c70f6c60e4365edb1 | CC-MAIN-2015-40 | en | refinedweb |
GlassFish : Self Management Rules
By shalini_m on Jan 19, 2006
Self Management comprises of a set of management rules. You could create different rules based on the various event types. As mentioned at glassfish.dev.java.net, the different type of events are as follows.
- Lifecycle events
- Monitor events
- Log events
- Trace events
- Timer events
- Notification events
Let us take an example of displaying a pop-up window as an alert. Action mBeans implement the NotificationListener interface. The handleNotification() method for our action will look like the following.
public synchronized void handleNotification(Notification notification,Object handback) {
try {
JFrame. setDefaultLookAndFeelDecorated(true);
JFrame myFrame = new JFrame("MyAction");
myFrame. setSize(250, 180);
myFrame. setLocation(300,200);
myFrame. getContentPane().add( new Label("Action has been invoked",Label.CENTER), BorderLayout.CENTER);
myFrame. setVisible(true);
myFrame. show();
} catch (Exception ex) { // Log the exception }
}
Let us name this class as CustomAction which is in the com.sun.myrule.action package. It has a corresponding CustomActionMBean. Follow these steps to deploy the action on the application server.
1. Compile the files and make a jar out of these two files.
2. After starting the application server, open the admin console by typing on a browser.
Figure1: Admin Console
3. On the left pane, click on "Custom MBeans". On the right pane, all the deployed mbeans are listed. To deploy our action mbean, click on Deploy. Select the jar file that you just created against the "File to upload" column and click Next.
Figure2: Custom MBeans
Figure3: Creating Custom MBeans
4. Enter a name for the mbean. This will later be used while creating the rule. Enter name as "CustomAction".
5. Enter the class name along with the package name. In this case, it is com.sun.myrule.action.CustomAction.
Figure4: Creating Custom MBeans: Step 2
6. Click Finish.
The action mbean is successfully deployed on the application server.
We will use the same action mbean for all the rules. This action mbean will be called each time an event is triggered. For more information on actions, refer to actions section in glassfish.dev.java.net
Now let us proceed and create rules.
Simple rules for creating a rule:
1. Identify the event type you need. For example : lifecycle
2. Identify the mandatory and optional properties for that event type. For example : lifecycle event requires a property called "name".
3. Identify the action mbean you would like to use for your rule.
4. You could use the Application server Command line interface (CLI) or the Graphical user interface (GUI) to create rules.
5. Make sure the server is started before proceeding with rule creation.
Example 1 : Assuming the administrator wants to be notified everytime the application server shuts down or is terminated because of a failure. Continuous monitoring is necessary for this kind of a situation. Therefore, a rule could be created mentioning a notification to be sent to the administrator when this event is triggered. Management rule bundles together the event and the action.
Let us examine how this rule could be created.
After starting the application server with the command "asadmin start-domain", use either CLI or GUI to create the rule.
CLI mode :
--host localhost --port 4848 --user admin
--action CustomAction
--eventtype lifecycle
--eventloglevel WARNING --eventproperties name=shutdown
my_lifecycle_rule
GUI mode :
- Start admin console by typing on a browser, after starting the application server.
- Go to the Management Rules tab under the Configurations tab in the left pane. This displays the list of rules deployed on the server on the right pane.
- Click on New
- Type the name of the rule as "my_lifecycle_rule".
- Check the Status checkbox to enable the rule.
- Enter description
- Choose event type as lifecycle.
- Click on Next if you are done with this page.
- In the next page, select "shutdown" against Event and enter a description if desired.
- Select "CustomAction" listed in the action list box and click Finish.
Figure5 : Management Rules
Figure6 : New Management Rule
Figure7 : New Management Rule : Step 2
The rule is successfully created and is deployed on the application server. The action will be triggered everytime the application server shuts down. Similarly, rule could be configured to take some action on ready and termination.
Example 2 : The administrator wants to be beeped once in a while regarding some status. A rule could be created to do this as follows.
CLI mode :
--host localhost --port 4848 --user admin
--action CustomAction
--eventtype timer
--eventloglevel INFO
--eventproperties pattern=MM\\\\/dd\\\\/yyyy\\\\/HH\\\\:mm\\\\:ss:
datestring=01\\\\/20\\\\/2006\\\\/12\\\\:00\\\\:00:
period=30000000:
message=Hello
my_timer_rule
Note here that "/" and ":" are escape sequences. Period, mentioned in milliseconds, is the period after which the notification is to be sent after the date mentioned in datestring.
GUI mode :
- Start admin console by typing on a browser, after starting the application server.
- Go to the Management Rules tab under the Configurations tab in the left pane. This displays the list of rules deployed on the server on the right pane.
- Type the name of the rule as "my_timer_rule".
- Check the Status checkbox to enable the rule.
- Enter description
- Choose event type as timer.
- Click on Next if you are done with this page.
- Enter the values for datestring, pattern and other fields.
- Select "CustomAction" listed in the action list box and click Finish.
How do you capture the log events broadcasted by Glassfish ....no documentation on that
Posted by akhilesh on March 04, 2008 at 03:51 PM IST #
Can you publish the code of the "corresponding MBean". This would help to find out how to capture the log events ...
Posted by batzee on March 06, 2008 at 07:43 AM IST #
The log events could be captured by setting the type to "log". The corresponding attributes are "loggerName" and "level".
Posted by Shalini on March 13, 2008 at 09:24 AM IST #
nice
Posted by guest on March 13, 2009 at 05:02 AM IST #
Shalini, can you pls share the code of the action jar file created for server shutdown since i am trying for the same and would be of much help to write down similar actions
Posted by kishore on April 17, 2009 at 01:56 AM IST #
Do I need to recreate a timer rule if I change amd redeploy the used MBean?
Posted by Gabriel Escamilla on September 29, 2009 at 01:30 PM IST #
Enter a Valid Date, Make sure date range is valid as per the entered month, year... This happens when I want to edit/create using GUI. How can I fix this?
Posted by Gabriel Escamilla on September 29, 2009 at 01:47 PM IST #
I'm unable to create a jar with a CustomAction and CustomActionMBean that the Glassfish admin console will accept, javax.management.NotCompliantMBeanException: Class com.example.CustomAction is not a JMX compliant Standard MBean
I'd very much like to see the source used above.
Posted by Jeff Sexton on October 20, 2009 at 06:05 PM IST #
Add
import javax.management.Notification;
import javax.management.NotificationListener;
on your MBean class (say MyClass), it needs to implement your MBean interface (say MyClassMBean) as well as NotificationListener. Your MBean interface must have the handleNotification method:
public void handleNotification(Notification notification,Object handback);
Hope this helps
Posted by Gabriel Escamilla on October 20, 2009 at 06:24 PM IST #
Hi,
I am trying to monitor my instances through jmx and management rules. I have read that a instance in start mode would give state value of 1 and in stop mode would give a state value of 3
Here is the MBean code to get a state value:
public String getInstanceStateValue() {
JMXServiceURL jmxsurl;
jmxsurl = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://" + localhost + ":" + 8686 + "/jmxrmi");
Map env = new HashMap();
String[] credentials = new String[]{"admin", "adminadmin"};
env.put(JMXConnector.CREDENTIALS, credentials);
JMXConnector connector = JMXConnectorFactory.connect(jmxsurl, env);
MBeanServerConnection mbsc = connector.getMBeanServerConnection();
ObjectName instanceStateValue = new ObjectName("amx:j2eeType=J2EEServer,name=instance1");
stateValue = mbsc.getAttribute(instanceStateValue, "state").toString();
System.out.println("State Value: " + stateValue);
return stateValue;
I am able to retreive the 'state' value alright.
Here the entry from domain.xml for the corresponding management rule:
<management-rule
<event level="WARNING" record-
<property name="observedmbean" value="NAMBean"/>
<property name="observedattribute" value="StateValue"/>
<property name="monitortype" value="stringmonitor"/>
<property name="stringnotify" value="notifydiffer"/>
<property name="stringtocompare" value="2"/>
</event>
<action action-
</management-rule>
When i start the server i see a error message: "Observed attribute must be accessible by the MBean".
On stopping the server i see no error message.
Could you help me out here.?
Posted by Vaibhav Dhawan on December 15, 2009 at 06:05 AM IST #
Hi,
Please note that my event is fired only once. After 5 seconds (Granularity period : 5000), even though free memory value is above specified high value it is not fired.
Please help me.
Posted by paresh bhavsar on January 02, 2011 at 06:18 AM IST # | https://blogs.oracle.com/technical/entry/self_management_rules | CC-MAIN-2015-40 | en | refinedweb |
Opened 3 years ago
Closed 3 years ago
Last modified 3 years ago
#19178 closed Bug (wontfix)
create_permissions method fails if the model has a single new permission
Description
I notice this by using this command →, it list all apps and update the permissions in the db if there is any new.
The stacktrace is:
Traceback (most recent call last): File "/home/mariocesar/Proyectos/Crowddeals/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 222, in run_from_argv self.execute(*args, **options.__dict__) File "/home/mariocesar/Proyectos/Crowddeals/env/local/lib/python2.7/site-packages/django/core/management/base.py", line 252, in execute output = self.handle(*args, **options) File "/home/mariocesar/Proyectos/Crowddeals/crowddeals/core/management/commands/update_permissions.py", line 29, in handle create_permissions(app, get_models(), options.get('verbosity', 0)) File "/home/mariocesar/Proyectos/Crowddeals/env/local/lib/python2.7/site-packages/django/contrib/auth/management/__init__.py", line 74, in create_permissions for perm in _get_all_permissions(klass._meta, ctype): File "/home/mariocesar/Proyectos/Crowddeals/env/local/lib/python2.7/site-packages/django/contrib/auth/management/__init__.py", line 28, in _get_all_permissions _check_permission_clashing(custom, builtin, ctype) File "/home/mariocesar/Proyectos/Crowddeals/env/local/lib/python2.7/site-packages/django/contrib/auth/management/__init__.py", line 49, in _check_permission_clashing for codename, _name in custom: ValueError: too many values to unpack
I see that in ´contrib/auth/management/init.py´ the method _check_permission_clashing unpack the custom new methods. Normally the list of permissions will be a list of list, however if there is just one new permission, when getting the permissiosn it returns just a list of strings.
Making the _get_all_permissios code aware of that will fix the problem.
def _get_all_permissions(opts, ctype): """ Returns (codename, name) for all permissions in the given opts. """ builtin = _get_builtin_permissions(opts) custom = list(opts.permissions) _check_permission_clashing(custom, builtin, ctype) return builtin + custom
def _get_all_permissions(opts, ctype): """ Returns (codename, name) for all permissions in the given opts. """ builtin = _get_builtin_permissions(opts) custom = list(opts.permissions) if any(isinstance(el, basestring) for el in custom): custom = [custom] _check_permission_clashing(custom, builtin, ctype) return builtin + custom
Change History (7)
comment:1 Changed 3 years ago by mariocesar
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
comment:2 Changed 3 years ago by anonymous
Could you attach a sample model with failing Meta.permissions (a test case in pull request would be even better...).
EDIT: forgot to log in - akaariai
comment:3 Changed 3 years ago by mariocesar
My case is something like, having first created with syncdb this model:
class UserProfile(models.Model): user = models.ForeignKey(User) class Meta: permissions = ( ('can_approve', 'Can approve'), ('can_dismiss', 'Can dismiss') )
If later I add a single new permission
class UserProfile(models.Model): user = models.ForeignKey(User) class Meta: permissions = ( ('can_approve', 'Can approve'), ('can_dismiss', 'Can dismiss'), ('can_dance', 'Can dance') )
It fails. If the new permissions where two 'can_dance', 'can_sing' it works.
Where do I put the test case? I'm kind of lost about how write a test for this.
comment:4 Changed 3 years ago by ptone
tests probably should go in contrib/auth/tests/management.py
the create_permission function really should be part of a permissions API that is defined outside of the management/management command context - but that is where it is for now.
comment:5 Changed 3 years ago by mariocesar
I added a new test, here is the stacktrace if there is no patch
====================================================================== ERROR: test_bug_19178 (django.contrib.auth.tests.management.PermissionDuplicationTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/mariocesar/Proyectos/Django/.env/local/lib/python2.7/site-packages/django/contrib/auth/tests/management.py", line 228, in test_bug_19178 create_permissions(models, [], verbosity=0) File "/home/mariocesar/Proyectos/Django/.env/local/lib/python2.7/site-packages/django/contrib/auth/management/__init__.py", line 73, in create_permissions for perm in _get_all_permissions(klass._meta, ctype): File "/home/mariocesar/Proyectos/Django/.env/local/lib/python2.7/site-packages/django/contrib/auth/management/__init__.py", line 28, in _get_all_permissions _check_permission_clashing(custom, builtin, ctype) File "/home/mariocesar/Proyectos/Django/.env/local/lib/python2.7/site-packages/django/contrib/auth/management/__init__.py", line 48, in _check_permission_clashing for codename, _name in custom: ValueError: too many values to unpack
will be pushing the patch in a minute
comment:6 Changed 3 years ago by mariocesar
- Resolution set to wontfix
- Status changed from new to closed
*!
May the earth swallow me ... Sorry for making lose your time, running the tests I realize that error was about packing incorrectly the tuples in my models.
My fix will solve the problem is someone do something like
permissions = ( ('can_approve', 'Can approve') )
permissions, will be end as ('can_approve', 'Can approve'), as it's tuple(tuple()). Of course all (not me) sane people do.
permissions = ( ('can_approve', 'Can approve'), )
comment:7 Changed 3 years ago by ptone
No problem - that is why a test is always a great way to validate a bug report.
I just made a draft patch → | https://code.djangoproject.com/ticket/19178 | CC-MAIN-2015-40 | en | refinedweb |
public class Bootstrap extends Object
VirtualMachineManagerinterface.
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public Bootstrap()
public static VirtualMachineManager virtualMachineManager()
May throw an unspecified error if initialization of the
VirtualMachineManager fails or if
the virtual machine manager is unable to locate or create
any
Connectors.
SecurityException- if a security manager has been installed and it denies
JDIPermission("virtualMachineManager") or other unspecified permissions required by the implementation.
Copyright © 1999, 2014, Oracle and/or its affiliates. All rights reserved. | http://docs.oracle.com/javase/7/docs/jdk/api/jpda/jdi/com/sun/jdi/Bootstrap.html | CC-MAIN-2015-40 | en | refinedweb |
Eclipse Community Forums - RDF feed Eclipse Community Forums Handling files using lob in entity <![CDATA[This is my situation. I want to store @lob in another entity class so that they wont get loaded when i load my entity class ( I know about lazy loading option but cannot use that option). Then maybe i can link a lob in this way without eagerly loading it. @Entity public class myClass { String fileClassId; } @Entity public class FileClass { @id String id; @lob byte[ ] mylob; } But the problem with this approach is no foreign key relationship is established between 'fileClassId' and 'id' . Is there any way to establish it, without causing mylob to load eagerly ? ( and..Lazy loading is not an option for me) Thanks in advance]]> Arun Joy 2010-11-10T17:29:24-00:00 Re: Handling files using lob in entity <![CDATA[You should just define a OneToOne to the lob entity. @Entity public class MyClass { @OneToOne(fetch=LAZY) FileClass fileClass; } Otherwise you will need to query for the lob directly. If you are just talking about schema creation, then you can add the foreign key constraint yourself, such as in a SessionCustomizer, or your own ddl scripts. ]]> James Sutherland 2010-11-11T15:55:33-00:00 | http://www.eclipse.org/forums/feed.php?mode=m&th=200017&basic=1 | CC-MAIN-2015-40 | en | refinedweb |
Hibernate avg() Function
, a configuration mapping file that will provide
information to the Hibernate to create... a simple java file into which
we will use the avg() function with select clause...Hibernate avg() Function
In this tutorial you will learn about the HQL
Hiber Collection Mapping
Mapping Example using Xml************
Hibernate: select employee_.employee_id...Hibernate Collection Mapping
In this tutorial you will learn about the collection mapping in Hibernate.
Hibernate provides the facility to persist
select query using hibernate
select query using hibernate Hi,
can any one tell me to select records from table using hibernate.
Thanks
Please visit the following link:
Hibernate Tutorials
The above link will provide you different Avg() Function (Aggregate Functions)
Hibernate Avg() Function (Aggregate Functions)
... to use the avg()
function. Hibernate supports multiple aggregate functions. When... query criteria.
Following is a aggregate function (avg() function
hibernate
Queries (Native Query).
Hibernate provides primary and secondary level caching...why hibernate? why hibernate?
Hibernate: -Hibernate is an Open Source persistence technology.
It provides Object/Relational mapping
sum and avg function using HQL.
:
Hibernate: select sum(employee0_.salary) as col_0_0_ from employee employee0_
Sum of all salary : 275000
Hibernate: select avg(employee0_.salary) as col_0_0_ from... = "Select sum(emp.salary) FROM Employee emp";
String avgHql = "Select avg
Hibernate Many One One Mapping
Hibernate Mapping And Join
Using hibernate we can easily create relationship... below:
Hibernate: select student0_.roll_no as roll1_0_0_, address1...-To-One and Many-To-Many. An
Example of One To Many mapping is given below
Address
Hibernate Mapping
file defining
object/relational mapping, but in the Hibernate annotations...Hibernate Mapping
In this Hibernate Mapping tutorials series you will
learn... the complex things easily.
This Hibernate Mapping tutorial is targeted
Complete Hibernate 3.0 and Hibernate 4 Tutorial
of the hibernate mapping file..
Understanding Hibernate <... Hibernate Select Clause.
Hibernate Count
Query... object/relational persistence and
query service for Java. Hibernate lets you
Hibernate Tutorials
object/relational persistence and query service for Java. Hibernate lets you... framework.
Hibernate takes care of the mapping from Java classes to database tables and
also provides data query and retrieval facilities. Hibernate significantly
Complete Hibernate 4.0 Tutorial
min() Function
Hibernate Named HQL Query using XML mapping
Hibernate Named HQL Query in Annotation
Hibernate Mapping...
Hibernate Named Native SQL Query using XML mapping Branches.branch_id from Branches branches";
Query query... Hibernate;
import org.hibernate.Session;
import org.hibernate.*;
import Criteria average example
;/hibernate-configuration>
In the above xml file we have created a mapping...
Average Query
Here we will use the avg() method... finally generates the following query for getting the results
from database:
Hibernate Named Query
Hibernate Named Query
Named Query is very useful concept in hibernate. It lets... file(.hbm files). The
query is given unique name for the entire application
Clause |
HQL Select
Clause |
Hibernate
Count Query |
Hibernate Avg... Hibernate O/R Mapping |
Understanding Hibernate <generator> element...
Update Query |
Hibernate
Delete Query |
Introduction to Hibernate Query SELECT Clause
Hibernate SELECT Clause
In this tutorial you will learn how to use HQL select clause.
Use of SELECT clause in hibernate is as same as it used in
SQL... version='1.0'?>
<!DOCTYPE hibernate-mapping PUBLIC
"
Examples of Hibernate Criteria Query
Criteria Query:
HQL - Hibernate Query Language
Hibernate criteria avg..., HQL Select Example, HQL Select Query
Hibernate Tutorials:
Hibernate...Example code and running projects of Hibernate Criteria Query
Here
Hibernate Criteria Query Example
();
}
}
}
The database table mapping is in the file User.hbm.xml and hibernate...Hibernate Criteria Query Example
In this Example, We will discuss about hibernate criteria query, The
interface org.hibernate.Criteria is used to create
Hibernate Mapping
In this tutorial we will discuss Hibernate mapping
hibernate mapping - Hibernate
hibernate mapping when will we use one to one, one to many, many to many mapping... give a practical example
JPA Avg Function
:
Query query=em.createQuery
p FROM Product p WHERE p.price >
(SELECT AVG(p.price) FROM p)");
This query select all...=em.getTransaction();
entr.begin();
Query query=em.createQuery("SELECT p FROM Product p
Hibernate Select Clause
Hibernate Select Clause
... from Insurance
table using Hibernate Select Clause. The select clause
picks up... is the code of our java file which
shows how select HQL can be Query - Hibernate
Hibernate Query Hibernate Query
Hibernate Named Native SQL Query using XML Mapping
In this section, you will learn Named Native SQL Query using XML Mapping in Hibernate with an example
Hibernate One To Mapping
Hibernate One To One Mapping Using Annotation
Hi If you are use-to with the Hibernate annotation, then it is very simple to do mapping using hibernate... is the hibernate.cfg.xml file
hibernate.cfg.xml
<?xml version='1.0' encoding
select clause in hibernate
select clause in hibernate Example of select clause of hibernate... query ?
SELECT emp.name, emp.salary, emp.dateOfJoin from Employee emp...="SELECT emp.name, emp.salary, emp.dateOfJoin from Employee emp";
Query O/R Mapping
Hibernate O/R Mapping
In this tutorial you will learn about the Hibernate O/RM... will be required to create a file
with .hbm.xml extension into which the mapping information will
be provided. This file contains the mapping of Object
Hibernate Mapping Many-to-Many Example
Hibernate Mapping Many-to-Many Example How does many to many relationship work in Hibernate?
In many to mant relationship each row... course can be assigned to multiple student.
Configuration file Mapping
In this section, you will learn about Hibernate Mapping
Hibernate Mapping Files
;
In nutshell we need one configuration file and one hibernate mapping files to
develop... Hibernate Mapping Files
Hibernate Mapping Files:
In this section I
Hibernate Query Language
Hibernate Query Language
Hibernate Query Language or HQL for short is extremely
powerful... for the names of the Java Classes and properties. Hibernate Query
Language is used
Hibernate delete Query
Hibernate delete Query
In this tutorial you will learn how to use HQL delete query in Hibernate.
In this section we are discussing about the HQL delete query..., a mapping file named salesorder.hbm.xml file which
will map the class object
Hibernate Criteria restrictions Query Example
();
}
}
}
The database table mapping is in the file User.hbm.xml and hibernate...
Hibernate Criteria restrictions Query Example
In this Example, We will discuss about hibernate criteria query, The
interface org.hibernate.Criteria
What is component mapping in Hibernate?
What is component mapping in Hibernate? Hi,
What is component mapping in Hibernate?
thanks
Need for hibernate - Hibernate
solution.
In Jdbc we use .properties file and in hibernate we use .xml file... of Hibernate are :
It uses Object-oriented query language.
Transparent...Need for hibernate Can anyone say why should we go for hibern -
Hibernate Aggregate Functions
to fetch the persistent object, a hibernate.cfg.xml file that
the Hibernate...-mapping PUBLIC
"-//Hibernate/Hibernate mapping DTD//EN"
"http...();
Query query = session.createQuery("select sum(gd.numOfGoodsOrder
Hibernate XML Mapping
In this section, you will learn how to do XML mapping in Hibernate
view mapping - Hibernate
view mapping How can we do mapping in hibernate
Hibernate insert Query
Hibernate insert Query
In this tutorial you will learn about how to use HQL... but HQL supports insert into ----- select ---- Statement
i.e in HQL insert query... file hibernate.cfg.xml is
created for providing the Hibernate a
;1.0"?><!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN"" SessionFactory Can anyone please give me an example
Hibernate Tutorial for Beginners Roseindia
Hibernate automatically creates the query to perform operations like select... classes to a Database tables using an XML file
Hibernate maps the data types... to the Hibernate Query Language
Hibernate Criteria
Hibernate Criteria Detached Query And Subquery
. A configuration mapping file creation that
provides the information to the Hibernate...Hibernate Criteria Detached Query And Subquery
In this tutorial you will learn about Detached Query and Subquery in Criteria
Query.
Hibernate provides
Hibernate Architecture
to
Hibernate for mapping the POJO's with the database table.
Hibernate provides world... the in the database.
Hibernate uses the meta-data for mapping the instance variable....
Hibernate automatically generates the SQL query automatically to perform
hibernate criteria Max Min Average Result Example
will discuss about hibernate criteria query, The class... in the database table using mapping file.
DataInsert.java
package roseindia... is in the file User.hbm.xml and hibernate
configuration file is hibernate.cfg.xml. One to many XML Mapping bag example
In this section, you will learn how to use element instead of or element in mapping XML file in Hibernate
Hibernate Select Query
This tutorial contain the explanation of Select clause
Hibernate Count Query
Hibernate Count Query
In this
section we will show you, how to use the Count Query. Hibernate....
Hibernate: select count(*) as col_0_0_ from insurance insurance0_ group
Hibernate Named Native SQL in XML Returning Scalar
In this section, you will learn to execute Hibernate named native SQL query written in XML mapping file which return scalar values(raw values date mapping
In this section, you will learn about date mapping in Hibernate query
hibernate query hibernate apache par kam karta hai kya
Hi Friend,
Yes.
Thanks
Hibernate 3.1.1 Released
INTO...
Native Query support
Connection handling
Changes...
schemaupdate does not handle TableHiLoGenerator
empty property name in mapping file...Hibernate 3.1.1 Released
Back to Hibernate Tutorials Page
Hibernate
Hibernate Criteria Associations
Query Association in Hibernate.
Association in Criteria Query may be done...; encoding='utf-8'?>
<!DOCTYPE hibernate-mapping PUBLIC
"-//Hibernate/Hibernate mapping DTD//EN"
"
on hibernate query language - Hibernate
on hibernate query language Display the contents of 2 fields in a table in a single column using SQL statement.Thanks! Hi friend,read for more information,
Select Clause in hibernate.
Select Clause in hibernate. Example of select clause of hibernate
Hibernate
Hibernate hibernate delete query execution
org.hibernate.InvalidMappingException: Could not parse mapping document - Hibernate
org.hibernate.InvalidMappingException: Could not parse mapping document ....
Exception occur:Could not parse mapping document from resource event/Sample.hbm.xml
org.hibernate.InvalidMappingException: Could not parse mapping
Hibernate Training
.
How to use HQL (Hibernate Query Language)
for querying objects.../relational mapping with Hibernate
Java Persistence API (JPA...;
Hibernate Query Language
Mapping Technics - Hibernate
Mapping Technics Hai
i want some clarifications about one-to-many bidirectional...);
}
}
}
Output:
Hibernate: select min(this_.salary) as y0
Hibernate Criteria
powerful alternatives of
HQL (Hibernate Query Language) with some limitations. It is used where search on
multi criteria required, becouse Hibernate Query...
<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
select Query result display problem
select Query result display problem Hi,
String SQL_QUERY ="from Cc";
Query query = session.createQuery(SQL_QUERY);
for(Iterator it=query.iterate...
Hibernate: select cc0.id as col00 from cc cc0_
Cc$$EnhancerByCGLIB$$2235676a cannot
on collection mapping - Hibernate
. The index informs hibernate whether a particular in-memory object is the same one Many To Many Annotation Mapping
Hibernate Many To Many Annotation Mapping How to use Many To Many Annotation Mapping in Hibernate?
Hibernate requires metadata like... to another.Hibernate annotation
provides annotation-based
the executeUpdate();
and change the jar file of hibernate...Hibernate delete a row error Hello,
I been try with the hibernate delete example (
Hibernate mapping annotations
In this section, you will learn about the annotations used for various mapping in Hibernate
Hibernate Annotattions - Hibernate
and Relational Table mapping. All the metadata is clubbed into the POJO java file...Hibernate Annotattions Can anybody tell me here how we can use hibernate annotations which is a new facility in hibernate 3.2 any tutorial and what
Hibernate Overview and Architecture
.
Hibernate automatically generates SQL query to perform operations like select....
Class mapping creates the link between java classes and database tables.
Hibernate... your transactions.
Hibernate Query:
Hibernate provide simple SQL, native query
hibernate
hibernate how to impot mysql database client jar file into eclipse for hibernate configuration
Hi Friend,
Please visit the following link:
Thanks
hibernate
hibernate Is there any other way to call procedure in hibernate other than named query????? if we are using session object to get the connection then why hibernate we can directly call by using a simple java class??????? please
solution for mapping hibernate and pojos from net beans through database
solution for mapping hibernate and pojos from net beans through database ... which contain hibernateutil.java
Then i create hibernate mapping and pojos from database but netbeans is not creating
Domain code(.java)
Hibernate xml mapping
hibernate criteria date between
();
}
}
}
The database table mapping is in the file User.hbm.xml and hibernate... about hibernate criteria query, The
interface org.hibernate.criterion.Restrictions... mapping file.
DataInsert.java
package roseindia;
import java.util.Date
Hibernate Criteria setFirstResult Example
();
}
}
}
The database table mapping is in the file User.hbm.xml and hibernate
configuration... will discuss about hibernate criteria query, The
interface org.hibernate.Criteria... the following code to insert the values in the database table using mapping file
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://www.roseindia.net/tutorialhelp/comment/43940 | CC-MAIN-2015-40 | en | refinedweb |
Scratch GPIO Development
This blog entry is for me to provide feedback on what’s happening in developing my Scratch GPIO hander set-up.
This code WILL PROBABLY NOT be compatible with the 1.x release and is intended for experimenters and helpers.
Its for both coders with suggestions and importantly users (particularly teachers/educators/parents with kids) to provide input into what the handler does and the syntax used to do it.
My development code is available on Git Hub
I’d welcome collaborators who share my vision to enable 10 year olds and younger to make things flash/beep/turn/step using a Raspberry Pi and some cheap components. This project is NEVER going to be an I2C bus controller 🙂 (But as Sean says – never say never again!)
I’ve created a ScratchGPIO login on the Scratch Site so that we can all start to upload and share Scratch GPIO code – I want to share the password with any educator who’d can contribute examples and lessons – just contact me for it 🙂
Changes made from 1.x codebase
All pins now default to inputs until addressed as outputs and then they are switched dynamically between digital, PWM or Stepper Motor mode as needed
Any pin can now be used for DC motor or variable brightnes LED using PWM as I now use the a threaded libary PyzPWM
(Although I expect to start using the PWM within RPi.GPIO now that Ben Croston has added it in 🙂 )
PWM output is controlled by variables starting with “Power” e.g Set power11 to 50 will set pin 11 to a 50% duty cycle and effectively give an average 1.65V instead of 3.3V (MotorA/B on pins11/12 retained for simpler syntax for younger pupils)
“Power” is more generic and can be applied even when just varying a LEDS brightness (prompting for this change came from Aaron Shaw’s RGB LED MagPi article Page 26)
PinPattern usage has been removed at this time as not compatible with the concept of any pin being input or output – needs thinking about how to re-introduce.
Single Pin Ultrasonics – if you connect an Ultrasonic Module as per this diagram, then we only need one GPIO pin to trigger it and receive the returned pulse
So now you simply use it (assuming connected to pin 23) use broadcast sonar23 and then just use the sensor item sonar23 to get the distance measured in cm
Stepper motors
Currently got 5 wire unipolar steppers addressed using “StepperX” broadcast to tell Scratch that we’ve connected them . “StepperA” means one connected to Pins 11,12,13 and 15, “StepperB” means one connected to Pins 16,18,22 and 7. They are then controlled using normal MotorA or MotorB variables to control their speed but since they are bi-directional, you can use values of -100 to 100.
Also I have syntax for saying stepping a finite number of steps. You need to broadcast a “StepperA” or “StepperB” to initialise as above but then change (not set) variables PositionA or PositionB for the numbers of steps you wish each stepper to turn ( using change and not set overcomes a bug/feature of Scratch itself)
Tasks currently in progress
Begining to document
Tasks to be done next but not yet in progress
Add back in PinPattern in some fashion to allow multiple pinout changes in one command
Make sure AllOn /Alloff can still be used
Add global Invert broadcast that inverts all high/on/1 commands to pin being set to 0V and all low/off/0 to set a pin to 3.3V (Needed if dealing with components that need to be dragged to 0V to turn them on)
Add in code to deal with H bridge DC motors (I’d need to build a bot with one first 🙂 )
Tasks just a gleam in my eye
Add in Servo control (so I can make robot arms wave around)
Plugin in modules for specific hardware (e.g Raspberry Ladder Board, PiBorg, H-Bridge Motors,BerryBoard) etc
Nice one – looking forward to getting involved!
Was it you that I was speaking to after the Jamboree session?
I like your suggested changes and ‘servo’ would be great. The term pulse is fine but power is already used with many school controllers interfaces and would I would like to suggest it would be more appropriate. So to reduce motor speed kids will be told to reduce power. Pulsing can also mean flashing so can convey wrong meaning as LEDs are made to flash(pulse) but to control brightness it is the power that is reduced by pulsing.
You are onto a winner with Servo control.
Power – I think you may be right 🙂
I did think of it a while ago but dismissed it because as I used to be an engineer and we all know its not power but average voltage 🙂
But your right, at 10 years old, we really don’t need to go into Voltage vs Power differences – lets face it – the kids are taught that the primaries colours are Red,Yellow and Blue but they still manage to become artists 🙂
Simon
Excellent plans. I am all for getting kids to get computers to do physical things, makes it so much more engaging for them and makes the RPi a real computer.
Can you provide some more detail about the servo control, are you able to get software PWM working well enough to drive them? If so that will be superb!
I’m building up a list of great value kit for use with the Raspberry Pi, so I will give you a heads up when I’ve tried them – yes some include I2C – but we can work that out another time… ;-).
I shall be more than happy to help where I can, and fit in with how you plan to do it.
Servo control is not over the horizon but its on it. I need the clever people who write the Python/C libs to get it working (which might not happen as RPi has no native concept of servos) – this is maybe where I2C is actually needed 🙂
The MagPi article generated enough interest for me to try this, but I struggled a bit.
I think a key issue in getting this used by educators would be providing simple, clear, step-by-step examples of each piece of functionality, i.e., blinking LED, servo movement, e.t.c, using just standard components. (Certainly not a custom board like the LEDborg in the second article).
Overall, I think this is a very powerful concept (programs controlling things in physical world) and would love to help where I can.
Hi Ken, Like everyone in the RPi world I only have so much time to allocate to this 🙂 I see my role as the enabler in terms of the software suite to do the job.
Perhaps my contribution to the project could be in writing tutorials / examples.
That would be great 🙂 Have you managed to get a pin to blink at all as I would want my documentation to get everyone to that level at least 🙂
And here is a link to a presentaion we did at the Jamboree which should help a bit 🙂
Ken, what are your thoughts on my new lessons using Scratch GPIO?
I know the RGB-LED kit is still a custom board, but hopefully I’ve included enough back-story to make it usable with anything else.
I hope to work with Si to add a layer of interfacing which can be customised for whatever hardware you connect up (just trying out the concepts at this stage).
The tutorial at looks great for beginners. I love the simple Scratch project at the beginning, and the summary of the commands.
Here is the stoplight scratch project (inspired by cyplecy) I’m working on for our Library’s Maker Festival:
Thanks for taking a look, is difficult to put in enough detail without trying to explain everything in one go.
Stop light looks good (always a great familiar examples), are you planning some super-sized LEDs/lights to be driven?
Just finished up our local Maker Festival. Your stop-light idea got me a write up in the local paper (), but notice the lack of mention of “Scratch”, “Arduino”, or “Raspbery Pi”. I’ll work on a detailed blog post, but basically we had both an Arduino and Pi hooked to boards (with lego) and the kids could program them using Scratch on either device, C via the IDE on Arduino, or Python via IDLE on the Pi. It worked out well for a wide age range.
I would like to get some educational start up tutorials made and am now thinking of possible topics that would be suitable.
Great – that’s what we need 🙂 Just make sure that anything you do is easiliy modifieable as Scratch GPIO V2.0 is developemnt is under way (Power variable has been introduced and all pins default to inputs and any can be used as outputs are examples of changes done sos far) I’ve created a ScratchGPIO login on the Scratch site – contact me via email [email protected] for password if you want to upload code examples to it 🙂
At digital side, the ‘Keyword’ to enable the digital pin as “pin&pinnumber”, i try to use Tsop sensor connect to analogy input, how shall i enable it, is there any keyword for that.?
I have not coded Analogue support yet – sorry 🙁 What is Tsop sensor?
Nice approach to bridging out from Scratch. I’ve been looking at a way to do this for a while so this is going to be a great help. My plan is to create a bridge to MQTT so I can remove the need to run scratch on the PI (I have a Java based MQTT bridge created already). I have a couple of questions which I hope you might be able to answer (or point me in the right direction 😉 ).
1) Do you have details on the message payload which is passed via the Mesh socket?
2) Have you had any experience enabling Mesh on a MacOS based version of Scratch? I’ve followed the instructions have adjusted the underlying smalltalk class to enable the share menu item but using shift + click isn’t working. I can work around this by using one of your samples and stripping out the code and leaving the shell (which looks to me to include the mesh settings) but it would be good to get this working. Also are you able to confirm that the mesh settings are embedded in the projects .sb file?
Id MQTT this ?I don’t undeerstand why you want to bridge out from Scratch but also remove need to run Scratch????? 🙂
The message payload is just a series of white spaced broadcast values, variable names and variable values
I’ve no experience of MacOS at all – I know people have used my ScA code on Mac computers to control Arduinos from Scratch->Python->Firmata and that works fine.
I know each project knows whether Remote Sensor Connections are enabled for itself so it must be held as some sort of flag in the .sb file.
I’m a very pragmatic programmer – if it works – I’m happy 🙂
Simon
🙂 So the reason for using MQTT is so I can allow my son to develop using Scratch on one of my Macs and have the code drive the GPIO on the PI rather than VNC in to the PI to get access to Scratch running on the actual PI. I’ve used this approach to drive the GPIO on the PI from an Android based phone (my Galaxy SII).
Thanks for the heads up on the message payload I’ll look to know up a simple Java app and see where I get to (I’m a bit more of a Java coder than Python).
No worries re lack of MacOS experience. I’ve tried to enable Mesh on Ubuntu as well with no joy so I think I may be missing a step. Given I have a work around and I only need to work on the loopback address I can live without getting this fixed. If I get this working then the flow will look like:
Scratch -> Java -> MQTT -> Mosquitto -> Java -> GPIO
Given MQTT is a Pub/Sub protocol this will allow me to flow data to and from the PI.
Either way should be a fun project
I didn’t know any python 6 months ago – its like easy peasy baby Java without { and } 🙂
I’m with yoru approach – you just want to treat the RPi as an intelligent remote in/out device 🙂
That’s very similar to what I’ve started on ScA (Scratc h controlling Arduino) except I basically re-using the stuff from this project on that one to save wheel re-invention 🙂
When you’ve got something ready to demo – give me a shout as I’d be very interested in it 🙂
Yeah Python isn’t a major issue and I have used it but I can live with out another language to program with 😉 Also I have some nice libraries to drive MQTT from Java which I can re-use here 🙂
I’ll certainly create a blog post if I get it all going and will drop a link in here.
Not had a lot of time to tinker with the Java code but managed to get some time this evening. Receiving updates and broadcasts from Scratch is working fine but the reverse is proving to be a bit more of a challenge. Broadcasts are being picked up by Scratch fine but sensor-update is failing. Did you hit any issues getting this working with Python?
Not at all – prob just going to be a little syntax error
this is prob the relevent code used
def broadcast_pin_update(self, pin_index, value):
#sensor_name = “gpio” + str(GPIO_NUM[pin_index])
#bcast_str = ‘sensor-update “%s” %d’ % (sensor_name, value)
#print ‘sending: %s’ % bcast_str
#self.send_scratch_command(bcast_str)
if ADDON_PRESENT[0] == 1:
#do ladderboard stuff
switch_array = array(‘i’,[3,4,2,1])
#switch_lookup = array(‘i’,[24,26,19,21])
sensor_name = “switch” + str(switch_array[pin_index-10])
else:
sensor_name = “pin” + str(PIN_NUM[pin_index])
bcast_str = ‘sensor-update “%s” %d’ % (sensor_name, value)
print ‘sending: %s’ % bcast_str
self.send_scratch_command(bcast_str)
def send_scratch_command(self, cmd):
n = len(cmd)
a = array(‘c’)
a.append(chr((n >> 24) & 0xFF))
a.append(chr((n >> 16) & 0xFF))
a.append(chr((n >> 8) & 0xFF))
a.append(chr(n & 0xFF))
self.scratch_socket.send(a.tostring() + cmd)
I never wrote this bit- its generic Python->Scratch code
Simon
The strange thing is my Java Code is really just a port of this and the broadcast command works but not the sensor-update. Looks like some more digging around is required.
So I’ve managed to pull together a basic MQTT / Scratch bridge – if you are interested you can find a blog entry here:
Hello,
I see that you have a bit of CodeBug and Arduino coverage. I was wondering if you planned on covering any other dev boards in the near future?
Thanks,
Brandon
I’m actively doing stuff for the Crumble, Codebug and Raspberry Pi. There is a lot of stuff available for Arduino so I’d use those and not mine 🙂
If another cheap board comes out then I’d probably be interested – did you have one in mind?
Simon | https://simplesi.net/scratch-gpio-development/ | CC-MAIN-2022-27 | en | refinedweb |
, scalable full stack applications, powered by AWS.
So, Here we will explore another use case of AWS amplify, which is to add the Authentication to the angular web app using amplify in 2 ways,
1. Use pre-built UI components
2. Call Authentication APIs manually
Prerequisites:
- Node.js version 10 or later
- AWS Account
Create a new Angular application
Use the Angular CLI to bootstrap a new Angular app:
npx -p @angular/cli ng new amplify-app ? Would you like to add Angular routing? Y ? Which stylesheet format would you like to use? (your preferred stylesheet provider) cd amplify-app
Angular 6+ Support
Currently, the newest versions of Angular (6+) do not include shims for ‘global’ or ‘process’ as provided in previous versions. Add the following to your
src/polyfills.ts file to recreate them:
(window as any).global = window; (window as any).process = { env: { DEBUG: undefined }, };
Initiate the Amplify project
Now that we have a running Angular app, it’s time to set up Amplify for this app
amplify init Enter a name for the project (amplify-auth-angular) # All AWS services you provision for your app are grouped into an "environment" # A common naming convention is dev, staging, and production Enter a name for the environment (dev) # Sometimes the CLI will prompt you to edit a file, it will use this editor to open those files. Choose your default editor # Amplify supports JavaScript (Web & React Native), iOS, and Android apps Choose the type of app that you're building (javascript) What JavaScript framework are you using (angular) Source directory path (src) Distribution directory path (dist) Change from dist to dist/amplify-auth-angular Build command (npm run-script build) Start command (ng serve or npm start) # This is the profile you created with the `amplify configure` command in the introduction step. Do you want to use an AWS profile
The above given process creates a file called
aws-exports.js in the
src directory
Install Amplify and Angular dependencies
Inside the
amplify-auth-angular directory, install the Amplify Angular library and run your app:
npm install --save aws-amplify @aws-amplify/ui-angular npm start
The
@aws-amplify/ui-angular package is a set of Angular components and an Angular provider which helps integrate your application with the AWS-Amplify library.
Create authentication service in Angular
To start from scratch, run the following command in your project’s root folder:
If you want to re-use an existing authentication resource from AWS (e.g. Amazon Cognito UserPool or Identity Pool) refer to this section.
amplify add auth
? Do you want to use the default authentication and security configuration? Default configuration ? How do you want users to be able to sign in? Username ? Do you want to configure advanced settings? No, I am done.
To deploy the service, run the
push command:
amplify push
Now, the authentication service has been deployed and you can start using it. To view the deployed services in your project at any time, go to Amplify Console by running the following command:
Set AWS Amplify Config to the App
In your app’s entry point (i.e. main.ts), import and load the configuration file:
import { Amplify } from '@aws-amplify/core'; import { Auth } from '@aws-amplify/auth'; import awsconfig from './aws-exports'; Amplify.configure(awsconfig); Auth.configure(awsconfig);
Import AWS Amplify UI module
File: app.module.ts
import { BrowserModule } from '@angular/platform-browser'; import { NgModule } from '@angular/core'; /* import AmplifyUIAngularModule */ import { AmplifyUIAngularModule } from '@aws-amplify/ui-angular'; import { AppRoutingModule } from './app-routing.module'; import { AppComponent } from './app.component'; @NgModule({ declarations: [ AppComponent ], imports: [ BrowserModule, AppRoutingModule, /* configure app with AmplifyUIAngularModule */ AmplifyUIAngularModule ], providers: [], bootstrap: [AppComponent] }) export class AppModule { }
Replace the content inside of app.component.ts with the following:
import { Component, ChangeDetectorRef } from '@angular/core'; import { onAuthUIStateChange, CognitoUserInterface, AuthState } from '@aws-amplify/ui-components'; @Component({ selector: 'app-root', templateUrl: './app.component.html', styleUrls: ['./app.component.css'] }) export class AppComponent { title = 'amplify-angular-auth'; user: CognitoUserInterface | undefined; authState: AuthState; constructor(private ref: ChangeDetectorRef) {} ngOnInit() { onAuthUIStateChange((authState, authData) => { this.authState = authState; this.user = authData as CognitoUserInterface; this.ref.detectChanges(); }) } ngOnDestroy() { return onAuthUIStateChange; } }
Replace the content inside of app.component.html with the html code given,
<amplify-authenticator *</amplify-authenticator> <div * <amplify-sign-out></amplify-sign-out> <div>Hello, {{user.username}}</div> <!-- This is where you application template code goes --> </div>
Customization of the UI with Angular
If you’d like to customize the form fields in the Authenticator Sign In or Sign Up component, you can do so by using the
formFields property.
In app.component.ts add formFields variable as following:
formFields = [ { type: "email", label: "Custom email Label", placeholder: "custom email placeholder", required: true, }, { type: "password", label: "Custom Password Label", placeholder: "custom password placeholder", required: true, }, { type: "phone_number", label: "Custom Phone Label", placeholder: "custom Phone placeholder", required: false, }, ]; }
File: app.componen.html
<amplify-authenticator <amplify-sign-up</amplify-sign-up> <amplify-sign-in</amplify-sign-in> </amplify-authenticator>
To see more options:
Sample Angular Code in Github
If you face any issues at any point of time, you can go through the full implementation here,
Unimedia Technology
Here at Unimedia Technology we have a team of Angular Developers that can develop your most challenging Web Single Page applications using the latest AWS technologies. | https://www.unimedia.tech/2020/12/12/implement-authentication-in-angular-using-aws-amplify/?lang=ca | CC-MAIN-2022-27 | en | refinedweb |
#include <jevois/GPU/ImGuiBackend.H>
Backend for ImGui on JeVois-Pro.
This abstract base class provides an interface used by VideoDisplayGUI. Derived implementations are available for SDL2 (to be used on JeVois-Pro host) and Mali framebuffer + evdev (to be used on JeVois-Pro platform).
Definition at line 33 of file ImGuiBackend.H.
Constructor.
Definition at line 23 of file ImGuiBackend.C.
Virtual destructor for safe inheritance, free resources.
Definition at line 27 of file ImGuiBackend.C.
Initialize the underlying engine that will process events, create windows, etc.
The init starts with the given initial window/framebuffer size. | http://jevois.org/doc/classjevois_1_1ImGuiBackend.html | CC-MAIN-2022-27 | en | refinedweb |
Spark providing us a high-level API – Dataset, which makes it easy to get type safety and securely perform manipulation in a distributed and a local environment without code changes. Also, spark structured streaming, a high-level API for stream processing allows us to stream a particular Dataset which is nothing but a type-safe structured streams. In this blog, we will see how we can create a type-safe structured streams using spark.
To create a type-safe structured stream first we need to read a Dataset. So, we will read a Dataset from socket basically, from a NetCat utility. We will paste some JSON data in the NetCat program to create Streaming Dataset. Let’s First create an entry point to our structured Dataset i.e, spark session.
val spark = SparkSession.builder() .appName("Streaming Datasets") .master("local[2]") .getOrCreate()
Create Streaming Dataset
Streaming Datasets can be created through the
DataStreamReader interface returned by
SparkSession.readStream().
// include encoders for DF -> DS transformations import spark.implicits._ def readCars(): Dataset[Car] = { spark.readStream .format("socket") .option("host", "localhost") .option("port", 12345) .load() // DF with single string column "value" .select(from_json(col("value"), carsSchema).as("car")) // composite column (struct) .selectExpr("car.*") // DF with multiple columns .as[Car] // encoder can be passed implicitly with spark.implicits }
In the above code snippet, we have a method readCars() in which we are reading the data using spark.readStream, that is pasted into the NetCat program. Use the below command to run a NetCat program:
nc -lk 12345
Then paste some JSON data into the NetCat program. For example:
{"Name":"chevrolet impala", "Miles_per_Gallon":14, "Cylinders":8, "Displacement":454, "Horsepower":220, "Weight_in_lbs":4354, "Acceleration":9, "Year":"1970-01-01", "Origin":"USA"} {"Name":"plymouth fury iii", "Miles_per_Gallon":14, "Cylinders":8, "Displacement":440, "Horsepower":215, "Weight_in_lbs":4312, "Acceleration":8.5, "Year":"1970-01-01", "Origin":"USA"} {"Name":"pontiac catalina", "Miles_per_Gallon":14, "Cylinders":8, "Displacement":455, "Horsepower":225, "Weight_in_lbs":4425, "Acceleration":10, "Year":"1970-01-01", "Origin":"USA"}
After reading the above JSON data using spark.readStream, we will get a DataFrame with a single string column “value”. From this DataFrame, create a composite column by passing the Schema for the JSON data.
val carsSchema = StructType(Array( StructField("Name", StringType), StructField("Miles_per_Gallon", DoubleType), StructField("Cylinders", LongType), StructField("Displacement", DoubleType), StructField("Horsepower", LongType), StructField("Weight_in_lbs", LongType), StructField("Acceleration", DoubleType), StructField("Year", StringType), StructField("Origin", StringType) ))
And Finally, converted the DataFrame into a Dataset with as[Car] function in which we need to pass a type argument. In our case, we pass a type Car which has the same structure that the above schema is compatible with.
case class Car( Name: String, Miles_per_Gallon: Option[Double], Cylinders: Option[Long], Displacement: Option[Double], Horsepower: Option[Long], Weight_in_lbs: Option[Long], Acceleration: Option[Double], Year: String, Origin: String )
When we want to convert a DataFrame to a Dataset we need an Encoder. An encoder is a Data structure which tells spark how a row should be converted to a JVM object of Car Type. So, to do that either we can pass Encoder explicitly as[Car] (carEncoder) or let the compiler fetch implicit one for us. And finally readCars() method will give a Dataset of type Car.
Now, let’s do some transformation with the above created Dataset. Let’s show the car names from the Dataset with maintaining type info.
def showCarNames() = { val carsDS: Dataset[Car] = readCars() // collection transformations maintain type info val carNames: Dataset[String] = carsDS.map(_.Name) carNames.writeStream .format("console") .outputMode("append") .start() .awaitTermination() }
You can find complete code here
Happy blogging!! | https://blog.knoldus.com/spark-streaming-datasets/ | CC-MAIN-2022-27 | en | refinedweb |
I’m trying to get all the words made from the letters, ‘crbtfopkgevyqdzsh’ from a file called web2.txt. The posted cell below follows a block of code which improperly returned the whole run up to a full word e.g. for the word shocked it would return s, sh, sho, shoc, shock, shocke, shocked
So I tried a trie (know pun intended).
web2.txt is 2.5 MB in size, and contains 2,493,838 words of varying length. The trie in the cell below is breaking my Google Colab notebook. I even upgraded to Google Colab Pro, and then to Google Colab Pro+ to try and accommodate the block of code, but it’s still too much. Any more efficient ideas besides trie to get the same result?
# Find the words3 word list here: svnweb.freebsd.org/base/head/share/dict/web2?view=co trie = {} with open('/content/web2.txt') as words3: for word in words3: cur = trie for l in word: cur = cur.setdefault(l, {}) cur['word'] = True # defined if this node indicates a complete word def findWords(word, trie = trie, cur = '', words3 = []): for i, letter in enumerate(word): if letter in trie: if 'word' in trie[letter]: words3.append(cur) findWords(word, trie[letter], cur+letter, words3 ) # first example: findWords(word[:i] + word[i+1:], trie[letter], cur+letter, word_list ) return [word for word in words3 if word in words3] words3 = findWords("crbtfopkgevyqdzsh")
I’m using Pyhton3
>Solution :
A trie is overkill. There’s about 200 thousand words, so you can just make one pass through all of them to see if you can form the word using the letters in the base string.
This is a good use case for
collections.Counter, which gives us a clean way to get the frequencies (i.e. "counters") of the letters of an arbitrary string:
from collections import Counter base_counter = Counter("crbtfopkgevyqdzsh") with open("data.txt") as input_file: for line in input_file: line = line.rstrip() line_counter = Counter(line.lower()) # Can use <= instead if on Python 3.10 if line_counter & base_counter == line_counter: print(line) | https://devsolus.com/2022/06/23/trie-is-using-too-much-memory/ | CC-MAIN-2022-27 | en | refinedweb |
Hi, I am testing the map method in a simple React application and I am stuck with a propagation problem. The code is very simple: I have a render method:
<div className={styles.tagContainer}> { this.state.items.map((item, key) => { return ( <div key={key} onClick={(e) => {this.onClickTag(e, item); return false;}}> <p className={`${this.state.active !== true ? styles.tagNormalState : styles.tagSelectedState}`}>{item.Title}</p> </div> ); })} </div>
This method is rendering a serie of div elements with styled p-elements. The div element has an onClick event that when clicked it shoul call another method called onClickTag which is this:
private onClickTag(e: React.MouseEvent<HTMLElement>,item): any { e.stopPropagation(); this.setState({ active: true }); }
right now this method only shange the state to set another color to the div-element when is clicked. The problem is that is not just the clicked element that get the new color but all the items. I tryed to use the e.stopPropagation() to stop this but is not working. Do you any idea what the problem can be? Best regads Americo | https://forum.freecodecamp.org/t/stop-propagation-in-react-mapped-elements/259444 | CC-MAIN-2022-27 | en | refinedweb |
You must have noticed that we are using Action Keywords like 'click_MyAccount()', which is not at all a good practice. As there will be thousands of elements in any application and to click those elements we have to write thousands of click action keywords. So ideally there will be just one click action that should work on every other element on the test application. To achieve that, it is needed to separate the action from the object. For e.g. there will be an object called 'MyAccount' and an action called 'click()' and it would work like this: 'MyAccount.click()'
So our next task is to separate all the objects from the actions. To achieve that we need to create an Object Repository, which will hold all the objects properties in it and then those properties can be used in the main driver script. We can easily do this with the help of Property file. Normally, Java properties file is used to store project configuration data or settings. In this tutorial, we will show you how to set up and use a properties file. Time for action now.
Step 1: Set Up Object Repository properties file
- Create a new property file by right-clicking on 'config' package and select New > File and name it as 'OR'.
- Now take out all the object properties from Action Keywords class and put in 'OR' file.
- All the objects will be defined like this: Object Name = Object Property, where object property is element locator.
OR text File:
# Home Page Objects btn_MyAccount=.//*[@id='account']/a btn_LogOut=.//*[@id='account_logout'] # Login Page Object txtbx_UserName=.//*[@id='log'] txtbx_Password=.//*[@id='pwd'] btn_LogIn=.//*[@id='login']
Step 2: Modify Data Engine sheet to separate Page Objects with Actions
- Insert an extra row in the 'dataEngine' excel sheet just before the 'Action Keywords' column.
- Name this new row as 'Page Object'.
- Add all the objects in the 'Page Object' column.
- Remove the objects names from the Action Keywords, only actions should be left in the Action Keywords column.
Excel file now look like this:
Step 3: Modify Action Keyword class to work with OR properties
- There were three click methods (click_MyAccount(), click_Login(), click_Logout()) in the previous chapter, replace all the click methods with just one method and name it as 'click()'.
- Now when we have just one click method, this method should know on which element to perform click action. For that it is required to pass the element as an argument to this method.
- This argument will be the Object name taken from the 'Page Object' column in the excel sheet.
- Modify the action methods, so that it can use the OR properties.
Keyword Action Class:
package config; import java.util.concurrent.TimeUnit; import static executionEngine.DriverScript.OR; import org.openqa.selenium.By; import org.openqa.selenium.WebDriver; import org.openqa.selenium.firefox.FirefoxDriver; public class ActionKeywords { public static WebDriver driver; //All the methods in this class now accept 'Object' name as an argument public static void openBrowser(String object){ driver=new FirefoxDriver(); } public static void navigate(String object){ driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS); driver.get(Constants.URL); } public static void click(String object){ //This is fetching the xpath of the element from the Object Repository property file driver.findElement(By.xpath(OR.getProperty(object))).click(); } public static void input_UserName(String object){ driver.findElement(By.xpath(OR.getProperty(object))).sendKeys(Constants.UserName); } public static void input_Password(String object){ driver.findElement(By.xpath(OR.getProperty(object))).sendKeys(Constants.Password); } public static void waitFor(String object) throws Exception{ Thread.sleep(5000); } public static void closeBrowser(String object){ driver.quit(); } }
Note: We have still used 'input_Password()' and not detached this object from action. We will take care of this in next coming chapter of Data Driven.
Note: If you see carefully, object argument is passed in every method, even if it is not required in the method, such as 'closeBrowser()'. This is the mandatory condition of the reflection class that all the methods will have same arguments, even if the argument is not used in some methods.
Step 4: Changes in Constants class
- New entry in Constants class for the new column of Page Objects.
- Modify the value of the Action Keyword column, as we inserted a new column before the Action Keword column in dataEngine excel sheet.
Constants Variable Class:
package config; public class Constants { //List of System Variables public static final String URL = ""; public static final String Path_TestData = "D://Tools QA Projects//trunk//Hybrid KeyWord Driven//src//dataEngine//DataEngine.xlsx"; public static final String Path_OR = "D://Tools QA Projects//trunk//Hybrid KeyWord Driven//src//config//OR.txt"; public static final String File_TestData = "DataEngine.xlsx"; //List of Data Sheet Column Numbers public static final int Col_TestCaseID = 0; public static final int Col_TestScenarioID =1 ; //This is the new column for 'Page Object' public static final int Col_PageObject =3 ; //This column is shifted from 3 to 4 public static final int Col_ActionKeyword =4 ; //List of Data Engine Excel sheets public static final String Sheet_TestSteps = "Test Steps"; //List of Test Data public static final String UserName = "testuser_3"; public static final String Password = "[email protected]"; }
Step 5: Load OR properties in the Driver Script
Driver Script:
package executionEngine; import java.io.FileInputStream; import java.lang.reflect.Method; import java.util.Properties; import config.ActionKeywords; import config.Constants; import utility.ExcelUtils; public class DriverScript { public static Properties OR; public static ActionKeywords actionKeywords; public static String sActionKeyword; public static String sPageObject; public static Method method[]; public DriverScript() throws NoSuchMethodException, SecurityException{ actionKeywords = new ActionKeywords(); method = actionKeywords.getClass().getMethods(); } public static void main(String[] args) throws Exception { String Path_DataEngine = Constants.Path_TestData; ExcelUtils.setExcelFile(Path_DataEngine, Constants.Sheet_TestSteps); //Declaring String variable for storing Object Repository path String Path_OR = Constants.Path_OR; //Creating file system object for Object Repository text/property file FileInputStream fs = new FileInputStream(Path_OR); //Creating an Object of properties OR= new Properties(System.getProperties()); //Loading all the properties from Object Repository property file in to OR object OR.load(fs); for (int iRow=1;iRow<=9;iRow++){ sActionKeyword = ExcelUtils.getCellData(iRow, Constants.Col_ActionKeyword); sPageObject = ExcelUtils.getCellData(iRow, Constants.Col_PageObject); execute_Actions(); } } private static void execute_Actions() throws Exception { for(int i=0;i<method.length;i++){ if(method[i].getName().equals(sActionKeyword)){ //This is to execute the method or invoking the method //Passing 'Page Object' name and 'Action Keyword' as Arguments to this method method[i].invoke(actionKeywords,sPageObject); break; } } } }
Note: Name of the object in the OR file is case sensitive, it has to matched exactly the same way it is mentioned in the DataEngine sheet.
Project folder would look like this:
Similar Articles
| https://www.toolsqa.com/selenium-webdriver/keyword-driven-framework/object-repository/ | CC-MAIN-2022-27 | en | refinedweb |
.
The Go client constructor accepts the
elasticsearch.Config{} type, which provides a number of options to control the behaviour:
$ go doc -short github.com/elastic/go-elasticsearch/v7.Config type Config struct { Addresses []string // A list of Elasticsearch nodes to use. // ... }
Let's review the available options and examples of their usage.
Endpoints and security
The first thing you might want to do, unless you're just experimenting with the client locally, is to point it to a remote cluster. The most straightforward way of doing that is to export the
ELASTICSEARCH_URL variable with a comma-separated list of node URLs. This follows the "Twelve-Factor" recommendation, leaves the configuration entirely out of the codebase, and plays well with cloud functions/lambdas and container orchestration systems such as Kubernetes.
// In main.go es, err := elasticsearch.NewDefaultClient() // ... // On the command line $ ELASTICSEARCH_URL= go run main.go // For Google Cloud Function $ gcloud functions deploy myfunction --set-env-vars ELASTICSEARCH_URL= ...
To configure the cluster endpoints directly (e.g., when you load them from a configuration file, or retrieve them from a secrets management system such as Vault) use the
Addresses field with a slice of strings with URLs of the nodes you want to connect to, and the
Username and
Password fields for authentication:
var ( clusterURLs = []string{"", "", ""} username = "foo" password = "bar" ) cfg := elasticsearch.Config{ Addresses: clusterURLs, Username: username, Password: password, } es, err := elasticsearch.NewClient(cfg) // ...
Use the
APIKey field for authentication with the API keys, which are easier to manage via the Elasticsearch API or Kibana than usernames and passwords.
When using Elasticsearch Service on Elastic Cloud, you can point the client to the cluster by using the Cloud ID:
cfg := elasticsearch.Config{ CloudID: "my-cluster:dXMtZWFzdC0xLZC5pbyRjZWM2ZjI2MWE3NGJm...", APIKey: "VnVhQ2ZHY0JDZGJrUW...", } es, err := elasticsearch.NewClient(cfg) // ...
Note: Don't forget that you still need to provide the authentication credentials when using the Cloud ID.
A common need in custom deployments or security-focused demos is to provide a certificate authority to the client so it can verify the server certificate. This can be achieved in multiple ways, but the most straightforward option is to use the
CACert field, passing it a slice of bytes with the certificate payload:
cert, _ := ioutil.ReadFile("path/to/ca.crt") cfg := elasticsearch.Config{ // ... CACert: cert, } es, err := elasticsearch.NewClient(cfg) // ...
Have a look at the
_examples/security folder of the repository for a full demo, complete with generating custom certificates with the
elasticsearch-certutil command-line utility and starting the cluster with corresponding configuration.
Global headers
In specific scenarios such as when using a token-based authentication or when interacting with proxies, you might need to add HTTP headers to client requests. While you can add them in the API calls, using the
WithHeader() method, it's more convenient to set them globally, in the client configuration:
cfg := elasticsearch.Config{ // ... Header: http.Header(map[string][]string{ "Authorization": {"Bearer dGhpcyBpcyBub3QgYSByZWFs..."}, }), } es, err := elasticsearch.NewClient(cfg) // ...
Logging and metrics
During development, it is crucial to closely follow what is being sent to the Elasticsearch cluster and what is being received. The easiest way of achieving that is to simply print the details of the request and response to the console or a file. The Go client package provides several distinct logger components. For debugging during development, the
estransport.ColorLogger is perhaps the most useful one — it prints succinct, formatted, and colorized information to the console:
cfg := elasticsearch.Config{ Logger: &estransport.ColorLogger{Output: os.Stdout}, } es, _ := elasticsearch.NewClient(cfg) es.Info() // > GET 200 OK 11ms
By default, the request and response body is not printed — in order to do so, set the corresponding logger options:
cfg := elasticsearch.Config{ Logger: &estransport.ColorLogger{ Output: os.Stdout, EnableRequestBody: true, EnableResponseBody: true, }, } es, _ := elasticsearch.NewClient(cfg) es.Info() // > GET 200 OK 6ms // > « { // > « "name" : "es1", // > « "cluster_name" : "go-elasticsearch", // > ...
Note: The
estransport.TextLogger component prints the information without using any special characters and colors.
When asking for help in our forums, it is often useful to present the list of operations in an agnostic format so they can be easily "replayed" locally, without understanding or installing a particular programming language. An ideal way of doing that is to format the output as a sequence of executable
curl commands with the
estransport.CurlLogger:
cfg := elasticsearch.Config{ Logger: &estransport.CurlLogger{ Output: os.Stdout, EnableRequestBody: true, EnableResponseBody: true, }, } es, _ := elasticsearch.NewClient(cfg) es.Index( "test", strings.NewReader(`{"title" : "logging"}`), es.Index.WithRefresh("true"), es.Index.WithPretty(), es.Index.WithFilterPath("result", "_id"), ) // > curl -X POST -H 'Content-Type: application/json' '' -d \ // > '{ // > "title": "logging" // > }' // > # => 2020-07-23T13:12:05Z [201 Created] 65ms // > # { // > # "_id": "_YPNe3MBdF-KdkKEZqF_", // > # "result": "created" // > # }
When logging the client operations in production, plain text output is not suitable, because you want to store the logs as structured data, quite possibly in Elasticsearch itself. In this case, you can use the
estransport.JSONLogger to output the entries as JSON documents in an Elastic Common Schema (ECS)-compatible format:
cfg := elasticsearch.Config{ Logger: &estransport.JSONLogger{Output: os.Stdout}, } es, _ := elasticsearch.NewClient(cfg) es.Info() // > {"@timestamp":"2020-07-23T13:12:05Z","event":{"duration":10970000},"url":{"scheme":"http","domain":"localhost","port":9200,"path":"/","query":""},"http":{"request":{"method":"GET"},"response":{"status_code":200}}}
In keeping with the spirit of client extensibility, all the listed loggers are just implementations of the
estransport.Logger interface. To use a custom logger, simply implement this interface:
$ go doc -short github.com/elastic/go-elasticsearch/v7/estransport.Logger type Logger interface { LogRoundTrip(*http.Request, *http.Response, error, time.Time, time.Duration) error // ... }
The
_examples/logging/custom.go example demonstrates how to use the
github.com/rs/zerolog package as the logging "driver" by implementing the interface for a
CustomLogger type.
Another feature of the client, useful in production or for debugging, is the ability to export various metrics about itself: the number of requests and failures, the response status codes, and the details about the connections. Set the
EnableMetrics option to
true, and use the
Metrics() method to retrieve the information. The
_examples/instrumentation/expvar.go example shows an integration with the
expvar package.
Note: The
_examples/instrumentation folder contains interactive demos of integrating the Go client with Elastic APM and OpenCensus as well.
Retries
We’ve seen how the client manages connections and retries requests for specific conditions. Now let's have a look at the related configuration options.
By default, the client retries the request up to three times; to set a different limit, use the
MaxRetries field. To change the list of response status codes which should be retried, use the
RetryOnStatus field. Together with the
RetryBackoff option, you can use it to retry requests when the server sends a
429 Too Many Requests response:
cfg := elasticsearch.Config{ RetryOnStatus: []int{429, 502, 503, 504}, RetryBackoff: func(i int) time.Duration { // A simple exponential delay d := time.Duration(math.Exp2(float64(i))) * time.Second fmt.Printf("Attempt: %d | Sleeping for %s...\n", i, d) return d }, } es, err := elasticsearch.NewClient(cfg) // ...
Because the
RetryBackoff function only returns a
time.Duration, you can provide a more robust backoff implementation by using a third-party package such as
github.com/cenkalti/backoff:
import "github.com/cenkalti/backoff/v4" retryBackoff := backoff.NewExponentialBackOff() retryBackoff.InitialInterval = time.Second cfg := elasticsearch.Config{ RetryOnStatus: []int{429, 502, 503, 504}, RetryBackoff: func(i int) time.Duration { if i == 1 { retryBackoff.Reset() } d := retryBackoff.NextBackOff() fmt.Printf("Attempt: %d | Sleeping for %s...\n", i, d) return d }, } es, err := elasticsearch.NewClient(cfg) // ...
Discovering nodes
The
DiscoverNodesOnStart and
DiscoverNodesInterval settings control whether the client should perform node discovery ("sniffing") during initialization or periodically; both are disabled by default.
Note: Enable node discovery only when the client is connected to the cluster directly, not when the cluster is behind a proxy, which is also the case when using Elasticsearch Service.
The
ConnectionPoolFunc field allows to provide a custom connection pool implementation, which satisfies the
estransport.ConnectionPool interface. However, this is generally useful only in complicated and esoteric network topologies.
A custom connection selector, passed through the
Selector field, might be more practically useful in certain situations, like the AWS zone-aware one we looked at in the previous article, or to implement a weighted round robin selector.
The most invasive configuration option is the
Transport field, which allows you to completely replace the default HTTP client used by the package, namely
http.DefaultTransport. You might want to do this in situations where you want to configure timeouts, TLS or proxy settings, or any other low-level HTTP details:
cfg := elasticsearch.Config{ // ... Transport: &http.Transport{ Proxy: ... MaxIdleConnsPerHost: ..., ResponseHeaderTimeout: ..., DialContext: (&net.Dialer{ Timeout: ..., KeepAlive: ..., }).DialContext, TLSClientConfig: &tls.Config{ MinVersion: ..., // ... }, }, } es, err := elasticsearch.NewClient(cfg) // ...
Custom transport
Because the
Transport field accepts any implementation of
http.RoundTripper, it is possible to pass a custom implementation. Let's look at an example where we count the number of requests, following the
_examples/customization.go demo:
type CountingTransport struct { count uint64 } func (t *CountingTransport) RoundTrip(req *http.Request) (*http.Response, error) { atomic.AddUint64(&t.count, 1) return http.DefaultTransport.RoundTrip(req) } tp := CountingTransport{} cfg := elasticsearch.Config{Transport: &tp} es, err := elasticsearch.NewClient(cfg) // ... fmt.Printf("%80s\n", fmt.Sprintf("Total Requests: %d", atomic.LoadUint64(&tp.count)))
Typically, there's no need to replace the default HTTP client with a custom implementation, with one specific exception: mocking the client in unit tests. In the following example, the
mockTransport type defines the
RoundTripFunc field, which allows to return a specific response for specific tests.
// mockTransport defines a mocked transport for unit tests type mockTransport struct { RoundTripFunc func(req *http.Request) (*http.Response, error) } // RoundTripFunc implements http.RoundTripper func (t *mockTransport) RoundTrip(req *http.Request) (*http.Response, error) { return t.RoundTripFunc(req) } func TestClientSuccess(t *testing.T) { tp := &mockTransport{ RoundTripFunc: func(req *http.Request) (*http.Response, error) { // Return a successful mock response return &http.Response{ Status: "200 OK", StatusCode: 200, Body: ioutil.NopCloser(strings.NewReader("HELLO")), }, nil }, } cfg := elasticsearch.Config{Transport: &tp} es, _ := elasticsearch.NewClient(cfg) res, _ := es.Info() t.Log(res) }
The mocked response is printed out when the test is executed:
go test -v tmp/client_mocking_test.go // === RUN TestClientSuccess // TestClientSuccess: client_mocking_test.go:42: [200 OK] HELLO // --- PASS: TestClientSuccess (0.00s) // ...
Note: In specific situations, it is desirable to replace the HTTP client from the standard library with a more performant one, such as
github.com/valyala/fasthttp, which has an observable performance difference. Run the benchmarks in
_examples/fasthttp to measure the difference in your own environment.
This concludes our overview of the Go client configuration options. In the next part, we will focus on different ways of encoding and decoding JSON payloads, and the
esutil.BulkIndexer helper. | https://www.elastic.co/blog/the-go-client-for-elasticsearch-configuration-and-customization | CC-MAIN-2022-27 | en | refinedweb |
optparse — Parser for command line options¶
Source code: Lib/optparse.py
Niezalecane od wersji 3.2: The
optparse module is deprecated and will not be developed further;
development will continue with the
argparse module. the following
]).
Background¶.
Terminology¶
- argument:]”.
- option
an argument used to supply extra information to guide or customize the execution of a program. There are many different syntaxes for options; the traditional Unix syntax is a hyphen („-”) followed by a single letter, e.g.
-xor
-F. Also, traditional Unix syntax allows multiple options to be merged into a single argument, e.g.
-x -Fis equivalent to
-xF. The GNU project introduced
--followed by a series of hyphen-separated words, e.g.
--fileor
--dry-run. These are the only two option syntaxes provided by
optparse.
Some other option syntaxes that the world has seen include: Windows or certain legacy platforms (e.g. VMS, MS-DOS).
- option argument
-atakes an optional argument and
-bis another option entirely, how do we interpret
-ab? Because of this ambiguity,
optparsedoes not support this feature.
-parsedoesn’t prevent you from implementing required options, but doesn’t give you much help at it either.
For example, consider this hypothetical command-line:
prog -v --report report.txt foo bar
-v and
--report are both options. Assuming that
takes one argument,
report.txt is an option argument.
foo and
bar are positional arguments.
What are options for?¶ either one file to another, or several files to another
directory.
What are positional arguments for?¶.
Tutorial¶
While
OptionParsertakes.
Understanding option actions¶ store action¶.
Handling boolean (flag) options¶.
Other actions¶
Some other actions supported by
optparse are:
"store_const"
store a constant value
"append"
append this option’s argument to a list
"count"
increment a counter by one
"callback"
call a specified function
These are covered in section Reference Guide, and section Option Callbacks.
Default values¶.
Generating help¶
optparse’sexpands
%progin the usage string to the name of the current program, i.e.
os.path.basename(sys.argv[0]). The expanded string is then printed before the detailed option help.
If you don’t supply a usage string,
optparseuses a bland but sensible default:
"Usage: %prog [options]", which is fine if your script doesn’t take any positional arguments.
every option defines a help string, and doesn’t worry about line-wrapping—
optparsetakesconverts the destination variable name to uppercase and uses that for the meta-variable. Sometimes, that’s not what you want—for example, the
--filenameoption explicitly sets
metavar="FILE", resulting in this automatically-generated option description:
-f FILE, --filename=FILE
This is important for more than just saving space, though: the manually written help text uses the meta-variable
FILEto clue the user in that there’s a connection between the semi-formal syntax
-f FILEand the informal semantic description „write output to FILE”. This is a simple but effective way to make your help text a lot clearer and more useful for end users.
options that have a default value can include
%defaultin the help string—
optparsewill replace it with
str()of the option’s default value. If an option has no default value (or the default value is
None),
%defaultexpands to
none.
Grouping Options¶)¶
where
parser is the
OptionParserinstance the group will be inserted)¶
Return the
OptionGroupto which the short or long option string opt_str (e.g.
'-o'or
'--option') belongs. If there’s no such
OptionGroup, return
None.
Printing a version string¶:
OptionParser.
print_version(file=None)¶
Print the version message for the current program (
self.version) to file (default stdout). As with
print_usage(), any occurrence of
%progin
self.versionis replaced with the name of the current program. Does nothing if
self.versionis empty or undefined.
OptionParser.
get_version()¶
Same as
print_version()but returns the version string instead of printing it.
How
optparse handles errors¶
There are two broad classes of errors that
optparse has to worry about:
programmer errors and user errors. Programmer errors are usually erroneous
calls to
OptionParser.add_option(), e.g. invalid option strings, unknown
option attributes, missing option attributes, etc. These are dealt with in the
usual way: raise an exception (either
optparse.OptionError or
TypeError) and let the program crash.
Handling user errors is much more important, since they are guaranteed to happen
no matter how stable your code is.
optparse can automatically detect
some user errors, such as bad option arguments (passing
-n 4x where
-n takes an integer argument), missing arguments (
-n at the end
of the command line, where
-n takes an argument of any type). Also,
you can call
OptionParser.error() to signal an application-defined error
condition:
(options, args) = parser.parse_args() ... if options.a and options.b: parser.error("options -a and -b are mutually exclusive")
In either case,
optparse handles the error the same way: it prints the
program’s usage message and an error message to standard error and exits with
error status 2.
Consider the first example above, where the user passes
4x to an option
that takes an integer:
$ /usr/bin/foo -n 4x Usage: foo [options] foo: error: option -n: invalid integer value: '4x'
Or, where the user fails to pass a value at all:
$ /usr/bin/foo -n Usage: foo [options] foo: error: -n option requires an argument
optparse-generated error messages take care always to mention the
option involved in the error; be sure to do the same when calling
OptionParser.error() from your application code.
If
optparse’s default error-handling behaviour does not suit your needs,
you’ll need to subclass OptionParser and override its
exit()
and/or
error() methods.
Putting it all together¶()
Reference Guide¶
Creating the parser¶
The first step in using
optparse is to create an OptionParser instance.
- class
optparse.
OptionParser(...)¶prints the usage string, it expands
%progto
os.path.basename(sys.argv[0])(or to
progif you passed that keyword argument). To suppress a usage message, pass the special value
optparse.SUPPRESS_USAGE.
option_list(default:
[])
A list of Option objects to populate the parser with. The options in
option_listareautomatically adds a version option with the single option string
--version. The substring
%progis expanded the same as for
usage.
conflict_handler(default:
"error")
Specifies what to do when options with conflicting option strings are added to the parser; see section Conflicts between options.
description(default:
None)
A paragraph of text giving a brief overview of your program.
optparsereformprovides two concrete classes for this purpose: IndentedHelpFormatter and TitledHelpFormatter.
add_help_option(default:
True)
If true,
optparsewill add a help option (with option strings
-hand
--help) to the parser.
prog
The string to use when expanding
%progin
usageand
versioninstead of
os.path.basename(sys.argv[0]).
epilog(default:
None)
A paragraph of help text to print after the option help.
Populating the parser¶.)
Defining options¶
Each Option instance represents a set of synonymous command-line option strings,
e.g.
-f and
--file. You can specify any number of short or
long option strings, but you must specify at least one overall option string.
The canonical way to create an
Option instance is with the
add_option() method of
OptionParser.
OptionParser.
add_option(option)¶
OptionParser.
add_option(*opt_str, attr=value, ...)
To define an option with only a short option string:
parser.add_option("-f", attr=value, ...)
And to define an option with only a long option string:
parser.add_option("--foo", attr=value, ...)
The keyword arguments define attributes of the new Option object. The most important option attribute is
action, and it largely determines which other attributes are relevant or required. If you pass irrelevant option attributes, or fail to pass required ones,
optparseraises an
OptionErrorexception explaining your mistake.
An option’s action determines what
optparsedoes when it encounters this option on the command-line. The standard option actions hard-coded into
optparseare:
"store"
store this option’s argument (default)
"store_const"
store a constant value
"store_true"
store
True
"store_false"
store
False
and
destoption attributes; see Standard option actions.)
As you can see, most actions involve storing or updating a value somewhere.
optparse always creates a special object for this, conventionally called
options (it happens to be an instance of
optparse.Values). Option
arguments (and various other values) are stored as attributes of this object,
according to the
dest (destination) option attribute.
For example, when you call
parser.parse_args()
one of the first things
optparse does is create the
options object:
options = Values()
If one of the options in this parser is defined with
parser.add_option("-f", "--file", action="store", type="string", dest="filename")
and the command-line being parsed includes any of the following:
-ffoo -f foo --file=foo --file foo
then
optparse, on seeing this option, will do the equivalent of
options.filename = "foo"
The
type and
dest option attributes are almost
as important as
action, but
action is the only
one that makes sense for all options.
Option attributes¶
The following option attributes may be passed as keyword arguments to
OptionParser.add_option(). If you pass an option attribute that is not
relevant to a particular option, or fail to pass a required option attribute,
optparse raises
OptionError.
Option.
action¶
(default:
"store")
Determines
optparse’s behaviour when this option is seen on the command line; the available options are documented here.
Option.
type¶
(default:
"string")
The argument type expected by this option (e.g.,
"string"or
"int"); the available option types are documented here.
Option.
dest¶
(default: derived from option strings)
If the option’s action implies writing or modifying a value somewhere, this tells
optparsewhere to write it:
destnames an attribute of the
optionsobject that
optparsebuilds as it parses the command line.
Option.
default¶
The value to use for this option’s destination if the option is not seen on the command line. See also
OptionParser.set_defaults().
Option.
nargs¶
(default: 1)
How many arguments of type
typeshould be consumed when this option is seen. If > 1,
optparsewill store a tuple of values to
dest.
Option.
callback¶
For options with action
"callback", the callable to call when this option is seen. See section Option Callbacks for detail on the arguments passed to the callable.
Option.
callback_args¶
Option.
callback_kwargs¶
Additional positional and keyword arguments to pass to
callbackafter the four standard callback arguments.
Option.
Help text to print for this option when listing all available options after the user supplies a
helpoption (such as
--help). If no help text is supplied, the option will be listed without help text. To hide this option, use the special value
optparse.SUPPRESS_HELP.
Standard option actions¶
The various option actions all have slightly different requirements and effects.
Most actions have several relevant option attributes which you may specify to
guide
optparse’s behaviour; a few have required attributes, which you
must specify for any option using that action.
"store"[relevant:
type,
dest,
nargs,
choices]
The option must be followed by an argument, which is converted to a value according to
typeand stored in
dest. If
nargs> 1, multiple arguments will be consumed from the command line; all will be converted according to
typeand stored to
destas a tuple. See the Standard option types section.
If
choicesis supplied (a list or tuple of strings), the type defaults to
"choice".
If
typeis not supplied, it defaults to
"string".
If
destis not supplied,
optparsederives a destination from the first long option string (e.g.,
--foo-barimplies
foo_bar). If there are no long option strings,
optparsederives a destination from the first short option string (e.g.,
-fimpliesis stored in
dest.
Example:
parser.add_option("-q", "--quiet", action="store_const", const=0, dest="verbose") parser.add_option("-v", "--verbose", action="store_const", const=1, dest="verbose") parser.add_option("--noisy", action="store_const", const=2, dest="verbose")
If
--noisyis seen,
optparsewill set
options.verbose = 2
"store_true"[relevant:
dest]
A special case of
"store_const"that stores
Trueto
dest.
"store_false"[relevant:
dest]
Like
"store_true", but stores
False.is supplied, an empty list is automatically created when
optparsefirst encounters this option on the command-line. If
nargs> 1, multiple arguments are consumed, and a tuple of length
nargsis appended to
dest.
The defaults for
typeand
destare the same as for the
"store"action.
Example:
parser.add_option("-t", "--tracks", action="append", type="int")
If
-t3is seen on the command-line,
optparsedoes the equivalent of:
options.tracks = [] options.tracks.append(int("3"))
If, a little later on,
--tracks=4is seen, it does:
options.tracks.append(int("4"))
The
appendaction calls the
appendmethod on the current value of the option. This means that any default value specified must have an
appendmethod.is appended to
dest; as with
"append",
destdefaults to
None, and an empty list is automatically created the first time the option is encountered.
Increment the integer stored at
dest. If no default value is supplied,
destis set to zero before being incremented the first time.
Example:
parser.add_option("-v", action="count", dest="verbosity")
The first time
-vis seen on the command line,
optparsedoes the equivalent of:
options.verbosity = 0 options.verbosity += 1
Every subsequent occurrence of
-vresults
usagestring passed to OptionParser’s constructor and the
helpstring passed to every option.
If no
helpstring is supplied for an option, it will still be listed in the help message. To omit an option entirely, use the special value
optparse.SUPPRESS_HELP.
optparseautomatically adds a
helpoptionparsesees either
-hor
--helpparseterminates your process with
sys.exit(0).
"version"
Prints the version number supplied to the OptionParser to stdout and exits. The version number is actually formatted and printed by the
print_version()method of OptionParser. Generally only relevant if the
versionargument is supplied to the OptionParser constructor. As with
helpoptions, you will rarely create
versionoptions, since
optparseautomatically adds them when needed.
Standard option types¶.
Parsing arguments¶
The whole point of creating and populating an OptionParser is to call its
parse_args() method:
(options, args) = parser.parse_args(args=None, values=None)
where the input parameters are
args
the list of arguments to process (default:
sys.argv[1:])
values
an
optparse.Valuesobject).
Querying and manipulating your option parser¶
The default behavior of the option parser can be customized slightly, and you can also poke around your option parser and see what’s there. OptionParser provides several methods to help you out:
OptionParser.
disable_interspersed_args()¶
Set parsing to stop on the first non-option. For example, if
-aand
-bare both simple options that take no arguments,
optparsenormally()¶
Set parsing to not stop on the first non-option, allowing interspersing switches with command arguments. This is the default behavior.
OptionParser.
get_option(opt_str)¶
Returns the Option instance with the option string opt_str, or
Noneif no options have that option string.
OptionParser.
has_option(opt_str)¶
Return
Trueif the OptionParser has an option with option string opt_str (e.g.,
-qor
--verbose).
OptionParser.
remove_option(opt_str)¶
If the
OptionParserhas an option corresponding to opt_str, that option is removed. If that option provided any other option strings, all of those option strings become invalid. If opt_str does not occur in any option belonging to this
OptionParser, raises
ValueError.
Conflicts between options¶
If)
-
assume option conflicts are a programming error and raise
OptionConflictError
"resolve"
-
resolve option conflicts intelligently (see below)
Cleanup¶.
Other methods¶
OptionParser supports several other public methods:
OptionParser.
set_usage(usage)¶
Set the usage string according to the rules described above for the
usageconstructor keyword argument. Passing
Nonesets the default usage string; use
optparse.SUPPRESS_USAGEto suppress a usage message.
OptionParser.
print_usage(file=None)¶
Print the usage message for the current program (
self.usage) to file (default stdout). Any occurrence of the string
%progin
self.usageis replaced with the name of the current program. Does nothing if
self.usageis empty or not defined.
OptionParser.
get_usage()¶
Same as
print_usage()but returns the usage string instead of printing it.
OptionParser.
set_defaults(dest=value, ...)¶")
Option Callbacks¶
When:
define the option itself using the
"callback"action
write the callback; this is a function (or method) that takes at least four arguments, as described below
Defining a callback option¶
As always, the easiest way to define a callback option is by using the
OptionParser:
type
has its usual meaning: as with the
"store"or
"append"actions, it instructs
optparseto consume one argument and convert it to
type. Rather than storing the converted value(s) anywhere, though,
optparsepasses it to your callback function.
nargs
also has its usual meaning: if it is supplied and > 1,
optparsewill consume
nargsarguments, each of which must be convertible to
type. It then passes a tuple of converted values to your callback.
callback_args
a tuple of extra positional arguments to pass to the callback
callback_kwargs
a dictionary of extra keyword arguments to pass to the callback
How callbacks are called¶will be the full, canonical option string—e.g. if the user puts
--fooon the command-line as an abbreviation for
--foobar, then
opt_strwill be
"--foobar".)
value
is the argument to this option seen on the command-line.
optparsewill only expect an argument if
typeis set; the type of
valuewill be the type implied by the option’s type. If
typefor this option is
None(no argument expected), then
valuewill be
None. If
nargs> 1,
valuew_strandfor storing option values; you don’t need to mess around with globals or closures. You can also access or modify the value(s) of any options already encountered on the command-line.
args
is a tuple of arbitrary positional arguments supplied via the
callback_argsoption attribute.
kwargs
is a dictionary of arbitrary keyword arguments supplied via
callback_kwargs.
Raising errors in a callback¶
The callback function should raise
OptionValueError if there are any
problems with the option or its argument(s).
optparse catches this and
terminates the program, printing the error message you supply to stderr. Your
message should be clear, concise, accurate, and mention the option at fault.
Otherwise, the user will have a hard time figuring out what they did wrong.
Callback example 1: trivial callback¶.
Callback example 2: check option order¶")
Callback example 3: check option order (generalized)¶')
Callback example 4: check arbitrary condition¶.)
Callback example 5: fixed arguments¶.)
Callback example 6: variable arguments¶:
either
--or
-can be option arguments
bare
--(if not the argument to some option): halt command-line processing and discard the
--
bare
-(if not the argument to some option): halt command-line processing but keep the
-(append it to
parser.largs))
Extending
optparse¶
Since the two major controlling factors in how
optparse interprets
command-line options are the action and type of each option, the most likely
direction of extension is to add new actions and new types.
Adding new types¶
To add new types, you need to define your own subclass of
optparse’s
Option class. This class has a couple of attributes that define
optparse’s types:
TYPES and
TYPE_CHECKER.
Option.
TYPES¶
A tuple of type names; in your subclass, simply define a new tuple
TYPESthat builds on the standard one.
Option.
TYPE_CHECKER¶
A dictionary mapping type names to type-checking functions. A type-checking function has the following signature:
def check_mytype(option, opt, value)
where
optionis an
Optioninstance,
optis an option string (e.g.,
-f), and
valueparameter.
Your type-checking function should raise
OptionValueErrorif it encounters any problems.
OptionValueErrortakes a single string argument, which is passed as-is to
OptionParser’s’s¶
Adding new actions is a bit trickier, because you have to understand that
optparse has a couple of classifications for actions:
- „store” actions
actions that result in
optparsestoring a value to an attribute of the current OptionValues instance; these options require a
destattribute to be supplied to the Option constructor.
- „typed” actions
actions that take a value from the command line and expect it to be of a certain type; or rather, a string that can be converted to a certain type. These options require a
typeattribute.
ALWAYS_TYPED_ACTIONS¶
Actions that always take a type (i.e. whose options always take a value) are additionally listed here. The only effect of this is that
optparseassigns the default type,
"string", to options with no explicit type whose action is listed in
ALWAYS_TYPED_ACTIONS.",)and
TYPED_ACTIONS.
to ensure that
optparseassigns the default type of
"string"to
"extend"actions, we put the
"extend"action in
ALWAYS_TYPED_ACTIONSas well.
MyOption.take_action()implements just this one new action, and passes control back to
Option.take_action()for the standard
optparseactions.
valuesis an instance of the optparse_parser.Values class, which provides the very useful
ensure_value()method.
ensure_value()is essentially
getattr()with a safety valve; it is called as
values.ensure_value(attr, value)
If the
attrattribute of
valuesdoesnand
ensure_value()will take care of getting it right when it’s needed. | https://docs.python.org/pl/3.9/library/optparse.html | CC-MAIN-2022-27 | en | refinedweb |
49552/python-error-pygame-error-couldn-t-open-pygame-png
I'm trying to build the snake game using pygame. I'm following this tutorial.
I end up with the following error:
pygame.error: Couldn't open pygame.png
icon = pygame.image.load('spaceship.png')
pygame.error: Couldn't open spaceship.png
I am getting this error,\anyone?
Hope this Helps!!
To learn more, join the online course to do Masters in Python.
Thanks!
Hey, @ Ali,
It will be very helpful if you can post your code. Though I can suggest you something like better to use Relative paths instead.
current_path = os.path.dirname(__file__) # your .py file is located
current_path = os.path.dirname(__file__) # the resources folder path
image_path = os.path.join(resource_path, 'images') # the image folder path
By doing this, wherever you move the folder containing your .py file, its subdirectories (and therefore whatever they contain) can still be accessed without you having to modify your code.
current_path = os.path.dirname(__file__) # Where your .py file is located
resource_path = os.path.join(current_path, 'resources') # The resource folder path
image_path = os.path.join(resource_path, 'images') # The image folder path
player_image = pygame.image.load(os.path.join(image_path, 'spaceship.png'))
I hope this will be helpful.
Hi, @There,
You have to add in the file that has the script.
Every application usually ships with various resources, such as image and data files, configuration files and so on. Accessing those files in the folder hierarchy or in a bundled format for various platforms can become a complete task, for which the resources module can provide ideal supportive application components.
The Resource class allows you to manage different application data in a certain directory, providing a dictionary-style access functionality for your in-application resources.
Hey, @Sumit,
Do you import os?
Just add:
import os
in the beginning, before:
from settings import PROJECT_ROOT
This will import the python's module os, which apparently is used later in the code of your module without being imported.
Hey, @There,
Are you facing the exact same error which is mentioned above while executing your code?
my code is like this
import pygame
import sys
screen = pygame.display.set_mode([680, 480])
pygame.init()
screen.fill([130, 0, 200])
class Block(pygame.sprite.Sprite):
def __init__(self,file_name,location):
img = pygame.image.load(file_name)
pygame.sprite.Sprite.__init__(self)
self.image = pygame.Surface([width,height])
self.image.fill(color)
self.rect = self.image.get_rect()
pygame.image.load('ball(2)')
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
sys.exit()
pygame.display.flip()
So using this code you want to conclude that there is no error after executing it?
add import os at top of your ...READ MORE
Try installing libpng
You can do it with ...READ MORE
You need to download and install the ...READ MORE
pygame.Surface takes in integer values for building ...READ MORE
You must be trying this command in ...READ MORE
file = open('text.txt', . | https://www.edureka.co/community/49552/python-error-pygame-error-couldn-t-open-pygame-png?show=85022 | CC-MAIN-2022-27 | en | refinedweb |
The Entity Services API includes tools for generating code templates and configuration artifacts that enable you to quickly bring up a model-based application.
For example, you can generate code for creating instances and instance envelope documents from raw source and converting instances between different versions of your model. You can also generate an instance schema, TDE template, query options, and database configuration based on a model.
This chapter covers the following topics:
The following steps outline the basic procedure for generating code and configuration artifacts using the Entity Services API. The specifics are described in detail in the rest of this chapter.
object-node) to one of the
es:*-generateXQuery functions or
es.*GenerateJavaScript functions to generate a code module or configuration artifact.
The following diagram illustrates this process. The relevant part of the model is the portion represented by the model descriptor.
The following diagram illustrates the high level flow for creating, deploying and using an instance converter module. The instance converter module is discussed in more detail in Creating an Instance Converter Module.
The following table summarizes the code and artifact generation functions provided by Entity Services. Both the XQuery (
es:*) and Server-Side JavaScript (
es.*) name of each function is included. For more details, see the MarkLogic XQuery and XSLT Function Reference or MarkLogic Server-Side JavaScript Function Reference.
An instance converter helps you create entity instance documents from raw source data. Generate a default instance converter using Entity Services, then customize the code for your application.
An instance converter is a key part of a model-driven application. The instance converter provides functions that facilitate the following tasks:
For more details on envelope documents, see What is an Envelope Document?.
You usually use the instance converter to create entity instance envelope documents and to extract canonical instances for use by downstream entity consumers.
You are expected to customize the generated converter module to meet the needs of your application.
Generate an instance converter from the JSON
object-node or json:object representation of a model descriptor by calling the XQuery function es:instance-converter-generate or the JavaScript function es.instanceConverterGenerate. The result is an XQuery library module containing both model-specific and entity type specific code.
The input to the generator is a JSON descriptor. If you have an XML descriptor, you must first convert it to the expected format; for details, see Working With an XML Model Descriptor. The output of the generator function is an XQuery library module.
You can use the generated code as-is, but most applications will require customization of the converter implementation. For details, see Customizing a Converter Module.
The following example code generates a converter module from a previously persisted descriptor, and then saves the generated code as a file on the filesystem.
You could also insert the converter directly into the modules database, but the converter is an important project artifact that should be placed under source control. You will want to track changes to it as your application evolves.
This section explores the default code generated for an instance converter module. The following topics are covered:
The generated module begins with a module namespace declaration of the following form, derived from the
info section of the model.
module namespace normalizedTitle = "baseUri/title-version";
For example, if your descriptor contains the following metadata:
"info": { "title": "Example", "version": "1.0.0", "baseUri": "" }
Then the converter module will contain the following module namespace declaration. Notice that the leading upper case letter in the
title value (Example) is converted to lower case when used as a namespace prefix.
module namespace example = "";
If the model
info section does not include a
baseUri setting, then the namespace declaration uses the base URI.
If the
baseUri does not end in a forward slash (/), then the module namespace URI is relative. For example, if
baseUri in the previous example is set to, then the module namespace declaration is as follows:
module namespace example = "";
To learn more about the base URI, see Controlling the Model IRI and Module Namespaces.
The converter module implements the following public functions, plus some private utility functions for internal use by these functions.
Each
extract-instance-T function is a starting place for synthesizing an entity instance from raw source data. These functions are where you will apply most of your customizations to the generated code.
The input to an
extract-instance-T function is a node containing the source data. The output is an entity instance represented as a json:object. By default, the instance encapsulates a canonicalized entity with the original source document. This is default envelope document repesentation.
In pseudo code, the generated implementation is as follows:
declare function ns:extract-instance-T( $source-node as node() ) as map:map { normalize the input source reference initialize variables for the values of each entity property initialize an empty instance of type T attach the source data to the instance assign values to the instance properties };
The portion of the function that sets up the entity property values is where you will apply most or all of your customizations. The default implementation assumes a one-to-one mapping between source and entity instance property values.
For example, suppose the model contains a Person entity type, with entity properties firstName, lastName, and fullName. Then the default
extract-instance-Person implementation contains code similar to the following. The section following the begin customizations here comment is where you make most or all of your customizations.
declare function example:extract-instance-Name( $source-node as node() ) as map:map { let $source-node := es:init-source($source, 'Person') (: begin customizations here :) let $id := $source-node/id ! xs:string(.) let $firstName := $source-node/firstName ! xs:string(.) let $lastName := $source-node/lastName ! xs:string(.) let $fullName := $source-node/fullName ! xs:string(.) (: end customizations :) let $instance := es:init-instance($source-node, 'Person') (: Comment or remove the following line to suppress attachments :) =>es:add-attachments($source) return if (empty($source-node/*)) then $instance else $instance => map:with('id', $id) => map:with('firstName', $firstName) => map:with('lastName', $lastName) => map:with('fullName', $fullName) };
If the source XML elements or JSON objects have different names or require a more complex transformation than a simple type cast, customize the implementation. For more details, see Customizing a Converter Module.
Comments in the generated code describe the default implementation in more detail and provide suggestions for common customizations.
Most customization involves changing the portion of each ns
:extract-instance-T function that sets the values of the instance properties.
The default implementation of this portion of an extract function assumes that some property P in the entity instance gets its value from a node of the same name in the source data, and that a simple typecast is sufficient to convert the source value to the instance property type defined by the model.
For example, if an entity type named
Person defines a string-valued property named
firstName, then the generated code in
firstName in
example:extract-instance-Person related to intializing this property looks like the following:
let $firstName := $source-node/firstName ! xs:string(.) ... let $instance := es:init-instance($source-node, 'Person') .... if (empty($source-node/*)) then $instance else $instance ... => map:with('firstName', $firstName) ...
You might need to modify the code to perform a more complex transformation of the value, or extract the value from a different location in the source node. For example, if your source data uses the property name given to hold this information, then you would modify the generated code as follows:
let $firstName := $source-node/given ! xs:string(.)
The following list describes other common customization use cases:
Once you finish customizing the code, you must deploy the module to your App Server before you can use the code. For details, see Deploying Generated Code and Artifacts.
For a more complete example, see Getting Started With Entity Services or the Entity Services examples on GitHub. For details on locating the GitHub examples, see Exploring the Entity Services Open-Source Examples.
You can use the Entity Services API to generate a template for transitioning entity instance data from one version of your model to another. This section covers the following topics:
For an end-to-end example of handling model version changes, see the Entity Services examples on GitHub. For more details, see Exploring the Entity Services Open-Source Examples.
A version translator is an XQuery library module that helps you convert instance data conforming to one model version into another.
The version translator only addresses instance conversion. Model changes can also require changes to other artifacts, such as the TDE template, schema, query options, instance converter, and database configuration. For more details, see Managing Model Changes.
Though you can run the generated translator code as-is, it is meant to serve as a starting point for your customizations. Depending on the ways in which your source and target models differ, you might be required to modify the code.
Generate a version translator using the XQuery function es:version-translator-generate or the JavaScript function es.versionTranslatorGenerate. The output is an XQuery library module that you can customize and install in your modules database.
The inputs to the generator are source and target model descriptors, as JSON. If you have an XML descriptor, you must first convert it to the expected format; for details, see Working With an XML Model Descriptor.
You can use the generated code as-is, but most applications will require customization of the converter implementation. For details, see Customizing a Version Translator Module.
You must install the translator module in your modules database before you can use it. For details, see Deploying Generated Code and Artifacts.
The following example code generates a translator module from previously persisted descriptors, and then saves the generated code as a file on the filesystem. The resulting module is designed to convert instances of version 1.0.0 to instances of version 2.0.0.
You could also insert the translator directly into the modules database, but the translator is an important project artifact that should be placed under source control. You will want to track changes to it as your application evolves.
This section explores the default code generated for a version translator module. This information can help guide your customizations. This section covers the following topics:
The generated module begins with a module namespace declaration of the following form, derived from the
info section of the two models.
module namespace title2-from-title1 = "baseUri2/title2-version2-from-title1-version1";
Where title1 and version1 come from the
info section of the source model, title2 and version2 come from the
info section of the target model, and baseUri2 comes from the
info section of the target model. (The base URI from the source model is unused.) The titles are normalized to all lower case.
For example, suppose the source and target models contain the following info sections, reflecting a change from version 1.0.0 to version 2.0.0 of a model with the title Person. The model title is unchanged between versions.
Then the version translator module will contain the following module namespace declaration.
module namespace person-from-person = "";
If the
info section of the target model does not include a
baseUri setting, then the namespace declaration uses the base URI.
If the target
baseUri does not end in a forward slash (/), then the module namespace URI is relative. For example, if
baseUri in the previous example has no trailing slash, then the module namespace declaration is as follows:
module namespace person-from-person = "";
The version translator module contains a translation function named ns
:convert-instance-T for each entity type T defined in the target model. The module can contain additional functions, but these for internal use by the translator module. The
convert-instance-T functions are the public face of the converter.
For example, if the target model defines a
Name entity type and a
Person entity type and the title of the both the source and target model is
Person, then the generated translation module will contain the following functions:
person-from-person:convert-instance-Name
person-from-person:convert-instance-Person
The input to a
convert-instance-T function should be an entity instance or envelope document conforming to the source model version of type T. The output is an in-memory instance conforming to the target model version of type T, similar to the output from the
extract-instance-T function of an instance converter module.
For each entity type property that is unchanged between the two versions, the default
convert-instance-T code simply copies the value from source instance to target instance. Actual differences, such as a property that only exists in the target model, require customization of the translator. For details, see Customizing a Version Translator Module.
For an example, see
example-version in the Entity Services examples on GitHub. To download a copy of the examples, see Exploring the Entity Services Open-Source Examples.
This section describes some common model changes, how they are handled by the default translation code, and when customizations are likely to be required.
Most of your translator customizations go in the block of variable declarations near the beginning of the conversion function. For example, the block of code shown in bold, below. These declarations set up the values to be assigned to the properties of the new instance, later in the conversion function. The variable names and default initial values are model-dependent.
declare function person-from-person:convert-instance-Person( $source as node() ) as map:map { let $source-node := es:init-translation-source($source, 'Person') let $id := $source-node/id ! xs:string(.) let $firstName := $source-node/firstName ! xs:string(.) let $lastName := $source-node/lastName ! xs:string(.) let $fullName := $source-node/fullName ! xs:string(.) return...
The table below provides a brief overview of some common entity type definition changes and what customizations they might require. The context for the code snippets is the property value initialization block shown in the previous example. All the code snippets assume a required property; if the property under consideration is optional, then the call to map:with would be replaced by a call to es:optional.
You can generate a Template Driven Extraction (TDE) template from your model using Entity Services. Once installed, the template enables the following capabilities for your model-based application:
You can only take advantage of these capabilities for entity types that define a primary key. Without a primary key, there is no way to uniquely identify entity instances. For details on defining a primary key, see Identifying the Primary Key Entity Property.
This section contains the following topics:
To learn more about TDE, see Template Driven Extraction (TDE) in the Application Developer's Guide.
Use the es:extraction-template-generate XQuery function or the es.extractionTemplateGenerate JavaScript function to create a TDE template. The input to the template generation function is a JSON or json:object representation of a model descriptor. You can use the template as-is, or customize it for your application. You must install the template before your application can benefit from it. For details, see Deploying a TDE Template.
Any hyphens (-) in the model title, entity type names, or entity property names are converted to underscores (_) when used in the generated template, in order to avoid invalid SQL names.
For example, the following code snippet generates a template from a model previously persisted in the database. For a more complete example, see Example: TDE Template Generation and Deployment.
The template is an important project artifact that you should put under source contol.
If you customize the template, you should validate it. You can use the tde:validate XQuery function or the tde.validate JavaScript function for standalone validation, or combine validation with insertion, as described in Deploying a TDE Template.
A TDE template generated by the Entity Services API is intended to apply to entity envelope documents with the structure produced by an instance converter module. If you use a different structure, you will have to customize the template. For more details, see What is an Envelope Document?.
The generated template has the following characteristics:
The triples sub-template for an entity type T has the following characteristics.
./T. That is,
//es:instance/T in an envelope document. For example,
//es:instance/Personif the model defines a
Personentity type.
subject-iriis defined. The value of this variable is an IRI created by concatenating the entity type name with an instance's primary key value. This IRI identifies a particular instance of the entity type.
triplesspecification that will cause the following facts (triples) to be generated about each instance of type T:
subject-iriof the entity type. In RDF terms, the triple expresses
<subject-iri> a <entity-type-iri>.
<subject-iri> rdfs:isDefinedBy <descriptor-document-uri>. This triple defines how to join instance/class membership to the instance document.
The rows sub-template for an entity type T has the following characteristics.
./T. That is,
//es:instance/T in an envelope document.
_propertyName is defined. For example,
Person_friends, if the
Personentity type has an array typed property named
friends. The characteristics of this view are described below.
irias its data type is indexed as
IRI.
The T
_propertyName view generated in the rows sub-template for an entity property with array type has the following characteristics:
The following entity type characteristics result in a TDE template that requires customization:
contextelement of the embedded type must be changed to reflect its position in instance documents.
You can make other customizations required by your application. For example, you might want to generate additional facts about your instances, or remove some columns from a row sub-template.
The generated template should work for both XML and JSON envelope documents in most cases, but some entity type structures might require customization of XPath expressions in the template in order to accomodate both formats.
For more details on the structure and content of TDE templates, see Template Driven Extraction (TDE) in the Application Developer's Guide.
You must install your TDE template in the schemas database associated with your content database. The template must be in the special collection for MarkLogic to recognize it as template document.
Choose one of the following template installation methods:. No validation is performed.
For more details, see Validating and Inserting a Template in the Application Developer's Guide.
Once your template is installed, MarkLogic will update the row index and generate triples related to your instances whenever you ingest instances or reindexing occurs.
The following example generates a TDE template from the model used in Getting Started With Entity Services, and then installs the template in the schemas database.
The following code generates a template from a previously persisted model, and then saves the template to a file on the filesystem as $ARTIFACT_DIR
/person-templ.xml.
You are not required to save the template to the filesystem. However, the template is an important project artifact that you should place under source control. Saving the template to the filesystem makes it easier to do so.
If you apply the code above to the model from Getting Started With Entity Services, the resulting template defines two sub-templates. The first sub-template defines how to extract semantic triples from
Person entity instances. The second sub-template defines how to extract a row-oriented projection of
Person entity instances.
<template xmlns=""> ... <templates> <template xmlns: <context>./Person</context> <vars> <var> <name>subject-iri</name> <val>sem:iri(...)</val> </var> ... </vars> <triples>...</triples> </template> <template xmlns: <context>./Person</context> <rows>...</rows> ... </template> </templates> </template>
If the model includes additional entity types, then the template contains additional, similar sub-templates for these types.
The following code validates and installs a template using the convenience function provided by the TDE library module. Evaluate this code in the context of your content database.
If the query runs sucessfully, the document
/es-gs/templates/person-1.0.0.xml is created in the schemas database. If you explore the schemas database in Query Console, you should see that the template is in the special collection.
Entity Services can generate an XSD schema that you can use to validate canonical (XML) entity instances. Instance validation can be especially useful if you have a client or middle tier application submitting instances.
This section contains the following topics:
To generate a schema, apply the es:schema-generate XQuery function or the
es.schemaGenerate JavaScript function to the
object-node or json:object representation of a model descriptor, as shown in the following table. For a more complete example, see Example: Generating and Installing an Instance Schema.
The schema is an important project artifact, so you should place it under source control.
Before you can use the generated schema(s) for instance validation, you must deploy the schema to the schemas database associated with your content database. You can use any of the usual document insertion APIs for this operation.
If your model defines multiple entity types and the entity type definitions do not all use the same namespace, a schema is generated for each unique namespace. Install all of the generated schemas in the schemas database.
Use the xdmp:validate XQuery function or the xdmp.validate JavaScript function to validate instances against your schema. For an example, see Example: Validating an Instance Against a Schema.
Note that you can only validate entity instances expressed as XML. You can extract the XML representation of an instance from an envelope document using the es:instance-xml-from-document XQuery function or the es.instanceXmlFromDocument JavaScript function.
The Entity Services API applies the following rules when generating a schema from a model:
xs:element.
xs:complexTypefor each entity type defined by the model. This type contains a sequence of elements representing the entity type properties.
es:complexTypeis generated.
minOccursand
maxOccurson the property's
xs:element.
minOccurs=0.
xs:elementis generated for one property, and then the
xs:elementdefinitions for the other properties will be commented out. You must customize the schema (or modify your model) to resolve this conflict.
The following list describes some situations in which schema customization might be needed.
The following example generates a schema from a previously persisted model, and then inserts it into the schemas database.
Since the model is in the content database and the schema must be inserted into the schemas database, xdmp:eval is used to switch database contexts for the schema insertion. If you generated the schema and saved it to the filesystem first, then you would only have to work with the schemas database, so the eval would be unnecessary.
The following code inserts a schema with the URI
/es-gs/person-1.0.0.xsd into the schemas database associated with the content database that holds the source model. Assume the model was previously persisted as a document with URI
/es-gs/models/person-1.0.0.json.
The following example validates an instance against a schema generated using the es:schema-generate XQuery function or the es.schemaGenerate Server-Side JavaScript function. It is assumed that the schema is already installed in the schema database associated with the content database, as shown in Example: Generating and Installing an Instance Schema.
The following code validates an entity instance within a previously persisted envelope document. Assume this instance was created using the instance converter module for its entity type, and therefore is valid. Thus, the validation succeeds. The query returns an empty
xdmp:validation-errors element in XQuery and an empty object in JavaScript.
The following example validates an in-memory instance against the schema. The schema is based on the model from Getting Started With Entity Services. The instance was intentionally created without a required property (id) so that it will fail validation.
You identify PII entity properties using the
pii property of an entity model, as described in Identifying Personally Identifiable Information (PII). Then, use the es:pii-generate XQuery function or the es.piiGenerate JavaScript function to generate a security configuration artifact that enables stricter access control for PII entity instance properties.
The generated configuration contains an Element Level Security (ELS) protected path definition for each PII property, and an ELS query roleset configuration. The protected path configuration limits read access to users with the pii-reader security role. The query roleset prevents users without the pii-reader role from seeing the protected content in response to a query or XPath expression. The pii-reader role is pre-defined by MarkLogic.
To learn more about Element Level Security, protected paths, and query rolesets, see Element Level Security in the Security Guide.
For example, the following model descriptors specify that the
name and
bio properties can contain PII:
Assuming the above model descriptor is persisted in the database as
/es-ex/models/people-4.0.0.json, then the following code generates a database configuration artifact from the model:
The generated security configuration artifact should look similar to the following. If you deploy this configuration, then only users with the pii-reader security role can read the name and bio properties of a Person instance. The pii-reader role is pre-defined by MarkLogic.
{ "name": "People-4.0.0", "desc": "A policy that secures name,bio of type Person", "config": { "protected-path": [ { "path-expression": "/envelope//instance//Person/name", "path-namespace": [], "permission": { "role-name": "pii-reader", "capability": "read" } }, { "path-expression": "/envelope//instance//Person/bio", "path-namespace": [], "permission": { "role-name": "pii-reader", "capability": "read" } } ], "query-roleset": { "role-name": [ "pii-reader" ] } } }
Note that the configuration only includes protected paths for PII properties in the entity instance. Envelope documents also contain the original source document as an attachment by default. Any PII in the source attachment is not protected by the generated configuration. You might want to define additional protected paths or modify the
extract-instance-T function of your instance converter module to exclude the source attachment.
Deploy the artifact using the Configuration Management API. For example, if the file
pii-config.json contains the configuration generated by the previous example, then the following command adds the protected paths and query roleset to MarkLogic's security configuration:
curl --anyauth --user user:password -X PUT -i \ -d @./pii-config.json -H "Content-type: application/json" \
You can add additional configuration settings to the generated artifact, or merge the generated settings into configuration settings created and maintained elsewhere. For example, you could configure additional protected paths to control access to the source data for the name and bio properties in the source attachment of your instance envelope documents.
Use the es:database-properties-generate XQuery function or the es.databasePropertiesGenerate JavaScript function to create a database configuration artifact from the JSON
object-node or json:object representation of a model descriptor. This artifact is helpful for configuring your content database. You are not required to use this artifact; it is a convenience feature.
The generated configuration information always has at least the following items, and may contain additional property definitions, depending on the model:
If an entity type definition specifies entity properties for range index and word lexicon configuration, then the database configuration artifact includes corresponding index and/or lexicon configuration information.
For example, the following model descriptors specify a path range index for the
id and
rating properties and a word lexicon for the
bio property of the
Person entity type:
Assuming the above model descriptor is persisted in the database as
/es-ex/models/people-3.0.0.json, then the following code generates a database configuration artifact from the model:
The generated configuration artifact should look similar to the following. Notice that range index information is included for
id and
rating and word lexicon information is included for
bio.
{ "database-name": "%%DATABASE%%", "schema-database": "%%SCHEMAS_DATABASE%%", "path-namespace": [ { "prefix": "es", "namespace-uri": "" } ], "element-word-lexicon": [ { "collation": "", "localname": "bio", "namespace-uri": "" } ], "range-path-index": [ { "collation": "", "invalid-values": "reject", "path-expression": "//es:instance/Person/id", "range-value-positions": false, "scalar-type": "int" }, { "collation": "", "invalid-values": "reject", "path-expression": "//es:instance/Person/rating", "range-value-positions": false, "scalar-type": "float" } ], "triple-index": true, "collection-lexicon": true }
Note that the generated range index configuration disables range value positions and rejects invalid values by default. You might choose to change one or both of these settings, depending on your application.
You can add additional configuration settings to the generated artifact, or merge the generated settings into configuration settings created and maintained elsewhere.
You can use the generated configuration properties with your choice of configuration interface. For example, you can use the artifact with the REST Management API (after minor modification), or you can extract the configuration information to use with the XQuery Admin API.
To use the generated database configuration artifact with the REST Management API method PUT /manage/v2/databases/{id|name}/properties, make the following modifications:
%%DATABASE%%with the name of your content database.
%%SCHEMAS_DATABASE%%with the name of the schemas database associated with your content database.
For example, you can use a curl command similar to the following to change the properties of the database named es-ex. Assume the file
db-props.json contains the previously shown config artifact above, with the
database-name and
schema-database property values modified to es-ex and Schemas, respectively.
curl --anyauth --user user:password -X PUT -i \ -d @./db-props.json -H "Content-type: application/json" \
If you then examine the configuration for the es-ex database using the Admin Interface or the REST Management API method GET /manage/v2/databases/{id|name}/properties, you should see the expected range indexes and word lexicon have been created.
For more information about database configuration, see the following:
This section describes how to use the Entity Services API to generate a set of query options you can use to search entity instances using the XQuery Search API or the REST, Java, and Node.js Client APIs. This section covers the following topics:
For more details and examples, see Querying a Model or Entity Instances.
Generate model-based query options using the es:search-options-generate XQuery function or the es.searchOptionsGenerate JavaScript function. Pass in the JSON
object-node or json:object representation of a model descriptor.
For example, if the document
/es-gs/models/person-1.0.0.json is a previously persisted descriptor, then you can generate query options from the model with one of the following calls.
For a more complete example, see Example: Generating Query Options.
You can use the generated options in the following ways:
For an example and discussion of the options, see Example: Using the Search API for Instance Queries.
The generated options include the following:
<search:constraint <search:value> <search:element </search:value> </search:constraint>
<search:values <search:uri/> </search:values>
extract-document-dataoption for returning just the canonical entity instance(s) from matched documents. For example, the following option extracts just the
Personentity instance from matched documents:
<search:extract-document-data <search:extract-path xmlns: //es:instance/(Person) </search:extract-path> </search:extract-document-data>
additional-queryoption that constrains results to documents containing
es:instanceelements. For example:
<search:additional-query> <cts:element-query xmlns: <cts:element xmlns:es:instance</cts:element> <cts:true-query/> </cts:element-query> </search:additional-query>
<search:return-facets>false</search:return-facets> <search:transform-results
<search:search-option>unfiltered</search:search-option>
<search:constraint <search:value> <search:element </search:value> </search:constraint>
pathRangeIndexor
rangeIndexproperty of an entity type definition, a path range index constraint with the same name as the entity property. For example:
<search:constraint <search:range <search:element </search:range> </search:constraint>
elementRangeIndexproperty of an entity type definition, an element range index constraint with the same name as the entity property. For example:
<search:constraint <search:range <search:path-index xmlns: //es:instance/Person/rating </search:path-index> </search:range> </search:constraint>
wordLexiconproperty of an entity type definition, a word constraint with the same name as the entity property. For example:
<search:constraint <search:word> <search:element </search:word> </search:constraint>
tuplesoption with the same name as the entity type for finding co-occurrences of the indexed properties. For example:
<search:tuples <search:range <search:path-index xmlns: //es:instance/Item/price </search:path-index> </search:range> <search:range <search:path-index xmlns: //es:instance/Item/rating </search:path-index> </search:range> </search:tuples>
The generated options include extensive comments to assist you with customization. The options are usable as-is, but optimal search configuration is highly application dependent, so it is likely that you will extend or modify the generated options.
If the primary key property is also listed in the range index specification, then both a value constraint and a range constraint would be generated with the same name. Since this is not allowed, one of these constraints will be commented out. You can change the name and uncomment it. For an example of this conflict, see Example: Generating Query Options.
The following example generates a set of query options from a model and saves the results to a file on the filesystem so you can place it under source control or make modifications.
This example assumes the following descriptor has been inserted into the database with the URI
/es-ex/models/people-1.0.0.json.
{ "info": { "title": "People", "description": "People Example", "version": "1.0.0" }, "definitions": { "Person": { "properties": { "id": { "datatype": "int" }, "name": { "datatype": "string" }, "bio": { "datatype": "string" }, "rating": { "datatype": "float" } }, "required": [ "name" ], "primaryKey": "id", "pathRangeIndex": [ "id", "rating" ], "wordLexicon": [ "bio" ] }}}
The following code generates a set of query options from the above model. The options are saved to the file ARTIFACT_DIR
/people-options.xml.
The resulting options should be similar to the following.
<search:options xmlns: <search:constraint <search:value> <search:element </search:value> </search:constraint> > --> <search:constraint <search:range <search:path-index xmlns: //es:instance/Person/rating </search:path-index> </search:range> </search:constraint> <search:constraint <search:word> <search:element </search:word> </search:constraint> <search:tuples <search:range <search:path-index xmlns: //es:instance/Person/id </search:path-index> </search:range> <search:range <search:path-index xmlns: //es:instance/Person/rating </search:path-index> </search:range> </search:tuples> <!--Uncomment to return no results for a blank search, rather than the default of all results <search:term xmlns: <search:empty </search:term> --> <search:values <search:uri/> </search:values> <!--Change to 'filtered' to exclude false-positives in certain searches--> <search:search-option>unfiltered</search:search-option> <!--Modify document extraction to change results returned--> <search:extract-document-data <search:extract-path xmlns: //es:instance/(Person) </search:extract-path> </search:extract-document-data> <!--Change or remove this additional-query to broaden search beyond entity instance documents--> <search:additional-query> <cts:element-query xmlns: <cts:element xmlns: es:instance </cts:element> <cts:true-query/> </cts:element-query> </search:additional-query> <!--To return facets, change this option to 'true' and edit constraints--> <search:return-facets>false</search:return-facets> <!--To return snippets, comment out or remove this option--> <search:transform-results </search:options>
Notice that two constraints are generated for the
id property. A value constraint is generated because
id is the primary key for a
Person entity. A path range constraint is generated because
id is listed in the
pathRangeIndex property of the
Person entity type definition. Since it is not possible for two constraints to have the same name in a set of options, the second constraint is commented out:
>
If you do not need both constraint types on id, you can remove one of them. Alternatively, you can change the name of at least one of these constraints and uncomment the path range constraint.
For an example of using the generated options, see Example: Using the Search API for Instance Queries.
Library modules and some configuration artifacts that you generate using the Entity Services API must be installed before you can use them.
For example, if you're using the pre-configured App Server on port 8000, insert your instance converter module into the Modules database. For more details, see Importing XQuery Modules, XSLT Stylesheets, and Resolving Paths in the Application Developer's Guide.
For example if your content database is the pre-configured Documents database, deploy schemas to the Schemas database.
For example if your content database is the pre-configured Documents database, deploy templates to the Schemas database. For details, see Deploying a TDE Template.
Unless otherwise noted, you can install a module or configuration artifact using any document insertion interfaces, including the following MarkLogic APIs:
For an example of deploying a module using simple document insert, see Create and Deploy an Instance Converter (XQuery) or Create and Deploy an Instance Converter (JavaScript).
In addition, open source application deployment tools such as
ml-gradle and
roxy (both available on GitHub) support module deployment tasks. The Entity Services examples on GitHub use
ml-gradle for this purpose; for more details, see Exploring the Entity Services Open-Source Examples. | https://docs.marklogic.com/9.0/guide/entity-services/codegen | CC-MAIN-2022-27 | en | refinedweb |
Issue
I am totally new in python world. Here I am looking for some suggestion about my problem. I have three text file one is original text file, one is text file for updating original text file and write in a new text file without modifying the original text file. So file1.txt looks like
$ego_vel=x $ped_vel=2 $mu=3 $ego_start_s=4 $ped_start_x=5
file2.txt like
$ego_vel=5 $ped_vel=5 $mu=6 $to_decel=5
outputfile.txt should be like
$ego_vel=5 $ped_vel=5 $mu=6 $ego_start_s=4 $ped_start_x=5 $to_decel=5
the code I tried till now is given below:
import sys import os def update_testrun(filename1: str, filename2: str, filename3: str): testrun_path = os.path.join(sys.argv[1] + "\\" + filename1) list_of_testrun = [] with open(testrun_path, "r") as reader1: for line in reader1.readlines(): list_of_testrun.append(line) # print(list_of_testrun) design_path = os.path.join(sys.argv[3] + "\\" + filename2) list_of_design = [] with open(design_path, "r") as reader2: for line in reader1.readlines(): list_of_design .append(line) print(list_of_design) for i, x in enumerate(list_of_testrun): for test in list_of_design: if x[:9] == test[:9]: list_of_testrun[i] = test # list_of_updated_testrun=list_of_testrun break updated_testrun_path = os.path.join(sys.argv[5] + "\\" + filename3) def main(): update_testrun(sys.argv[2], sys.argv[4], sys.argv[6]) if __name__ == "__main__": main()
with this code I am able to get output like this
$ego_vel=5 $ped_vel=5 $mu=3 $ego_start_s=4 $ped_start_x=5 $to_decel=5
all the value I get correctly except $mu value.
Will any one provide me where I am getting wrong and is it possible to share a python script for my task?
Solution
Looks like your problem comes from the if statement:
if x[:9] == test[:9]:
Here you’re comparing the first 8 characters of each string. For all other cases this is fine as you’re not comparing past the ‘=’ character, but for $mu this means you’re evaluating:
if '$mu=3' == '$mu=6'
This obviously evaluates to false so the mu value is not updated.
You could shorten to
if x[:4] == test[:4]: for a quick fix but maybe you would consider another method, such as using the
.split() string function. This lets you split a string around a specific character which in your case could be ‘=’. For example:
if x.split('=')[0] == test.split('=')[0]:
Would evaluate as:
if '$mu' == '$mu':
Which is True, and would work for the other statements too. Regardless of string length before the ‘=’ sign.
This Answer collected from stackoverflow, is licensed under cc by-sa 2.5 , cc by-sa 3.0 and cc by-sa 4.0 | https://errorsfixing.com/modified-textfile-python-script/ | CC-MAIN-2022-27 | en | refinedweb |
#include "llvm/CodeGen/BasicTTIImpl.h"
#include "llvm/CodeGen/TargetSubtargetInfo.h"
#include "llvm/IR/Function.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Target/TargetMachine.h"
Go to the source code of this file.
This file provides the implementation of a basic TargetTransformInfo pass predicated on the target abstractions present in the target independent code generator. It uses these (primarily TargetLowering) to model as much of the TTI query interface as possible. It is included by most targets so that they can specialize only a small subset of the query space.
Definition in file BasicTargetTransformInfo.cpp. | https://llvm.org/doxygen/BasicTargetTransformInfo_8cpp.html | CC-MAIN-2022-27 | en | refinedweb |
Table of Content
- Introduction
- What is REST?
- Recommendations for RESTful Communication
- Conclusion
Introduction
In order to develop an application in any language or framework, we all know it’s not enough just to have a well-built UI and UX. In the vast majority of cases, applications not only store information within themselves or the device, but necessarily depend on a backend as well.
Almost all applications make use of an API to handle data—whether to send, modify, or receive it—and a large part of the operation and usability of an application relates to correct or incorrect communication with the services. When the existing communication is carried out with REST services, the practices that make that communication appropriate could be defined as best practices for RESTful communication. In this article, we will examine best practices when working with Flutter.
What is REST?
When we talk specifically about developing in Flutter, these best practices can also guarantee that our application has an optimal functioning in its entirety, since it will have an optimal handling of the information. But first, we must understand REST (representational state transfer). Simply put, REST describes any interface between systems that uses HTTP to handle data.
REST works as a client-server architecture. It doesn’t have state (session state), it can cache resources to improve performance, and it follows the same rules between all components. REST is layered, and it provides code on demand. Its primary identifiers are “get, post, put, patch, delete.”
Recommendations for RESTful Communication
1. Use Format and Information Correctly
It is highly recommended to select a unique URL address and set it as the base address in a class that holds the necessary constants to communicate with the server. To achieve this, the Dio Flutter Package () gives us an example, as the following extract from its docs shows:
Dio dio = new Dio(); // with default Options // Set default configs dio.options.baseUrl = ""; dio.options.connectTimeout = 5000; //5s dio.options.receiveTimeout = 3000; // or new Dio with a BaseOptions instance. BaseOptions options = new BaseOptions( baseUrl: "", connectTimeout: 5000, receiveTimeout: 3000, ); Dio dio = new Dio(options);
The server must then be provided with the required information so it can satisfy the request. In almost all cases, this will include the client’s authentication information, where the exchange formats are usually XML and JSON (for REST services). Currently, in most cases, Flutter uses JSON, perhaps because it is faster (as it requires fewer bytes for transit) and is designed to exchange information.
Below you’ll see what a JSON looks like. When communicating with REST services, this is what a Flutter application would receive to convert later.
{ "firstValue" : "string", "secondValue" : 1, "thirdValue" : 1 }
2. Make Use of Existing Resources
To make a request in the most efficient way, it is wise to consider the different packages that the Flutter team and the community have already provided, as these contain functions and classes that facilitate consuming HTTP resources. Such is the case with the HTTP package () created by the Dart developers, as well as the Dio package () created by the Flutter Chinese Network. You can find these and all other packages at pub.dev.
Other packages can also be chosen, but no matter which one you choose, it is always prudent to check on which operating systems the package is compatible with, how many pub points it has received, and how stable it is. The package documentation will provide examples of how to carry out requests. Again, from the docs (this time for an HTTP package), a request could look like this:
import 'package:http/http.dart' as http; var url = Uri.parse(''); var response = await http.post(url, body: {'name': 'doodle', 'color': 'blue'}); print('Response status: ${response.statusCode}'); print('Response body: ${response.body}'); print(await http.read(''));
3. Implement Asynchronous Programming
The moment an application in Flutter starts communicating with the server, we are automatically in the asynchronous field. Why? Because the response to every request will be available at some point in the non-immediate future. The use of async and await keywords on Flutter is not only highly recommended but practically essential. Using them guarantees that the response is indeed asynchronous and that we can read the required information or send the specified information. Otherwise, the application would try to continue working without “awaiting” a response from the server. When an operation is asynchronous, it returns an object that represents a delayed calculation, known in Flutter as a “future.” Here is a simple example of an asynchronous function, from the official site dart.dev:
Future<String> createOrderMessage() async { var order = await fetchUserOrder(); return 'Your order is: $order'; }
When using futures, every function is identified with the keyword async; that’s how it becomes asynchronous. To wait for a value, each request must be preceded by the keyword await. In this way, the application will always wait until it has fetched the needed data before proceeding with further operations. Finally, it is also recommended that a time limit be established for each operation; in this case, an error or exception message is received if the time limit is exceeded.
4. Handle Authentication
We should always opt to develop applications that use API keys, even if they are not important to whomever handles the information. This should especially be considered with regard to protecting resources, since the number of requests per client can be limited in this way. I remember doing tests with a free API service once. At some point, I reached my limit and wasn’t able to do any more requests. This is certainly acceptable, especially if an application we develop consumes our own API.
On the other hand, speaking of authentication, it is always better to implement login and registration. It is also advisable to use a scheme that involves security tokens, so-called “bearer tokens.” When this is incorporated, the application should work programmatically with the generated token whenever authentication is required. It might be said that this is a backend topic, but we all know that in most cases, both backend and frontend/mobile belong to the same team or company.
5. Appropriate Representative Entities Should Handle Information
When making HTTP requests, one recommendation is to create something like a template to correctly handle all the information received. JSONs contain organized information; to use this information effectively, a class must be designed to serve as a model (whose variables are either of a defined type or dynamic, depending on what is received). This class must be able to receive the data and store it in an object. It will then be necessary to convert the received string into a more manipulable representation of a JSON. There are different ways to achieve this. One recommendation is to use either the package offered by Google, called json_serializable, or the Flutter library, called dart:convert.
When it comes to correctly converting the data that the Flutter application receives, we’re not just talking about good conventions. We must also consider how we can best convert the data in order to be able to use it most easily within our app. As previously stated, creating a model is essential. Before that, however, we must make sure we prepare the field to manage calculated values so they can be sent to the constructor of the class. To achieve this, we use a factory constructor. Let’s say we want to create a model for the JSON that was shown in a previous example. Using Flutter’s dart:convert library, your class could be something like:
import 'dart:convert'; Room roomFromJson(String str) => Room.fromJson(json.decode(str)); String roomToJson(Room data) => json.encode(data.toJson()); class Room { Room({ this.firstValue, this.secondValue, this.thirdValue, }); String firstValue; int secondValue; int thirdValue; factory Room.fromJson(Map<String, dynamic> json) => Room( firstValue: json["firstValue"], secondValue: json["secondValue"], thirdValue: json["thirdValue"], ); Map<String, dynamic> toJson() => { "firstValue": firstValue, "secondValue": secondValue, "thirdValue": thirdValue, }; }
After receiving the data from a request and storing it in a model like this, you will be able to handle everything in the future without any unnecessary troubles. With a model such as this, the data can be used again later within the widgets.
6. Consider Using a Controller and Always Handle Errors
Sometimes it will be necessary to use streams to constantly update the information received. A recommended approach to simplify this practice is to use a state management package such as Bloc, GetX, or Provider (among others). You can create a class like a controller that knows how to inform the widgets if the information is being loaded, if the process has been completed, or if there has been an error. (If there has been an error, everything necessary must always be implemented so that the application warns in the most concrete way about the status of its request, usually done with try and catch.) Likewise, it should be shown visually whether or not it was possible to receive or handle the requested data.
7. Always Modularize
Last, but not least, we all know that the goal of all the previous practices is that the user can experience friendly interaction with the data the application is handling. Once the JSON information has been received and saved in a model, the information should be correctly displayed in the widget. But be careful: Everything must be modular. That is, there should be no logic that does not belong to the widgets in the class that you want to render. Also, one should not manage session requests or communicate directly with the API layer in the rendered class. (See again the image above.)
Conclusion
By following these practices, we can ensure better communication with our servers, and obviously, this will be positive for our applications themselves. In this article, I have presented a number of different points and nuances that will help you better plan your application when it comes to receiving, sending, or modifying data in communication with a REST service. All of these practices have the common goal of ensuring that our end users have a truly satisfactory experience with any application we develop. | https://www.krasamo.com/restful-communication-in-flutter/ | CC-MAIN-2022-27 | en | refinedweb |
1. Pico:ed-Python Samples
1.1. Add Python Package
Download and unzip the package: lib.zip Install and open it: thonny
Click “Tools” to see more “Options…”
Click “Interpreter” to see more choices by clicking the arrow, and choose Rasperry Pi Pico.
Click the arrow and select “Try to detect port automatically” and click “ok” to confirm.
Cnnect the USB with pico_ed and click “View” to choose “Files”.
Getting in the “Files” folder, open the downloaded and unzipped folder of pico_ed.
Click the right mouse button on the “lib” folder and select “Move to Recyle Bin” to upload the data to pico_ed. After the operation, click “stop” button.
After adding the file, start programming.
Click “File”-”New”
Program and save it in pico_ed.
Enter the name ‘main.py” to the jumped-out column and click “OK” for confirmation.
1.2. Sample Projects
Result
Press button A to light on the LED and button B to turn it off.
Result
Press button A to display 1234567890 on the LEDs screen and button B to display abcdefghijklmnopqrstuvwxyz.
Project 03: The Music Player
from Pico_ed import * #import file from machine import Pin while True: if ButtonA.is_pressed(): #Detect if button A is pressed, if yes, return 1. music.phonate("1155665-4433221-5544332-5544332-1155665-4433221") #Use this method to play music and the numbers are equivalent to different tones. #Use method music.phonate("numbered musical notation")
Result
Press button A to play music.
Project 04: Light on the LED connecting to P1.
Hardware Connections
Program
from Pico_ed import * #Program as this way from machine import Pin p1 = Pin(pin.P1, Pin.OUT) # Set the pins in output mode, enter with the number of pin.P. while True: if ButtonA.is_pressed(): #Detect if button A is pressed, if yes, return 1. p1.value(1) #P1 pin up if ButtonB.is_pressed(): #Detect if button B is pressed, if yes, return 1. p1.value(0) #P1 pin down
Result
Press button A to light on the LED and button B to turn it off.
1.3. Technical Files | https://www.elecfreaks.com/learn-en/pico-ed/pico_ed_python.html | CC-MAIN-2022-27 | en | refinedweb |
@ Page
Defines page-specific (.aspx file) attributes used by the ASP.NET page parser and compiler.
<%@ Page attribute="value" [attribute="value"...] %>
Attributes.
For more information, see the AsyncTimeout property.
AspCompat.
Note
Setting this attribute to true can cause your page's performance to degrade. For more information, see the Remarks section..
Note Precompilation Overview..
ClientTarget
Indicates the target user agent (typically, a Web browser such as Microsoft Internet Explorer) for which ASP.NET server controls should render content. This value can be any valid alias as defined within the <clientTarget> section of the application's configuration file. For more information, see the ClientTarget property..
CodeFileBaseClass Command-Line Compiler.
ContentType
Defines the HTTP content type of the response as a standard MIME type. Supports any valid HTTP content-type string. For a list of possible values, search for MIME in the MSDN Library.
Culture, see the Culture property and validation of events in postback and callback scenarios. true if events are being validated; otherwise, false. The default is true.
Page event validation reduces the risk of unauthorized postback requests and callbacks. When the enableEventValidation property is set to true, ASP.NET allows only the events that can be raised on the control during a postback request or callback. With this model, a control registers its events during rendering and then validates the events during the post-back or callback handling. All event-driven controls in ASP.NET use this feature by default.
It is strongly recommended that you do not disable event validation. Before disabling event validation, you should be sure that no postback could be constructed that would have an unintended effect on your application.
EnableSessionState
Defines session-state requirements for the page. true if session state is enabled; ReadOnly if session state can be read but not changed; otherwise, false. The default is true. These values are case-insensitive. For more information, see ASP.NET Session State Overview.
EnableTheming
Indicates whether themes are used on the page. true if themes are used; otherwise, false. The default is true..
ErrorPage
Defines a target URL for redirection if an unhandled page exception occurs. For more information, see the ErrorPage property.
Explicit
Determines whether the page is compiled using the Visual Basic Option Explicit mode. true indicates that the Visual Basic explicit compile option is enabled and that all variables must be declared using a Dim, Private, Public, or ReDim statement; otherwise, false. The default is false.
Note
This attribute is ignored by languages other than Visual Basic. Also, this option is set to true in the Machine.config configuration file. For more information, see ASP.NET Configuration Files.
Inherits
Defines a code-behind class for the page to inherit. This can be any class derived from the Page class. This attribute is used with the CodeFile attribute, which contains the path to the source file for the code-behind class. The Inherits attribute is case-sensitive when using C# as the page language, and case-insensitive when using Visual Basic as the page language.
If the Inherits attribute does not contain a namespace, ASP.NET checks whether the ClassName attribute contains a namespace. If so, ASP.NET attempts to load the class referenced in the Inherits attribute using the namespace of the ClassName attribute. (This assumes that the Inherits attribute and the ClassName attribute both use the same namespace.)
For more information about code-behind classes, see ASP.NET Web Page Code Model..
Note.
Note
Developers can define this attribute for all pages by setting the maintainScrollPostitionOnPostback attribute (note that it is case-sensitive in configuration files) on the <pages> element of the Web.config file.
MasterPageFile
Sets the path to the master page for the content page or nested master page. Supports relative and absolute paths. For more information, see the MasterPageFile property.
MetaDescription
Sets the MetaDescription property. If the page markup also includes a "description" meta element, the value in the @ Page directive overrides the value in markup.
MetaKeywords
Sets the MetaKeywords property. If the page markup also includes a "keywords" meta element, the value in the @ Page directive overrides the value in markup.
ResponseEncoding.
Src.
Strict
Indicates that the page should be compiled using the Visual Basic Option Strict mode. true if Option Strict is enabled; otherwise, false. The default is false.
Note
This attribute is ignored by languages other than Visual Basic.
StyleSheetTheme Overview and the Trace property..
Transaction
Indicates whether COM+ transactions are supported on the page. Possible values are Disabled, NotSupported, Supported, Required, and RequiresNew. The default is Disabled.
UICulture
Specifies the user interface (UI) culture setting to use for the page. Supports any valid UI culture value. For more information, see the UICulture property.
ValidateRequest.
WarningLevel
Indicates the compiler warning level at which you want the compiler to treat warnings as errors, thus aborting compilation of the page. Possible warning levels are 0 through 4. For more information, see the WarningLevel property.
Remarks.
Note MyComObject comObj; public void Page_Load(){ // Use comObj here when the code is running on the STA thread pool. comObj = New MyComObject(); // Do something with the combObj object. }
<%@ Page Dim comObj As MyComObject Public Sub Page_Load() 'Use comObj here when the code is running on the STA thread pool. comObj = New MyComObject() ' Do something with the combObj object. End Sub </script>
Note
Adding an @ Master directive to a master page does not allow you to use the same directive declaration in pages that depend on the master. Instead, use the pages element to define page directives globally.
Example" %>
See Also
Reference
Text Template Directive Syntax | https://docs.microsoft.com/en-us/previous-versions/dotnet/netframework-4.0/ydy4x04a%28v%3Dvs.100%29 | CC-MAIN-2019-30 | en | refinedweb |
Whatever your programs are doing, they often have to deal with vast amounts of data. This data is usually represented and manipulated in the form of strings. However, handling such a large quantity of input in strings can be very ineffective once you start manipulating them by copying, slicing, and modifying. Why?
Let's consider a small program which reads a large file of binary data, and
copies it partially into another file. To examine out the memory usage of this program, we will use memory_profiler, an excellent Python package that allows us to see the memory usage of a program line by line.
@profile def read_random(): with open("/dev/urandom", "rb") as source: content = source.read(1024 * 10000) content_to_write = content[1024:] print("Content length: %d, content to write length %d" % (len(content), len(content_to_write))) with open("/dev/null", "wb") as target: target.write(content_to_write) if __name__ == '__main__': read_random()
Running the above program using memory_profiler produces the following:
$ python -m memory_profiler memoryview/copy.py Content length: 10240000, content to write length 10238976 Filename: memoryview/copy.py Mem usage Increment Line Contents ====================================== @profile 9.883 MB 0.000 MB def read_random(): 9.887 MB 0.004 MB with open("/dev/urandom", "rb") as source: 19.656 MB 9.770 MB content = source.read(1024 * 10000) 29.422 MB 9.766 MB content_to_write = content[1024:] 29.422 MB 0.000 MB print("Content length: %d, content to write length %d" % 29.434 MB 0.012 MB (len(content), len(content_to_write))) 29.434 MB 0.000 MB with open("/dev/null", "wb") as target: 29.434 MB 0.000 MB target.write(content_to_write)
The call to
source.read reads 10 MB from
/dev/urandom. Python needs to allocate around 10 MB of memory to store this data as a string. The instruction on the line just after,
content[1024:], copies the entire block of data minus the first KB — allocating 10 more megabytes.
So what's interesting here, is to notice that the memory usage of the program increased by about 10 MB when building the variable
content_to_write. The slice operator is copying the entirety of
content, minus the first KB, into a new string object.
When dealing with extensive data, performing this kind of operation on large byte arrays is going to be a disaster. If you already have written C code, you know that using
memcpy() has a significant cost, both in term of memory usage and regarding general performance: copying memory is slow.
However, as a C programmer, you also know that strings are arrays of characters and that nothing stops you from looking at only part of this array without copying it, through the use of basic pointer arithmetic – assuming that the entire string is in a contiguous memory area.
This is possible in Python using objects which implement the buffer protocol. The buffer protocol is defined in PEP 3118, which explains the C API used to provide this protocol to various types, such as strings.
When an object implements this protocol, you can use the
memoryview class constructor on it to build a new memoryview object that references the original object memory.
>>> s = b"abcdefgh" >>> view = memoryview(s) >>> view[1] 98 >>> limited = view[1:3] >>> limited <memory at 0x7fca18b8d460> >>> bytes(view[1:3]) b'bc'
Note:
98is the ASCII code for the letter
b.
In the example above, we use the fact that the
memoryview object's slice operator itself returns a
memoryview object. That means it does not copy any data but merely references a particular slice of it.
The graph below illustrates what happens:
Therefore, it is possible to rewrite the program above in a more efficient manner. We need to reference the data that we want to write using a memoryview object, rather than allocating a new string.
@profile def read_random(): with open("/dev/urandom", "rb") as source: content = source.read(1024 * 10000) content_to_write = memoryview(content)[1024:] print("Content length: %d, content to write length %d" % (len(content), len(content_to_write))) with open("/dev/null", "wb") as target: target.write(content_to_write) if __name__ == '__main__': read_random()
Let's run the program above with the memory profiler:
$ python -m memory_profiler memoryview/copy-memoryview.py Content length: 10240000, content to write length 10238976 Filename: memoryview/copy-memoryview.py Mem usage Increment Line Contents ====================================== @profile 9.887 MB 0.000 MB def read_random(): 9.891 MB 0.004 MB with open("/dev/urandom", "rb") as source: 19.660 MB 9.770 MB content = source.read(1024 * 10000) <1> 19.660 MB 0.000 MB content_to_write = memoryview(content)[1024:] <2> 19.660 MB 0.000 MB print("Content length: %d, content to write length %d" % 19.672 MB 0.012 MB (len(content), len(content_to_write))) 19.672 MB 0.000 MB with open("/dev/null", "wb") as target: 19.672 MB 0.000 MB target.write(content_to_write)
In that case, the
source.read call still allocates 10 MB of memory to read the content of the file. However, when using
memoryview to refer to the offset content, no more memory is allocated.
This version of the program ends up allocating 50% less memory than the original version!
This kind of trick is especially useful when dealing with sockets. When sending data over a socket, all the data might not be sent in a single call.
import socket s = socket.socket(…) s.connect(…) # Build a bytes object with more than 100 millions times the letter `a` data = b"a" * (1024 * 100000) while data: sent = s.send(data) # Remove the first `sent` bytes sent data = data[sent:] <2>
Using a mechanism as implemented above, the program copies the data over and over until the socket has sent everything. By using
memoryview, it is possible to achieve the same functionality with zero-copy, and therefore higher performance:
import socket s = socket.socket(…) s.connect(…) # Build a bytes object with more than 100 millions times the letter `a` data = b"a" * (1024 * 100000) mv = memoryview(data) while mv: sent = s.send(mv) # Build a new memoryview object pointing to the data which remains to be sent mv = mv[sent:]
As this won't copy anything, it won't use any more memory than the 100 MB
initially needed for the
data variable.
So far we've used
memoryview objects to write data efficiently, but the same method can also be used to read data. Most I/O operations in Python know how to deal with objects implementing the buffer protocol. They can read from it, but also write to it. In this case, we don't need
memoryview objects – we can ask an I/O function to write into our pre-allocated object:
>>> ba = bytearray(8) >>> ba bytearray(b'\x00\x00\x00\x00\x00\x00\x00\x00') >>> with open("/dev/urandom", "rb") as source: ... source.readinto(ba) ... 8 >>> ba bytearray(b'`m.z\x8d\x0fp\xa1')
With such techniques, it's easy to pre-allocate a buffer (as you would do in C to mitigate the number of calls to
malloc()) and fill it at your convenience.
Using
memoryview, you can even place data at any point in the memory area:
>>> ba = bytearray(8) >>> # Reference the _bytearray_ from offset 4 to its end >>> ba_at_4 = memoryview(ba)[4:] >>> with open("/dev/urandom", "rb") as source: ... # Write the content of /dev/urandom from offset 4 to the end of the ... # bytearray, effectively reading 4 bytes only ... source.readinto(ba_at_4) ... 4 >>> ba bytearray(b'\x00\x00\x00\x00\x0b\x19\xae\xb2')
The buffer protocol is fundamental to achieve low memory overhead and great performances. As Python hides all the memory allocations, developers tend to forget what happens under the hood, at a high cost for the speed of their programs!
It's also good to know that both the objects in the
array module and the functions in the
struct module can handle the buffer protocol correctly, and can, therefore, efficiently perform when targeting zero copy. | https://julien.danjou.info/high-performance-in-python-with-zero-copy-and-the-buffer-protocol/ | CC-MAIN-2019-30 | en | refinedweb |
Stating that "this is a feature not a bug" does not make it so.
This breaks existing code and reduces the capabilities of the `ast` module.
For example, how does one get the location of the docstring now?
From a syntactic point of view.
def foo():
and
def foo():
b'"help"
barely differ.
The S in AST stands for Syntax not Semantics. | https://bugs.python.org/msg312567 | CC-MAIN-2019-30 | en | refinedweb |
A few months ago I had a really frustrating debate with my younger brother. He had come up to JHB to come for a visit and we decided to talk about programming. Of course I thought I would put a good pitch in for F#, but just couldn’t seem to do it any justice.
Eventually his point was as follows - “What really is the difference between declaring a functional solution vs an iterative solution. Sure, in F# you have something like the Seq.map function, but isn’t it just a shorthand for a for loop or something like that – isnt it just syntactical sugar?”
Of course he was wrong, but at the time I just battled to give him a concrete and yet simple enough example of how a functional solution is at a higher level than an iterative solution.
Then a few weeks ago I found a classic example which I think might illustrate the point a bit better. It all happened while I was looking into tail-recursive functions through reflector.
Lets say you have a recursive solution in F# that looks like the following…
let rec fooNonTail n = match n with | 0 -> 0 | _ -> 2 + fooNonTail (n-1)
If you were to go and look at the equivalent C# code generated via reflector (by examining the IL code) you would see something like the following…
public static int fooNonTail(int n) { switch (n) { case 0: return 0; } return (2 + fooNonTail(n - 1)); }
That pretty much makes sense… we defined a recursive function in F#, and the IL came out as a recursive function in C#…
Now it gets interesting… let’s say instead of having the original F# fooNonTail function we replaced it with a tail recursive function that looked like the following in F#…
let fooTailRec n =
let rec innerfooTailRec acc n =
match n with | 0 when n <=0 -> acc | _ -> innerfooTailRec (acc+2) (n-1) innerfooTailRec 0 n
Not to much of a difference from the fooNonTail function. We have moved things around a bit, but in essence it is still a recursive function.
This is where the fun begins… suddenly it gets interesting when we look at what has happened behind the scenes in the IL code.
Instead of getting a similar method to the first IL code we have to look around a bit more for the actual implementation, and when you find it, it looks something like the following…
public override int Invoke(int acc, int n) { while (true) { switch (n) { case 0: { int num = n; int num2 = 0; if (num > num2) { break; } return acc; } } n–; acc += 2; } }
That’s not recursive at all??? What happened?
What has happened here is somewhere, something recognized that instead of expressing the function in IL code as recursive, it could instead convert it to a normal iterative while loop… which has several operational/performance benefits.
To me that’s amazing and is a classic example of part of the abstraction layer that a functional language has over an iterative language. It works at just a slightly higher level. In this situation, because I was “solving” the problem, instead of telling the compiler how to iterate each step to get to a solution, when the compiler came across a situation where it recognized it could reformat for optimizations, it did it behind the scenes and wala…
So while this is not a complete blog post on the power of functional languages and F# in particular, I believe it at least illustrates that F# works at a slightly higher level than C# (iteratively) and that it is well worth examining the IL code in reflector to see exactly how your F# applications are translated.
What does worry me a bit about such an example is that it is subtle… meaning the optimization could easily be missed unless you kept an eye out for it.
Well, I look forward to your comments / crits if you have anything to add… or examples to give - just keep it nice ;-) | http://blog.markpearl.co.za/A-solid-example-of-the-difference-between-a-Functional-and-Iterative-Language.-F-style/ | CC-MAIN-2019-30 | en | refinedweb |
A flutter plugin that sets the android status bar transparent and status icon.
IOS flutter is built in, set appbar [brightness] [backgroundColor] directly.
To use this plugin, add
flutter_status_bar_light as a dependency in your pubspec.yaml file.
dependencies: flutter_status_bar_light: ^1.0.2
Set android status bar transparent.
@override void initState() { super.initState(); FlutterStatusBar.setTranslucent(); }
Set android status bar icon
@override void initState() { super.initState(); FlutterStatusBar.setLightStatusBar(); }
example/README.md
Demonstrates how to use the flutter_status_bar_status_bar_light: ^1.0.3
You can install packages from the command line:
with Flutter:
$ flutter pub get
Alternatively, your editor might support
flutter pub get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:flutter_status_bar_light/flutter_status_bar_light.dart';
We analyzed this package on Jul 15, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
Detected platforms: Flutter
References Flutter, and has no conflicting libraries. | https://pub.dev/packages/flutter_status_bar_light | CC-MAIN-2019-30 | en | refinedweb |
Created on 2018-12-11 04:40 by yahya-abou-imran, last changed 2018-12-11 16:11 by yselivanov. This issue is now closed.
In asyncio.Task help:
| set_exception(self, exception, /)
| Mark the future done and set an exception.
|
| If the future is already done when this method is called, raises
| InvalidStateError.
|
| set_result(self, result, /)
| Mark the future done and set its result.
|
| If the future is already done when this method is called, raises
| InvalidStateError.
These doctrings are inherited from asyncio.Future.
But in fact it's wrong since::
def set_result(self, result):
raise RuntimeError('Task does not support set_result operation')
def set_exception(self, exception):
raise RuntimeError('Task does not support set_exception operation')
Just adding another docstring is not a good solution - at leas for me - because the problem is in fact deeper:
This prove by itself that a Task is not a Future in fact, or shouldn't be, because this breaks the Liskov substitution principle.
We could have both Future and Task inheriting from some base class like PendingOperation witch would contain all the methods of Future except these two setters.
One problem to deal with might be those calls to super().set_result/exception() in Task._step():
except StopIteration as exc:
if self._must_cancel:
# Task is cancelled right before coro stops.
self._must_cancel = False
super().set_exception(exceptions.CancelledError())
else:
super().set_result(exc.value)
except exceptions.CancelledError:
super().cancel() # I.e., Future.cancel(self).
except Exception as exc:
super().set_exception(exc)
except BaseException as exc:
super().set_exception(exc)
raise
One way to deal with that would be to let a Task have a Future.
"Prefer composition over inheritance" as they say.
I want to work on PR for this if nobody goes against it...
PS: I really don't like when some people says that Python core developers are known to have poor knowledge in regard to OOP principles. So I really don't like letting something like this in the standard library...
> One way to deal with that would be to let a Task have a Future.
> "Prefer composition over inheritance" as they say.
>
> I want to work on PR for this if nobody goes against it...
I'm not against it, unless it doesn't have backward incompatibility
or performance regression.
But I'm not sure you estimate the difficulty correctly: there are C implementation of Future and Task. You need to have deep knowledge of Python/C APIs.
> PS: I really don't like when some people says that Python core developers are known to have poor knowledge in regard to OOP principles. So I really don't like letting something like this in the standard library...
Personally speaking, I dislike treating OOP principles like Ten Commandments. Principles have some reasons. And these reasons are reasonable not for all cases. When people say "it's bad because it violates principle!", they may have poor knowledge about the prinicple.
If they really know the principle, they must describe real-world problem caused by the violation.
In this case, I agree that misleading docstring is a small real-world problem caused by the violation. While it can be fixable without fixing the violation.
Generally, `set_result` or `set_exception` is called by the creator of the Future. So requiring knowledge of concrete class is not a big problem.
On the other hand, awaiting future object without knowing concrete class is common. But Task is awaitable. So there are no problem here.
-1 on this; there is no clear win in doing this refactoring, only a hard to estimate chance of making a regression.
Yahya, feel free to tackle other asyncio bugs or improvements, this one is just something that we aren't comfortable doing right now. | https://bugs.python.org/issue35456 | CC-MAIN-2019-30 | en | refinedweb |
The Social Impact Heroes of Social Media: “Listen to your intuition. Your heart knows the way.” with Larissa Lowthorp and Candice Georgiadis
.
As a part of my series about social media stars who are using their platform to make a significant social impact, I had the pleasure of interviewing Larissa Lowthorp is a design, entertainment and technology creative. She is the founder and president of TimeJump Media. In 2017, Larissa donated all of her belongings and hit the road with her beagle, her laptop and two suitcases of essentials to see what life had in store.
Thank you so much for doing this with us! Can you tell us a story about what brought you to this specific career path?
Hi! I’m so glad to be able to be part of this series. Thank you for the opportunity. I really appreciate it and I think it’s great that you’re highlighting the positive ways that influencers are using their platforms for good.
I’ve been sharing my life online in one form or another since my early teens — so social networking was a natural extension of that.
It’s a way for me to mesh my widely disparate talents and interests into a unified persona. My vivid imagination and creativity fuels my drive to innovate for a better world.
I hope to serve as an inspiration to people with big dreams, to be an encouraging voice to those who have yet to uncover their passions to discover what makes their souls sing.
I can’t remember a time when I wasn’t driven by a burning curiosity to learn everything imaginable about this universe, or a time when I wasn’t consumed with a need to create. I’ve always had a feeling that I’ve been put on this earth for reasons far bigger than myself, for reasons I don’t fully comprehend.
As a kid, I never fit in at school and I didn’t “get” it. For years, I was bullied for being different — for being myself. I went home in tears nearly every day. My classmates were downright cruel. At one point, the situation became so bad that I begged my parents to let me change school districts. When I was younger, bullying wasn’t under the spotlight as it is today, and the party line was, “kids can be like that.”
I’d fake being sick to avoid my classmates (pretty sure my mom was onto me). I missed weeks and months of school throughout the year, but I tested well — in the third-grade state-mandated aptitude test, I tested at the post-graduate level. Faculty and my parents recommended me to audit university classes but the request was denied due to my age.
It wasn’t until middle school that I began taking more advanced classes (this was only after I spent three weeks being intensively tested, yet again, to see if I belonged in what was then called “special ed.” At the time, my sister was in graduate school working toward her master’s degree in theology, and used me to practice her theorems and debates, and when I engaged the guidance counselor on a discourse about existential nihilism and Kierkegaard, she called my mom, who — I paraphrase — said “I told you so.”).
Even then, I never did my homework (didn’t see the point) and I got poor grades. My dad was of the mindset that I’d learn more out of the classroom and traveling with him, and I developed an insatiable wanderlust from a young age.
The fact that I did poorly in school didn’t mean I wasn’t continually learning — I devoured all the books I possible could and was reading from the library’s adult section from the middle of first grade. On my seventh birthday, my dad gave me an unabridged copy of one of his favorite books: The Fellowship of the Rings by J.R.R. Tolkien. My mom said it was over my head and advised I set it aside for a few years. Just to prove her wrong, I read the entire novel from cover-to-cover and discussed it at length with my dad. My parents believed that learning was learning, and since I was always in discovery mode, they didn’t sweat my grades or attendance too much.
My original ideas and ability to think outside the box set me apart — and there was a long time when I didn’t fully embrace that, because I felt something was wrong with me.
I wish I could open the eyes of every young person out there who’s experiencing doubt or shame, to have them realize that who they are is beautiful — and to never feel the need to hide from what makes you unique. There isn’t one soul on this planet who’s born without a special gift or talent to offer the world — but too many people never discover what that is, or are afraid to share it. My older sister always encouraged me to become the best version of myself. She believed in me when I didn’t believe in myself. I’m very fortunate to have been blessed with a family who believes in my talents, even when they don’t always understand what drives me.
Since before I can remember, my creativity was my escape. Acting allowed me to become somebody else. Writing allowed me to think with different perspectives and put myself into the shoes of the people who tried to tear me down, and to have empathy for what others may be going through. I put my design eyes on anything and everything I could get my hands on.
When I was in tenth grade, I devised an elaborate plan to gain wide exposure, to start a company that did many different things, and to use my fame and fortune to re-invest it into my passion projects, those things that would fundamentally improve the foundations of people’s lives, and revolutionize the framework of the world at large. It seemed to me that all of the problems we faced couldn’t be that hard to solve. The money was there. The knowledge was there. So why wasn’t it already done? I figured that if wealthy people in power weren’t using their money and influence to improve things, I’d become rich and famous myself and do exactly that. So I continued acting, and I began to pick up work as a model.
As a teen, there was very little structure to the concept. It seemed impossible –for too long, I wrongly allowed naysayers and doubters to hold me back. My desire to change the world never left and it’s only grown stronger, particularly in recent years. I wanted to help my family. I wanted to change the world. I won’t stop until I have — and even then, I’ll keep going. It wasn’t until I met my boyfriend and discovered that we share the same visionary ideas about bettering people’s lives and the world at large that I began to open my soul to exploring this side of myself once again.
By the time social media came along, people began following my posts for insights on how I saw the world (different from others, apparently), tech anecdotes, and to see how I merged the IT with my artistic side. It’s allowed me great freedom to be able to express different facets of myself.
My parents were involved in technology and manufacturing. I was rebuilding computers with my dad and programming video games in Visual Basic from elementary school. I began web programming when I was in middle school and it became a hobby. When I was a broke college student, I began bartering design services for things like photographs, hair styling, you name it. I established a roster of freelance clients who later sent new business referrals.
I had to leave university after my dad got sick. I needed a way to quickly earn money to support myself and my family. Web design was the most lucrative talent I had, and I put it to work. You always hear that “art doesn’t pay the bills” and, at the time, I had no other resources available — I had to get creative and do what needed to be done in order to get by. Focusing on technology achieved that. Eviction and repossession notices had been sent… my dad was in ICU for two months before he passed away. My uncle provided support to us during that time. I’ll never forget the time I was speeding to the emergency room to be with him, and answered the phone in case it was his doctor. It was a debt collector — when I told him I couldn’t talk because I was on my way to see my dying father, he accused me of lying to avoid paying the debt.
I didn’t choose a technology career. It just happened. I ended up having a talent for it and eventually got a great corporate job. About six years ago, I realized that I felt stuck. I knew I couldn’t keep going. My creativity was being stifled. I’d become depressed. My energy wasn’t in alignment and I felt out of harmony with the universe. I listened. I became more involved in my fashion pursuits, and I re-ignited my interest in the film industry.
Following this, I wrote, produced and directed a short film which screened as part of the Cannes International Film Festival to critical acclaim. I began writing feature-length screenplays — one of which was written with my mom based on her original concept. Mom and I have always done artistic things together. She established my firm belief in spreading random acts of kindness wherever I go. My other screenplays are children’s fantasy features and dark comedy action thrillers. They’ve been well-received, praised by industry insiders for their rich imagination and originality, and are currently in development for theatrical release.
As I began traveling regularly and became more entrenched within the entertainment industry, I realized that having a 9–5 job wasn’t conducive to my life goals, and I changed directions on a wing and a prayer. I left my corporate job and began consulting full-time in 2015 while working toward getting my films off the ground and following my dreams. I decided to document the journey on Instagram. I didn’t know it at the time, but my move out of the corporate culture motivated others I worked with to do the same — including my manager at the time. I’m very thankful that my actions have inspired people to pursue their passions.
In 2017, things shifted and my life did a complete 180. I was feeling entirely unmoored. There were very few people in my life who were able to anchor and guide me. My boyfriend was one of those people, and it was with his encouragement that I founded TimeJump Media. He’d known that it was a stepping stone I’d envisioned toward much larger goals of making huge positive changes in the world and, having started his own business and charity organization in the past, he helped me navigate what I found to be an overwhelming process. He’s used his experience in the entertainment industry in full support of my goals. I’m very thankful for and blessed to have his continued encouragement.
I continue to document my life online and I’m operating in several different spheres –entertainment, fashion, tech, corporate, and moonlighting on passion projects of mine. I’ve got ADD in a bad way, so to me, this feels natural. I used to try and hide from what made me different, but I’ve come to realize that the people who are doing truly original things — they can’t have a map because it’s uncharted territory. What I can say is that I’ve been happiest when I’ve followed my heart and intuition. It’s never led me astray. Since becoming more involved with film, and documenting that as a social influencer, it’s seemed that my path forward illuminates just as I’m approaching a dark corner.
I’m trying to say “yes” to life! A heart open to abundance and possibility has taken me far — and I believe it can do so for anyone. YES, you have to work hard and NO it won’t always be easy — sometimes it’ll be exhausting and scary — but at the end of the day, can you say you did what you loved and left the world a little bit better than it was when you woke up? It doesn’t mean you’ll never face hardship or strife. It means that you overcome it.
Can you share the most interesting story that happened to you since you began this career?
The most interesting story that happened to me since I began this career isn’t any one thing in particular. It’s been a chain-reaction series of choices and events over the years that have guided me toward this path, this moment.
I was fearful to embrace who I was, and my full potential, for most of my life. I had a tendency to minimize my accomplishments. My mind was full of ideas that were so revolutionary, so out of the ordinary, that, for the most part, I kept them entirely to myself. I didn’t think that anyone cared, or would understand. I’ve always been extremely artistic and imaginative. Throughout my life, teachers, friends, family, colleagues, and others have noted this, but when I was younger, all I wanted to be was normal. Except, that’s not why I’m here.
I spent years hiding this side of myself — trying to fit into a mold that wasn’t made for me. I was so consumed with being accepted and fearful of rejection that I lost myself. Having been there, I would urge any girls and young women out there — never do that. Yes, I know it’s so much easier said than done.
Life nudged me in the direction of a highly varied career path which allowed me the freedom to travel as I began expanding my brand and exploring my creative passions once more. This, and other life circumstances, led me — not entirely of my own design — to attend the Cannes Film Festival in France in 2014, and while there, I experienced a sense of calm and peace that was entirely unfamiliar, but very refreshing. I was finally beginning to be myself and follow my heart. I didn’t have any business being there — I couldn’t afford it. The entire affair was supremely impractical but I wanted to see what happened. I told myself that best case scenario, new doors would open, and worst case scenario, I had a vacation on the French Riviera! Win-win, right? I finally felt a calling — and had a deep inner knowledge, which came from somewhere beyond myself, that I had to pursue it, no matter what.
It was very impractical and highly inadvisable to quit my corporate job, but I had an overwhelming sense that it was something I had to do — no matter the risks. It was scary and exhilarating at the same time. However, I still didn’t commit to it fully. Over the the course of the next couple of years, a chain of events just kind of unfolded which led me to this moment.
I was no longer resisting the universe — I was going with it and despite all logical reasoning to the contrary, things just seemed to work out the way they were supposed to and it finally seemed like I was on the path meant for me — when before, it had always seemed forced.
Can you share a story about the funniest mistake you made when you were first starting? Can you tell us what lesson you learned from that?
One of my early clients was a best-selling author. I priced out his project far lower than market value and spent time well above and beyond to get it right for him. He was an insomniac and would call me in the middle of the night, then send me rant-filled emails for not being available 24/7. He told me that he’d like to put me in contact with his marketing manager and set up a three-way conference call. A few minutes into the call, they disagreed about something and it turned into a full-on screaming match!
I later learned that the author’s marketing manager was his ex-wife. He insisted on having conference calls with her and I endured more screaming between the two of them. I didn’t know whether or not to speak up, so stayed on the line and listened to them rehash their dirty laundry.
I learned the importance of speaking up and asserting yourself. I learned to never under-value the worth of yourself or your work — nobody else will do it for you. I learned this the hard way as there have been a number of times in the past when people took my ideas and passed them off as their own, taking full credit without acknowledgement.
In years past, I was far too timid when this happened. I was hesitant and didn’t want to rock the boat, to hurt anyone’s feelings, or to make anyone mad at me. The result was that there were times when I didn’t speak up when I should have, particularly for those ideas that were truly transformative and served to catapult the reputations and careers of others, which kept me back.
There are always more ideas to be had and you have to keep looking forward. I’ve always spoken up for the rights of others, but it’s taken a lot of effort to make speaking up for myself a habit.
If you don’t speak up for yourself, people will walk all over you. Someone once told me, “you have to teach people how to treat you” and there’s a lot of truth in that.
Ok super. Let’s now jump to the core focus of our interview. Can you describe to our readers how you are using your platform to make a significant social impact?
Historically, I’ve used my platform to share the things that matter to me — whether it’s what I’m doing throughout the day, funny or interesting things I see, things I like, dislike, or find amusing. Over time, that’s evolved into sharing my perspectives and opinions on a variety of things. That’s what my audience loves — I provide inspiration to see the world in a different way. I was recently featured in The Wall Street Journal and discuss the importance of understanding your audience — but it begins with an understanding of yourself.
I’m leveraging my social channels to raise awareness and amplify my message about my incredible new organization, FemmePower. FemmePower is dedicated to the support and empowerment of current and future female business owners, entrepreneurs, and micro-entrepreneurs throughout the world and from all walks of life.
FemmePower will become a crowd-resourced organization with a global reach. We work with female entrepreneurs establish business plans, connect with likeminded business partners and supporting services, obtain financing and micro-financing, craft promotional strategies, educate women on the basics of business ownership, and secure access to foundational elements such as materials or inventory. We connect women to women and to like-minded industry veterans in the role of mentors and coaches to provide guidance and answer questions.
We’ll connect female entrepreneurs to free, low-cost and accessible tools to drive sustained success. We’ll provide access to financing, opportunities for education, business knowledge, basic financing skills and meet women wherever they are in life and in the world to formulate viable plans to financial security, and independence.
By empowering women, FemmePower strengthens families and communities.
We provide tailored services to under-served, minority, and marginalized female business owners and entrepreneurs-to-be globally with an aim to permanently lift women and their families from poverty.
We serve women in transition by supporting female refugees, survivors of trafficking, forced labor, domestic abuse, and enslavement, and their families, by nurturing growth from strong new roots to a place of independence and financial security via business ownership. We aid women experiencing life transitions to pave the path to security and success.
FemmePower shall provide coaching and small business ownership classes online and in local communities. We work to build literacy and financial competency. We shall seek and find creative ways for women living in repressed conditions to shape their lives via business ownership in a way that seeks to minimize the risk of societal or political repercussion.
We believe that all women have the right to education, lifelong happiness, security, and independence.
Female entrepreneurs face a number of barriers to business ownership at all levels even as they become business owners at a higher rate than ever. Most of these female entrepreneurs are small business owners or micro-entrepreneurs.
FemmePower seeks to support these endeavours and to nurture their growth. Women have a worldwide historical pattern of obtaining employment in low-skilled, labor-intensive industries that consign their role to a societally-predetermined profession that’s seen as being appropriate for women. This practice undermines educational achievement, perpetuates cultural stereotypes, and truncates upward mobility.
Additionally, in a 2000 report, Worldbank found that gender relations plays a key dynamic in female business ownership . Worldbank reports that in Vietnam. there is a correlation between higher rates of abuse and households where women earn a higher income than their husbands, or are the family’s main income earner.
Worldbank also notes that although the Philippines has a higher proportion of female college graduates than male, women have little in the way of career mobility and are more often seen in production operator positions while more men are working as technicians or engineers.
There are a number of near-universal challenges faced by women entrepreneurs which are driven by a lack of education and insufficient access to resources. Female business owners have inadequate access to financial and credit services, insufficient connectivity, communications and information, and a drought of financial and business management skills. Large-scale efforts and major infrastructure changes are required to support women entrepreneurs, including through access to small loans, markets, and training. FemmePower seeks to remove these obstacles by equipping women with the tools they need in order to establish a viable path toward job and income security.
Women living in transition economies are particularly vulnerable to shrinking social sector services and market competition. Women around the globe impacted by the so-called “motherhood penalty” with little regard to economic status or race. For example, a survey conducted in China’s Shanxi province found that one fifth of women workers had suffered job losses in some regions and industries, with childbearing responsibilities listed as one of the main reasons for the lay-offs (Cooke 2001). To offset income insecurity and wage gaps, many women in Vietnam are forced to take on multiple jobs, with almost a quarter of women being both self-employed and engaged in wage work (ADB 2002). Women should never be put in the position of having to choose between motherhood and career. Business ownership affords women the opportunity to make educated choices in both their personal and professional lives.
Wow! Can you tell us a story about a particular individual who was impacted by this cause?
All women are impacted by this cause. FemmePower exists to empower existing business owners as they seek to begin, improve and grow. We’re here to support those interested in exploring business ownership learn more and get their dreams off the ground. For those who are lost and seeking their way, we can provide insight. We are the next step in the journey forward for women emerging from crisis situations and entering a rebuilding phase of their lives. For women living in vulnerable and repressed situations, our trusted network will aid in your efforts to achieve autonomy in a safe and discreet manner.
Was there a tipping point the made you decide to focus on this particular area? Can you share a story about that?
Here’s what I’d like to know. Once you have that exposure, once you have people who are interested in your causes, in your life, and in keeping pace with the things you’re doing and want to accomplish — how can you empower those followers to turn around and do good themselves? What’s the point of being a celebrity or having a large following if you don’t use it to full effect to improve the world?
Money, fame, vanity, and material possessions are all fleeting and will not leave a lasting footprint on history. In order to effect real change, one must be willing to put themselves out there, to risk things that others would not, and to take chances that haven’t been tried before. It requires a lot of guts, creativity, and persistence. Because you will fail, and you’ll fail a lot. As long as you learn from your mistakes, they’re never missed opportunities. Keep on going. I’ve made more mistakes than I can count, and I make new ones every day.
My career path has been anything but straightforward. I’ve always been extremely artistic, to the point where my math teacher told my mom that I was the best artist he’d ever had in class (I doodled all over my assignments and turned them in full of sketches of elaborate fantasy worlds and missing the answers). Art is my life, and I always knew I wanted to do something creative — but it wasn’t practical and I didn’t know where to begin.
From high school, I had a nebulous idea of starting one large company that did a lot of different things. I dreamed that at some point, my company would be making enough money and become well-known enough that I could use it as a platform to do what I really wanted to do — to foster a positive, worldwide transformation and work toward eradication of unnecessary problems such as poverty, hunger, lack of education, and conflict. Whatever was already being done wasn’t enough — and I didn’t understand why, because the resources are already there, they just weren’t being utilized. It was painfully clear.
But how? I was paralyzed. There are so many excellent charities and organizations out there that are doing wonderful things. I wanted my organization to be more than just one of many — my hope was for whatever I did to fill a real gap and really improve people’s lives in a sustainable way. I spent years trying to think of what that could be. I saw major problems in the world — but had a hard time identifying them for what they were or effecting a real way to change things and so I put it to the back of my mind. It stayed there for years.
I’d effectively run and operated businesses since I was a teenager (my parents were entrepreneurs and were very encouraging — plus it got me out of the house. I could be a pretty annoying kid.)
More recently, however, getting TimeJump off the ground posed new challenges — and it was a larger endeavor than I’d previously tried. I wanted to do it right. I wanted to set this up to sustain me for years to come because I was tired of spinning my wheels and doing the same thing every day. I didn’t want to keep building other people’s dreams for them. It was time to build mine, and to make a difference.
I frequently encountered dead ends which required me to come up with creative work-arounds. Banks were reluctant finance a small business without an established track record. This presented a paradox, because without that track record, I was unable to access resources required to grow, and lacking that, I was unable to service my niche as fully as I’d envisioned. It’s been a heavy learning curve, and very difficult at times. FemmePower will use and expand upon my experience to make things easier for other female business owners so they can focus on the things that matter most in their lives.
Although I emerged from a background of abuse (in various forms), this will never define me — even as it shapes who I am and how my endeavors will change the world for the better. Throughout this journey, I found very limited resources available to underpin long-term success. I defied the odds, despite every obstacle thrown in my path — things that would stop almost anyone else — and I plan to continue to defy the odds and stand up for what’s right for as long as I live. I don’t believe people ever want to give up — they encounter insurmountable hurdles. What if those hurdles could be removed?
I recently woke up one day and all of this had gelled in my mind to create FemmePower. The idea was fully formed and I couldn’t rest until I’d taken action on it. The idea came from beyond me. It was this amazing calling from God that I couldn’t get out of my head. I knew I had to pursue it. My journey away from abuse, and toward becoming a successful artist, technology executive, creative, and female entrepreneur, could be tapped to serve a much wider role and to fill a very real missing link in the pipeline toward achieving lasting autonomy, security and financial independence for other women — no matter what one’s life circumstances may be.
FemmePower is a resource providing aid to women from all walks of life, grow viable businesses, establish reachable goals, and to challenge themselves to do better.
Human trafficking and safely removing individuals from of hostage situations is an issue near and dear to my heart. Trafficked women can become inadvertently involved in the trafficking pipeline while they were trying to find gainful employment to lift their families from poverty. Many accepted (false) job offers to work overseas which were a bait-and-switch, and realized too late that the job was entirely different than what was promised, involving forced prostitution or other de-humanizing and illegal activities. Their captors cut trafficked individuals off from their families (or they’re forced to check-in with their families and fraudulently report health and happiness to fly under the radar) and, avoid return (when it is even possible — many have their identities stripped and passports taken) for fear of causing shame to their families or the societal repercussions they may face. Captors use both physical and emotional torture and manipulation tactics to gain compliance.
By seeking new opportunities and financial independence, these women became involved in something nefarious, and there are limited resources available to these individuals after a crisis has passed. Recovery and emotional and psychological rehabilitation is a lifelong process. There is an uphill battle of legal issues, particularly when they’ve been forced to perform illegal activities such as drug trafficking, organized crime, and prostitution — and this, in part, can be a major barrier to rebuilding their lives. As survivors enter the next phase of their lives, with robust support, their entrepreneurial spirit can be re-awakened and nurtured into a viable path toward a better life.
Survivors face tremendous psychological impact, and PTSD, anxiety, depression and substance abuse can result. FemmePower will work hand-in-hand with mental health professionals and legal case workers to aid in the healing and rebuilding process.
Society at large needs to shift their mindset from classifying survivors as victims, and work together to support these women as they pave sustainable paths to gainful careers. Trauma psychology plays a key role, and FemmePower will work with mental heath professionals serving survivors to nurture, support and cultivate the ability to thrive for life via business ownership for those who wish to pursue that path. FemmePower is a step toward the future for women emerging from vulnerable and at-risk situations. We foster social rehabilitation and re-integration via business ownership.
We will operate a network of vetted and trusted business ownership proxies for women living in repressive conditions worldwide who are working to establish secure and long-term exit plans.
FemmePower works hand-in-hand with crisis organizations such as Gino McKoy’s Kinder Krisis to provide long-range aid to women and their families after safe environs have been established. FemmePower is designed to support women and their families as they enter the next phase of their lives looking toward a bright future. We’ll lay a network of roots throughout the world promoting freedom of education, free exchange of resources, positive change and growth. FemmePower shall work in tandem with organizations throughout the world who are already involved in their local communities to ensure that we have a positive cultural impact.
FemmePower’s icon is the lotus flower because it is born of the dark and comes from mud. This symbolizes the journey of the female entrepreneur. There are universal truths and challenges that all women business owners have in common with one another no matter where they came from, or where they’re going.
The transformative power of the lotus has special meaning to me . I was born in July, my birth flower is the water lily (lotus) and my life has ebbed and flowed like the tides with phases of the moon — blossoming, fading and blooming again into something more beautiful than I could have imagined. The lotus is a symbol of hope and enlightenment, which is what FemmePower provides.
Are there three things the community/society/politicians can do help you address the root of the problem you are trying to solve?
Yes, absolutely! Everyone can become involved with FemmePower. Strong women build strong communities. I’d love to hear from other women business owners who would like to become involved in our outreach and mentorship programme and inspire others.
I’d also love to hear from people willing to invest and finance women in business, from micro-loans all the the way to more substantial financing efforts. Access to credit is a key issue that women business owners face when attempting to grow their companies and when attempting to source inventory or materials. I love Kiva’s model of social financing and hope to implement something similar, or work directly with Kiva, Grameen Bank, and other local financiers and angel investors who would be willing to support women in business. I hope to speak with attorneys who are willing to provide low-cost or pro-bono services to female start-ups, and with people who want to work with our clients hands-on. Anyone who is interested in becoming involved in our network of mentors, people willing to volunteer their time and space on their feed to draw awareness, and as a way to highlight inspirational success stories of other female enterpreneurs.
Politicians can take real action beyond vocal advocacy and enforce anti-trafficking laws and ensure that women facing criminal charges as a result of their situation receive immunity from legal as they begin the process of rebuilding their lives. We can earmark more federal grants, scholarships, and money to get new female owned and operated businesses off the ground and plan for long-term success.
Female business owners who would like to be interviewed and profiled on the FemmePower website and social channels, please reach out! I love to share inspiring stories.
What specific strategies have you been using to promote and advance this cause? Can you recommend any good tips for people who want to follow your lead and use their social platform for a social good?
FemmePower is quite new. Things are just getting off the ground and reception so far has been incredibly promising. I’m so excited about this and I hope you will be, too. I’m using a grassroots social media strategy and pointing people to the website () as a starting point. The first thing I’d like to do to advance the cause is to raise awareness via social media (I’ve already started on Instagram at) about challenges women in business face at all levels, across industries, cultures, races and professions. I’ve been on a fact-finding and sharing mission to raise the voice of the cause. I’m first connecting with localized resources that are already in place to advance female business ownership and other like-minded influencers. I’ll then leverage my primary social media accounts to highlight our efforts and our progress. This is a journey and I want to take you with.
My biggest tip for anyone who’d like to follow my lead and use your social platform for good is to do it! You have a voice — raise it. Your followers watch you for a reason — there is something that makes them feel connected to you and interested in your life, your views. There’s no good deed too small. There are so many eyes on your feeds, why not take the time out of your day to inspire somebody? You never know when that will make all the difference for them to keep going. But don’t use it as your only method — get out there and get your hands dirty, meet others, never stop expanding your network and horizons because you never know what amazing door will open next.
What are your “5 things I wish someone told me when I first started” and why? Please share a story or example for each.
To be honest, if someone had told me five things when I first started, I probably wouldn’t have. I’ve always had to learn things the hard way, and experience it for myself. It’s a chief complaint of my family and boyfriend. At times they get to say “I told you so” but had I not learned to follow my heart and intuition, I wouldn’t be where I’m at now, I wouldn’t have had the incredible experiences I’ve had thus far and I wouldn’t have the guts to create an endeavor like FemmePower.
1..
2. Be vocal! Talk to everyone about what you’re doing — don’t be afraid to toot your own horn. It used to be that I was shy and reserved to share my ideas and what I’m working on — but whenever I have, it’s had an overwhelmingly positive result. Not everyone will be receptive or supportive to what you’re doing, and that’s okay. It’s all about awareness — the more people who know, the larger your potential network becomes.
3. Don’t be afraid to ask for help. I tend to be very independently-minded and, in the past, was reluctant to trouble people and had thought that reaching out for guidance and support was a sign I wasn’t competent, when in fact, it’s the opposite. Everyone needs support and those learnings will expand your horizons. Everyone starts somewhere. My knowledge is nothing compared to the vastness of the universe — we need to rely on one another. Most people will be eager to help and, when possible, offer to connect you with others or share knowledge they have to share.
4. Be true to yourself. Don’t let others define you. This seems like an over-shared platitude, but I think it’s so popular because it’s far easier said than done. In my life — past and present — I’ve encountered a lot of people who wanted to pigeon-hole me into focusing on just one thing, when the truth is, that I’ve been tremendously unhappy when I’ve tried. The truth is that I’m multi-faceted in talents and interests, and my truth is to live them all. They all exist in harmony for me. Even as I’ve been building out my social network, the major advice is to narrow focus to one thing — but that’s just not me. Initial feedback for TimeJump Media was dubious and many advised me to narrow the focus. I seriously considered it — I didn’t want to take the wrong steps. But in the end, I’ve got to do what vibes with my perspective and when that is aligned, I have seen that things fall into place — even when it seems to be against all odds. It’s amazing to see.
5. Don’t take things personally and be adaptable. The arts are a fickle place to be. It is highly unpredictable — you never know what will catch on or what will sink. An artist’s output is highly subjective, and (I can’t speak for other artists) for me, everything I create is tied to a piece of my identity. When somebody responds poorly to something I’ve made, it feels like a piece of me is torn away. My moods are extremely mercurial. I used to focus on one negative comment out of many, many positive ones and let put me into a funk for days. I’ve had to learn to recognize that their opinions aren’t a refection of who I am or the value of my work or worth as a person. An artist’s role is to get people to react, and those reactions come in many forms. You can’t take it personally or it could destroy you. People’s reactions to your creativity are a reflection of themselves and their own worldview, not of you.
You are a person of enormous influence. If you could inspire a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. :-)
My long-term goal and the next phase of FemmePower is to develop a global “underground railroad” of sorts to aid women seeking to exit repressive regimes. This will be network of allies who are willing and able to help women living in such conditions bring themselves and their families to safety — across borders if necessary — and to seek asylum where they can rebuild their lives and find a new equilibrium of normal as they look toward the future — but most of all, to be free, and happy.
If I could inspire a movement that would bring the most amount of good to the most amount of people, it would be a movement to freely share knowledge. Everyone has a special knowledge and in-depth understanding of everything in this world. The Internet has been such an amazing development because it provides a vast amount of knowledge across the world. But there’s still a cost to accessing the web. It frequently costs to get online — not everyone has access to a public library or free wi-fi. The barrier to entry is literacy. If you’re unable to read, you find limited value in it.
With a global free mind-exchange, I would connect people to like-minded people in a knowledge-transfer network in which they can tap into that exchange of resources and be able to communicate with experts on any subject in an accessible way. It could be face-to-face. It could be online. It could be in written letters, over the phone, or via any other means. There’s still a separation between online life and off — if there were a way to meld and merge these into a truly free pool of human intelligence, how much more powerful and well-off would the human race be?
I’ve had incredible opportunities to speak with many people, and every single one of them has had fabulous ideas about something — how to improve things, how to change things, how to simplify things, how to connect ideas that aren’t readily apparent. Despite advances in connectivity over the past 25 years, people still have trouble getting their ideas out, and sharing their expertise. Access to education is limited and expensive. Student loan debt is crippling generations of Americans. Why? Knowledge can be free — and we are all enriched by its exchange. It’s already there but we aren’t harnessing its full potential.
Can you please give us your favorite “Life Lesson Quote”? Can you share how that was relevant to you in your life?
Nah — life lessons for me aren’t found in Pinterest quotes. At times, I’ll share ones I find to be meaningful and articulate my life philosphophy — but if you’re getting your life lessons from memes, you’re not living!
Go out there, open your heart, your mind and your soul to the beauty and abundance the universe has to offer. It’s out there. You can find it. Stop over-thinking and allow yourself to vibe with the universe. No matter what you’ve faced or are facing — live life to the fullest. Fail frequently and use those mistakes and failures to rise higher in the future. I’m here to tell you: You have boundless possibilities living within you — even if they haven’t been unlocked yet.
Is there a person in the world, or in the US whom you would love to have a private breakfast or lunch with, and why? He or she might just see this, especially if we tag them. :-)
I’d love to have a private breakfast or lunch with my sister Pauline, who was adopted as an infant. I’ve never met her. She was born in Ottumwa, Iowa on February 11th, 1967. Her birth name was Pauline Lee Neff. It was a closed adoption managed by American Home Finding Association. We’ve been searching for her for years. We don’t know the name of her adoptive family, her current name or the name she was raised with, where she was raised or is living now. We don’t even know if she knows she was adopted or would want to find us. But Pauline — if you’re reading this, we love you and think of you every day. If anyone out there has leads about who she might be or how to connect, I’d be tremendously thankful if you could get in touch!
How can our readers follow you on social media?
I’m most active on Instagram at and on Facebook at. Feel free to follow me to keep up with me and shoot me a message to say hello! FemmePower can be followed at and.
Also check out my websites:
My personal website:
FemmePower:
TimeJump Media:
Photo credit: John Wagner / Hair, makeup and styling by Jansel Hutton
This was very meaningful, thank you so much!
THANK YOU!!!!! I’m very excited to see the series and read about what other influencers are doing and hopefully we can join forces to make an even bigger impact in the future! | https://medium.com/authority-magazine/the-social-impact-heroes-of-social-media-listen-to-your-intuition-ed7523be613f | CC-MAIN-2019-30 | en | refinedweb |
Scroll down to the script below, click on any sentence (including terminal blocks!) to jump to that spot in the video!
gstreamer0.10-ffmpeg
gstreamer0.10-plugins-goodpackages.
Alright, remember this hardcoded quote? Let's create a new class that can replace this with a random Samuel L Jackson quote instead.
In
AppBundle I'll create a new directory called
Service. Inside of our new
Service directory let's add a php class called
QuoteGenerator, and look how nicely it added that namespace for us!
Let's get to work by adding
public function getRandomQuote(). I'll paste in some quotes, then we can use
$key = array_rand($quotes); to get one of those quotes and return it:
Next, I want to register this as a service and use it inside of my controller. So hit
Shift+
Command+
O, search for
services.yml, delete the comments under the services key, and put our service name there instead. I'll give it a nickname of
quote_generator:
Tip
If you're using Symfony 3.3, your
app/config/services.yml contains some extra code
that may break things when following this tutorial! To keep things working - and learn
about what this code does - see
Notice that PHPStorm is autocompleting my tabs wrong, I want to hit tab and have it give me four spaces. So let's fix that real quick in preferences by first hitting
command+,, then searching for "tab". In the left tree find yml under "Code Style" and update the indent from 2 to 4. Click apply, then ok and that should do it!
Yep, that looks perfect. As I was saying: we'll call the service,
quote_generator, but this name really doesn't matter. And of course we need the
class key, and we have autocomplete here too. If you hit
Control+
Space you'll get a list of all the different keys you can use to determine how a service is created, which is pretty incredible.
So we'll do
class, but don't type the whole long name! Like everywhere else, just type the last part of the class: here
Quote and hit tab to get the full line. Now add an empty
arguments line: we don't have any of those yet:
This is now ready to be used in
MovieController.
Use
Command+
Shift+
] to move over to that tab. And here instead of this quote, we'll say
$this->get('') and the plugin is already smart enough to know that the
quote_generator service is there. And it even knows that there is a method on it called
getRandomQuote():
This is one my favorite features of the Symfony plugin.
Save that, head back to the form, refresh and now we see a new random quote at the bottom of the page.
Now in
QuoteGenerator, let's pretend like we need access to some service - like maybe we want to log something inside of here. The normal way of doing that is with dependency injection, where we pass the logger through via the
constructor. So let's do exactly that, but with as little work as possible.
I could type
public function __construct, but instead I'm going to use the generate command. Hit
Command+
N and pick the "Constructor" option from the menu here. I don't think this constructor comment is all that helpful, so go back into preferences with
Command+
,, search for "templates", and under "file and code templates", we have one called "PHP Constructors". I'll just go in here and delete the comment from the template.
Ok, let's try adding the constructor again. Much cleaner:
At this point we need the logger, so add the argument
LoggerInterface $logger:
This is the point where we would normally create the
private $logger property above, and set it down in the constructor with
$this->logger = $logger;. This is a really common task, so if we can find a faster way to do this that would be awesome.
Time to go back to the actions shortcut,
Option+
Enter, select "Initialize fields", then choose
logger, and it adds all that code for you:
We don't even have to feed it!
Farther down, it's really easy to use,
$this->logger->info('Selected quote: '.$quote);:
We've added the argument here, so now we need to go to
services.yml, which I'll move over to. And notice it's highlighting
quote_generator with a missing argument message because it knows that this service has one argument. So we can say
@logger, or even
Command+
O and then use autocomplete to help us:
Head back, refresh, it still works and I could go down here to click into my profiler to check out the logs. Or, in PhpStorm, we can go to the bottom right Symfony menu, click it and use the shortcut to get into the Profiler. Click that, go to logs, and there's our quote right there.
The autocompletion of the services and the ability to generate your properties is probably one of the most important features that you need to master with PHPStorm because it's going to help you fly when you develop in Symfony. | https://symfonycasts.com/screencast/phpstorm/service-shortcuts | CC-MAIN-2019-30 | en | refinedweb |
Opened 13 years ago
Closed 13 years ago
Last modified 48 years ago
#250 closed bug (Fixed)
getUserEntryForID dies
Description
The following program dies with an "Illegal instruction" error: module Main where import System.Posix.User main = do getUserEntryForID 0 return () This is also the case for all the other uids I tried. I'm running ghc 6.2.1 on Solaris 9. - Ed
Change History (2)
comment:1 Changed 13 years ago by
comment:2 Changed 13 years ago by
Logged In: YES user_id=126328 Fixed in CVS and I expect it to be available in the upcoming 6.2.2. This turned out to be much more tricky after all: Solaris defines some functions in system headers which the mangler mangled to death when it should leave them untouched.
Note: See TracTickets for help on using tickets. | https://ghc.haskell.org/trac/ghc/ticket/250 | CC-MAIN-2017-39 | en | refinedweb |
how software can use the MicroBlaze user vector exception functionality, including simple code examples to illustrate typical use cases, and references to the documentation where the feature is described.
The MicroBlaze processor provides a user exception that can be invoked by calling the user exception vector.
This article describes how to declare and call a user exception handler from software.
The User Vector, by default at address 0x8, is defined in the GNU Compiler Tool crt0.o file, where it calls the function named _exception_handler.
This function is defined in the newlib library, but not used anywhere in the BSP, so you can directly implement the function in your code.
It can be written in a high level language (C, C++) or assembler.
There are two ways to call a user exception from software, using the normal subroutine call instruction bralid, or using the brki instruction.
The brki instruction has the advantage of automatically disabling breaks, and thus also interrupts.
When a user exception handler coded in C is called with brki, breaks must be enabled in the handler before returning.
This can be done with an msrclr instruction, provided that the MicroBlaze parameter C_USE_MSR_INSTR is set to 1.
Also, the return from the user exception handler normally assumes that the call was made with an instruction using a delay slot.
This can be handled by adding a nop instruction after the brki.
When the MMU is enabled, the user exception handler is executed in real privileged mode, which makes it suitable as an operating system call entry point.
Examples
Here are two simple code examples written in C:
Note that although a message is printed in the handler in these examples, this is generally not a good practice.
Figure 1 Calling handler from bralid
#include <stdio.h>
#define CALL_USER_VECTOR asm("bralid r15,0x8 ; nop")
void _exception_handler ()
{
xil_printf("In user exception!\r\n");
}
)
{
xil_printf("Main: Calling user exception...\r\n");
xil_printf("Main: Done.\r\n");
return 0;
}
Figure 2 Calling handler from brki
#include <stdio.h>
#define CALL_USER_VECTOR asm("brki r15,0x8 ; nop")
#define CLEAR_BIP asm(msrclr r0,8)
void _exception_handler ()
{
xil_printf("In user exception!\r\n");
CLEAR_BIP;
}
int main(
)
{
xil_printf("Main: Calling user exception using brki...\r\n");
xil_printf("Main: Done.\r\n");
return 0;
} | https://www.xilinx.com/support/answers/53823.html | CC-MAIN-2017-39 | en | refinedweb |
Hi,
I have a "masterpage" custom control which holds two ContentControl elements (one used for "header content" and one for "main content"). Until now it has only been used in static scenarios but now I want to be able to change the datatemplate
used in the main content area dynamically according to some kind of MainContentType property. My first thought is to use ContentTemplateSelector as shown in the code below. My problem is that when running the code the MainContentSelector.SelectTemplate
is entered but the input parameter "item" is null. What do I have to do to have the item holding information of wich datatemplate to return.
Here follows the code of the masterpage custom control.
using System.Windows;
using System.Windows.Controls;
namespace MasterPage
{
public class MasterPageControl : Control
{
public static DependencyProperty MainContentProperty =
DependencyProperty.Register("MainContent",
View Complete Post
View:
-
Categories:
Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend | http://www.dotnetspark.com/links/40709-datatemplateselector-custom-control.aspx | CC-MAIN-2017-39 | en | refinedweb |
Sort the array. For every pair of sticks u, v with stick u occuring before v (u <= v), we want to know how many w occuring after v have w < u + v.
For every middle stick B[j] = v, we can use two pointers: one pointer i going down from j to 0, and one pointer k going from the end to j. This is because if we have all w such that w < u + v, then decreasing u cannot make this set larger.
Let's look at an extension where our sorted array is grouped into counts of it's values. For example, instead of dealing with A = [2,2,2,2,3,3,3,3,3,4,4,4], we should deal with only B = [2, 3, 4] and keep a sidecount of C[2] = 4, C[3] = 5, C[4] = 3. We'll also keep a prefix sum P[k] = C[B[0]] + C[B[1]] + ... + C[B[k-1]] (and P[0] = 0.)
When we are done setting our pointers and want to add the result, we need to add the result taking into account multiplicities (how many times each kind of triangle occurs.) When i == j or j == k, this is a little tricky, so let's break it down case by case.
- When i < j, we have C[B[i]] * C[B[j]] * (P[k+1] - P[j+1]) triangles where the last stick has a value > B[j]. Then, we have another C[B[i]] * (C[B[j]] choose 2) triangles where the last stick has value B[j].
- When i == j, we have (C[B[i]] choose 2) * (P[k+1] - P[j+1]) triangles where the last stick has value > B[j]. Then, we have another (C[B[i]] choose 3) triangles where the last stick has value B[j].
def triangleNumber(self, A): C = collections.Counter(A) C.pop(0, None) B = sorted(C.keys()) P = [0] for x in B: P.append(P[-1] + C[x]) ans = 0 for j, v in enumerate(B): k = len(B) - 1 i = j while 0 <= i <= j <= k: while k > j and B[i] + B[j] <= B[k]: k -= 1 if i < j: ans += C[B[i]] * C[B[j]] * (P[k+1] - P[j+1]) ans += C[B[i]] * C[B[j]] * (C[B[j]] - 1) / 2 else: ans += C[B[i]] * (C[B[i]] - 1) / 2 * (P[k+1] - P[j+1]) ans += C[B[i]] * (C[B[i]] - 1) * (C[B[i]] - 2) / 6 i -= 1 return ans | https://discuss.leetcode.com/topic/92223/python-straightforward-with-explanation | CC-MAIN-2017-39 | en | refinedweb |
Wiki Team/Guide/Formatting
For a quick overview of wiki markup see the Wiki Team/Guide/Overview!
Text will show up just as you type it (provided you begin it in the first column). Multiple spaces are compressed, and line endings are ignored (except blank lines).
Use a blank line to start a new paragraph. Multiple blank lines add more vertical space.
Wiki markup code is supposed to be simple and easy to learn. You can also use most HTML markup, if you prefer it, except for links.
Contents
Fonts
Use these to change fonts:
Sections
It is often useful to divide articles into sections and subsections. The following markup can be used. You must begin these on a new line.
An article with four or more headings will automatically create a table of contents. Using HTML heading tags also creates proper section headings and a table of contents entry.
Lists
Wiki markup makes lists fairly easy:
- See also Template:Definition table
- HTML lists are also allowed
- Blank lines should be avoided, they break list numbering and sub-list. Use <br> instead.
Linking
Linking is covered in a separate page: Wiki Team/Guide/Links
Preformatted text
Text which does not begin in the first column of the wiki editing box will be shown indented, in a fixed width font.
For example, this line was preceded by a single space. It preserves whitespace formatting: A B C D E F G H I
A single paragraph of text can be entered with a space in column 1 (the first character position in the wiki editing box) of every line to preserve the raw text format of the entry.
For example, this block of lines was entered with 4 spaces before each line. A B C D E F G H I
The parser tags
<pre> and </pre> may be entered before and after blocks of preformatted text to preserve the raw format of the text. This is suitable for chat transcripts such as Design_Team/Meetings/2009-03-01.
Line wrapping
Line wrapping can be adjusted with the style property white-space (See)
- With this tag and style,
<pre style="white-space:pre-wrap">, whitespace is preserved by the browser. Text will wrap when necessary, and on line breaks.
Code Examples
Raw text is appropriate for showing computer code or command examples, such as this famous little program:
main(){ printf("Hello, World!\n"); }
- Coding format may be embedded in a line of regular text with the <code>
code text</code> wiki syntax.
- Blocks of pre-existing code (often with structured linebreaks) may be preserved by entering a space before
<nowiki>in the first wiki page column as in this example:
<nowiki</nowiki>
The result looks like the following:
Stopping Text flow
See mediawikiwiki:Help:Images#Stopping_the_text_flow.
Tables
Tables are covered in a separate page: Help:Tables
nowiki
Use <nowiki> to insert text that is not parsed by the wiki, e.g.,
- <nowiki>====Heading 4====</nowiki>
will result in ====Heading 4====, rather than the creation of an actual heading, Heading 4
Categories
An important navigational tool in the wiki is the use of categories. It is customary to include one or more category tags to the end of a page to indicate in what topical area the page belongs. For example, this page should be listed under the [[Category:General public]] and [[Category:HowTo]] categories. You can create a new category, if really necessary, simply by including a new category tag, e.g., [[Category:My new category]].
Templates
Another tool that helps make the wiki more readily navigable is the use of templates (Please select the Template namespace in the pull-down menu on the Special:Allpages page to view the current list of available templates). Templates provide a consistent look to pages as well as a shorthand for communicating things like {{Stub}} and {{Requesttranslation}}. Feel free to ask for help (here) regarding creating new templates you may need.
Translation
We have a mechanism for maintaining multilingual versions of pages in the wiki. Please see Translation Team/Wiki Translation for more information about translations. We also recommend the use of the Google machine translation for pages that have yet to be translated by a human expert. To see a Google translation, simply use the dropdown selector in the sidebar. | http://wiki.sugarlabs.org/go/Wiki_Team/Guide/Formatting | CC-MAIN-2017-39 | en | refinedweb |
Wikiversity:Developing Wikiversity through action research
This page is to coordinate activities in this collaborative action research project to develop and, in some senses, define Wikiversity. Essentially, it asks: "What are we doing to learn and facilitate learning on Wikiversity, and how can we improve what we are doing?" Some background to this research can be found on the "About" page and, on a more personal level, here. (Disclosure: this research constitutes the bulk, if not the whole, of my empirical research towards a PhD in education - however, I do not claim any 'ownership' over the research whatsoever. Cormaggio talk 22:51, 6 February 2008 (UTC)) Discussion about this project can be found and continued on the talk page, and if you wish to add yourself as an interested participant, please list yourself on the participants page. But again, please, edit this page to make it a truly collaborative research project...
Note: Cormac Lawler presented the results of his research as "Wikiversity: a project struggling with its own identity" at Wikimania 2010 in Gdansk, Poland.
Contents
Scope of research[edit]
This comes from discussion on the Colloquium in January/February 2008 - please also see Agenda
- Clarifying learning model on Wikiversity (see Wikiversity learning model and Learning by doing, and workspaces below)
- Develop engaging content/activities that would serve to build learning communities
- Engage in outreach to invite participation
- Explore tools for improving communication
Goals of research[edit]
- Identify affordances and constraints of a wiki for learning - and apply this to the practice of building learning communities
- ...
Means of carrying out research[edit]
- Form groups - set goals, tasks, and deadlines (/Groups, meetings and deadlines)
Workspaces[edit]
- Wikiversity learning model/Reading group
- Wikiversity learning model/Discussion group
- Making Wikiversity a personal learning space
- Building successful learning communities on Wikiversity
- Wikiversity:Being educational
Theories in use[edit]
- See /Theories
- Mainly activity theory (specifically Engestrom's "expansive learning"), and communities of practice
- Also strands of radical pedagogy, power, democracy, organisational learning (please add ideas to the theories page)
Form of research[edit]
The box below outlines a series of steps that correspond to an action research methodology for facilitating and assessing the development of Wikiversity. These steps correspond to headings below - all of which, of course, are editable. Steps 1-3 (inclusive) are the "pre-research" steps, and envisaged to be worked on immediately and simultaneously; 4-7 (inclusive) constitute one "cycle", or iteration, of the action research spiral (see action research page for details).
Projects[edit]
(these need updating and refactoring in parts)
Describe Wikiversity[edit]
- See Wikiversity:Reports for now..
- Wikiversity:Reports/15 January 2007
- add others..
- Something should be said about the interaction which a Wiki permits...asynchronous communication, decentralized information gathering, "trust"..et cetera
- /List of uses of Wikiversity
- What sort of participants does Wikiversity attract? /participant profiles
- /Wikiversity identity - what sort of social/community identities result from participating on Wikiversity
- Aspects that work/don't work (See Wikiversity:Reports, Wikiversity:What I have learned, Wikiversity:Problems I have encountered...)
- List of introspective dichotomies - eg, how is Wikiversity characterized with respect to:
- inclusivity vs. exclusivity?
- process vs. product?
- process vs. jurisprudence?
- virtual vs. brick-and-mortar?
- prescriptive vs. descriptive?
- aggregation of existing knowledge vs. generating new knowledge?
Increasing participation[edit]
Economy of thought: Currently, not too many learning communities have cropped up. This might be due to the lack of participants on Wikiversity. If we adopt a "learner-teacher" model, there may be a mismatch between participant interests. Essentially, we have an "economy of thought" analogous to bartering...
- Increase participation through Wikiversity outreach. It is important to let people know we are here, but it is more important to let them know that we know they are out there. If you find a group that you think could or should interact with Wikiversity, go ahead and start them a page here, linking it with learning resources that are "up their alley".
- Some examples:
- Wikiversity and Connexions collaboration - a project to build and research (amongst other things) a technical bridge between the two projects.
- SchoolForge - This is a Coalition of organizations that is dedicated to promoting and develping Free, Libre and Open Source Software in Education. Wikiversity has effectively "joined" the coalition by adding their page in the main namespace. Both SchoolForge and Wikiversity grow by linking common resources such as the Online Education Database and organizations like the FLOSSE Posse
- Wikimania - Wikiversity has an intrinsic funtion as a wiki-based community that is flexible enough to "go with the flow" of events and online social networking. As a Wikimedia Foundation project, we can prove our worth by helping to organize learning activities and providing services such as the OpenMoko/Internet Kiosk, VoIP teleconferencing and Wiki Campus Radio.
- other examples that could increase participation
- Document feedback, "Follow-through" and progress at Wikiversity talk:Wikiversity outreach. As examples, follow links to the home sites of organizations that are listed under Category:Workspaces. See if their site includes a reciprocol link to Wikiversity. Several of those sites, particularly those that use MediaWiki contain a Wikiversity page that serves as an "outpost" for linking relevent shared resources, defining roles and coordinating action.
- Another Question: How important/effective is reciprocal linking in increasing participation and aiding collaboration? Does the practice spread people too thin? Or does it actually build community?
- Increasing commitment within the established community
- Holding regular meetings
- Determining tasks
- Having weekly deadlines for these tasks
- Trying to build something people want to participate in
- Using this community to increase the number of participants and to keep those who are already involved
- Do some marketing: politely making people enthusiastic for Wikiversity
- Conflict management: Conflicts can be lethal to a group of people. Solutions: different guilds for different opinions within Wikiversity. Stimulate social bonding, so friendship becomes more important than being part of a language community.
- Some tasks in organizing can be dull. So, there must be a goal to achieve. A goal could be to build up which can be appealing to a large crowd and could make people proud of having achieved it.
- What do you want to achieve?
- I want to make a historical atlas.--Daanschr 10:02, 30 January 2008 (UTC)
Establishing credibility[edit]
As Wikipedia must continue to prove its value as an enclopedia, so must Wikiversity prove itself as an online learning community. What actions can insure that Wikiversity content is viable? Some stubby ides have been planted at Wikiversity:Quality and some Content development guidlines have been offered.
- More questions: Are the naming conventions established in the beginning here at the English language Wikiversity compatible with those in mainstream academia? Are they being replicated equivalently on other language versions? Could Wikiversity gain credibility by improving communications across political barriers and cultural boundaries?
Outreach[edit]
- Go back out to other Wikimedia sisterprojects, and get opinions and impressions on how Wikiversity is seen from other parts of the Wikimedia-sphere. See if we can use Wikiversity's Academic freedom to aid other Wikimedians in their efforts from within projects with policies that restrict certain desirable movements. This should involve going out of our "comfort zone" to projects we haven't been involved with, in order to 'learn new things and bring them back here to teach/share/explore how Wikiversity can become more effective as a service community.
- Exercise the Wikiversity outreach "muscle" by taking notes and building relevance within other Internet spheres such as Google Groups, Myspace or YouTube - and, of course, with other education-oriented communities - to find other Wikiversitans "out there"..
- Team up and build Wikiversity "outposts" in somewhat the same way that "brick-and-mortar" institutions build extension services and field offices. Build learning resources here at Wikiversity for tracking the work we do in other spheres. Introduce people here to new vistas and show them the "tips and tricks" for working (and playing) in other Internet environments.
How action/reflection is to be carried out[edit]
Please add suggestions about various aspects of point three above - and sign your comment to indicate that this is your point-of-view, and not necessarily someone else's.
- Ideas for groups, meetings and deadlines have been proposed - we can work on these at /Groups, meetings and deadlines.
- Perhaps, when we develop a number of principles of the research, they should be developed into a page like /Principles (or perhaps one or more of /Overview, /Proposed methodology, or /Proposed methods would be better?). The timeframe for this work could be developed at /Timeframe; milestones at /Milestones, Intended outcomes at /Intended outcomes ...
Other projects[edit]
There is another action research project on Wikiversity underway at Learning to learn a wiki way - please also participate there if you are interested.
Elearning_in_open_source_education is investigating the concept of 'open source education' and e-learning within such approaches. | https://en.wikiversity.org/wiki/Wikiversity:Developing_Wikiversity_through_action_research | CC-MAIN-2017-39 | en | refinedweb |
Hastily-made repositories often don't work
If you create your repository before its URL has been created and is resolving online, you may have an inaccessible repository. Assuming it doesn't yet contain any information, just delete it and recreate it.
Alternatively, you may experience a DNS propagation delay if you can access your repository via http within the DreamHost system, but see an empty directory from outside DreamHost. If you can't wait, try querying the DreamHost DNS servers for your IP and use user share the same namespace, even if under different URLs. With a single FTP user, a given repository name can only be used once for all svn domains.
If you create, you cannot have a second repository named. If you do configure, it.
You must configure a subdomain such as realsvn.exampleOnDreamHost.com, then create a subdomain for svn.exampleNOTonDreamHost.com.
Finally, create a mirror to that domain from the domain you want. Make sure you wait for DNS propagation to occur before testing everything.
Handmade repos are read-only panel to delete and recreate your repository. Your test repositories will not function properly in production as. Overwriting files causes them to have your user's group instead of dhapache. If your repository is relatively mature when you make this mistake, and you have no backup, you won't have any options.
Some clients require MD5 passwords
If you get "access denied" warnings, you may need to use MD5 passwords instead of the standard variety created by the DreamHost panel. Use htpasswd -nm username to generate the entry (ala username:passwd-hash) for your reponame.passwd file, then use an editor to replace your standard password entry.
Don't overwrite your reponame.access> and reponame.passwd files with echo "username:passwd-hash" > reponame.passwd. Overwriting files causes them to have your user's group instead of dhapache, leaving your repository in a read-only or inaccessible state.
Interfacing with Subversion from PHP
You'll need to install the PECL extension SVN:
.htaccess security set as a working copy checked out from your Subversion repository. If you elect to do this, be certain to modify your site's .htaccess to prevent users from browsing Subversion's control files. Something simple in the root of your site such as the following should suffice:
RedirectMatch 403 /\.svn must manually be prepared that if you now use the DreamHost control panel to change settings (at least if you make changes to the user list), the panel not only reverts the file permissions back to dhapache but also overwrites both files, overwriting any changes you made to them. So you'd should always have a backup to reconstruct anything important that might get overwritten.
"chmod 644" is giving "world" read access to those files which may be a security risk. While these files won't be accessible via http (unless you also publish your svn/ which would likely be unwise), these files are accessible to other shell users to your account and MAYBE by other DreamHost account holders on your shared host. Therefore, for real security, you should not use http, but instead https (which is also the only option if accessing via WebDAV) or SSH. | https://help.dreamhost.com/hc/en-us/articles/215466008-Subversion-considerations-and-caveats | CC-MAIN-2017-39 | en | refinedweb |
Dear haskell-cafe patrons,
I've been working through an exercise in Hudak's _The Haskell School of
Expression_ (ex. 3.2, creating a snowflake fractal image), and am seeing
some strange drawing behavior that I'm hoping somebody can shed some
light on.
My initial solution is below (it requires HGL for Graphics.SOE):
module Main where
import Graphics.SOE
main =
runGraphics (
do w <- openWindow "Snowflake Fractal" (600, 600)
fillStar w 300 125 256 (cycle $ enumFrom Blue)
spaceClose w
)
spaceClose w =
do k <- getKey w
if k == ' '
then closeWindow w
else spaceClose w
minSize = 2 :: Int
fillStar :: Window -> Int -> Int -> Int -> [Color] -> IO ()
fillStar w x y h clrs | h <= minSize
= return ()
fillStar w x y h clrs
= do drawInWindow w
(withColor (head clrs)
(polygon [t1p1,t1p2,t1p3,t1p1]))
drawInWindow w
(withColor (head clrs)
(polygon [t2p1,t2p2,t2p3,t2p1]))
sequence_ $ map recur [t1p1,t1p2,t1p3,t2p1,t2p2,t2p3]
where tanPiOverSix = tan(pi/6) :: Float
halfSide = truncate $ tanPiOverSix * fromIntegral h
hFrag = truncate $ tanPiOverSix * tanPiOverSix * fromIntegral h
(t1p1,t1p2,t1p3) =
((x, y), (x-halfSide, y+h),(x+halfSide, y+h))
(t2p1,t2p2,t2p3) =
((x-halfSide, y+hFrag),(x, y+h+hFrag),(x+halfSide, y+hFrag))
reVert y = y - ((h - hFrag) `div` 3)
recur pnt =
fillStar w (fst pnt) (reVert (snd pnt)) (h`div`3) (tail clrs)".
Versioning info:
CPU: Pentium M
OS: Gentoo GNU/Linux, kernel 2.6.18
GCC: 4.1.1
GHC: 6.6
HGL: 3.1
HUGS: March 2005
[all software compiled from source using gentoo ebuilds]
Is anybody else familiar with this behavior? If not, any suggestions as
to where I should file this as a potential bug? GHC? HGL? Both? Elsewhere?
Thanks in advance for any information.
Calvin
p.s. Any stylistic or other comments about the code welcome too. | http://article.gmane.org/gmane.comp.lang.haskell.cafe/18064 | CC-MAIN-2017-39 | en | refinedweb |
On 27.02.2016 19:29, Christophe Henry wrote:
[...]
> public class CustomApplication extends Application
> {
> private static instance @Override void onCreate()
> {
> super.onCreate()
> instance =this }
>
> public trait ApplicationUtilities
> {
> public ContextgetApplicationContext(){return instance.applicationContext }
> }
> }
>
> While this piece of code would have worked with an abstract class or a
> Java 8's interface, it throws a compilation error with a trait.
>
> I know I'm trying to do unnatural things with this language but is there
> any reason for this limitation and is there a plan to implement a
> feature like this in future releases?
Hmmm, I wonder if this can be made sense of or not. A trait can be seen
as some independent piece that can be added to a class. But for it to be
independent, it cannot be non-static. In your code the trait
implementation would depend on an instance of CustomApplication and
cannot be created without supplying such an instance. If I now have a
random class X, how can I let that use the trait without an instance of
CustomApplication? I cannot. Then how does the instance get there? Is
there one instance per trait usage, or do all traits share the same
CustomApplication instance (as would be the case in the non-static case
for inner classes!)d
So what makes no sense is to have a
class X implements CustomApplication.ApplicationUtilities {}
What could work is something like:
class X {
def someField
trait T {
def getField() {return someField}
}
class Y implements T {
}
def foo(){ return new Y() }
}
def x = new X()
assert x.someField == x.foo().getField()
I kind of doubt you wanted this later version... and does not anyway...
yes, the inner class cases have been totally ignored so far. Even making
the trait static is raising havoc in the compiler... and that at least
should have passed compilation and only failed when trying to access
"someField".
Though... further thought leads me to see things a little different. In
a trait you can do
trait X {
def foo() { return bar }
}
class Y implements X{
private bar = 1
}
The property bar is not existing in the trait, it is expected on
"traited" class. Following this logic, what does it mean to access the
field of an outer class? Strictly following this logic means not to be
able to do that. Resolving first outer class fields and then traited
class means to get into trouble with dynamic properties.
bye Jochen | http://mail-archives.apache.org/mod_mbox/groovy-dev/201602.mbox/%[email protected]%3E | CC-MAIN-2017-39 | en | refinedweb |
Social Coding/Hosting a Project at GitHub
Contents
Eclipse Foundation-managed GitHub Organizations
The Eclipse Foundation manages several organizations on GitHub; one for each of the forges. to ensure the authors have valid CLAs, and that the contribution is Signed-off-by.
Moving Your Existing GitHub Project to Eclipse
You will need to engage in the project creation process. Please see Starting a New Project. complete history will remain in the repository, but it will be obscured to a separate namespace.
-! | http://wiki.eclipse.org/index.php?title=Social_Coding/Hosting_a_Project_at_GitHub&oldid=356506 | CC-MAIN-2017-39 | en | refinedweb |
Hi Stefan,
On 4/25/07, Stefan Seelmann <[email protected]> wrote:
>
> Hi Alex,
>
> Alex Karasulu schrieb:
> > Hi Stefan,
> >
> > I specifically had worked a JIRA issue to make sure the modify timestamp
> on
> > the
> > schema subentry at cn=schema was modified when there were any changes to
> > schema entries made via cn=schema modifs or changes made directly on
> schema
> > entries under ou=schema. However I may have botched this. I'm very
> > interested
> > in finding out if the timestamp update is failing.
> >
> > Alex
> >
>
> The good news: timestamp update works fine.
Oh coolio!
However I found the reason why LS fails to detect if the schema was
> modified. When connecting to the directory LS requests the attributes
> modifyTimestamp and createTimestamp by its names from the server, but
> they aren't returned:
> --------------------------------------------------------------
> $ ldapsearch -x -h localhost -p 10389 -D "uid=admin,ou=system" -w
> "secret" -s base -b "cn=schema" "(objectClass=subschema)" modifyTimestamp
>
> # schema
> dn: 2.5.4.3=schema
> --------------------------------------------------------------
>
>
> However when requesting *all* operational attributes the timestamps are
> returned:
> --------------------------------------------------------------
> ldapsearch -x -h localhost -p 10389 -D "uid=admin,ou=system" -w "secret"
> -s base -b "cn=schema" "(objectClass=subschema)" "+" | grep
> "modifyTimestamp:"
>
> modifyTimestamp: 20070425172817Z
> --------------------------------------------------------------
>
>
> I verified that with both 1.5.0 and the current trunk. Should I file a
> Jira?
>
>
> But also LS executes on invalid search resquest. RFC4512 specifies that
> the client must use the search filter "(objectClass=subschema)", LS
> sends "(objectClass=*)". It isn't a big deal to change that.
Hmm this is a tough one. Really (objectClass=*) is a relaxed form of the
(objectClass=subschema) so why should it not return (I know I coded it to
return only when you ask for subschema). I wonder if the RFC allows for the
* version?
Unfortunately I'm taking care of some last minute errands today but I will
investigate this later. Perhaps a JIRA is in order just so we can come to a
conclusion on this and change either LS or ApacheDS to suite the proper
behavoir.
Thanks for the investigation,
Alex | http://mail-archives.apache.org/mod_mbox/directory-users/200704.mbox/%[email protected]%3E | CC-MAIN-2015-32 | en | refinedweb |
On Thu, 2007-03-08 at 16:32 +1300, Sam Vilain wrote:<snip>> Kirill, 06032418:36+03:> > I propose to use "namespace" naming.> > 1. This is already used in fs.> > 2. This is what IMHO suites at least OpenVZ/Eric> > 3. it has good acronym "ns".> > Right. So, now I'll also throw into the mix:> > - resource groups (I get a strange feeling of déjà vú there)<offtopic>Re: déjà vú: yes!It's like that Star Trek episode ... except we can't agree on the nameof the impossible particle we will invent which solves all our problems.</offtopic>At the risk of prolonging the agony I hate to ask: are all of thesegroupings really concerned with "resources"?> - supply chains (think supply and demand)> - accounting classesCKRM's use of the term "class" drew negative comments from Paul Jacksonand Andrew Morton about this time last year. That led to my suggestionof "Resource Groups". Unless they've changed their minds...> Do any of those sound remotely close? If not, your turn :)I'll butt in here: task groups? task sets? confuselets? ;)Cheers, -Matt Helsley-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to [email protected] majordomo info at read the FAQ at | https://lkml.org/lkml/2007/3/8/12 | CC-MAIN-2015-32 | en | refinedweb |
License: BSD3 Author: Dean Herington Homepage: Category: Testing Build-Depends: base Synopsis: Unit testing framework for Haskell Exposed-modules: Test.HUnit, Test.HUnit.Base, Test.HUnit.Lang, Test.HUnit.Terminal, Test.HUnit.Text Extensions: CPP
and the following Setup.hs:
import Distribution.Simple main = defaultMain
Example 2. A package containing executable programs
Name: TestPackage Version: 0.0 License: BSD3 Author: Angela Author Build-Depends: HUnit Executable: program1 Main-Is: Main.hs Hs-Source-Dir: prog1 Executable: program2 Main-Is: Main.hs Hs-Source-Dir: prog2 Other-Modules: Utils
with Setup.hs the same as above.
Example 3. A package containing a library and executable programs
Name: TestPackage Version: 0.0 License: BSD3 Author: Angela Author Build-Depends: HUnit Exposed-Modules: A, B, C Executable: program1 Main-Is: Main.hs Hs-Source-Dir: prog1 Other-Modules: A, B Executable: program2 Main-Is: Main.hs Hs-Source-Dir: prog2 Other-Modules: A, C, Utils
with Setup.hs the same as above..2). A few packages require more elaborate solutions (see Section 2.3).
The package description file should have a name ending in .cabal, and contain several "stanzas" separated by blank lines. Each stanza consists of a number of field/value pairs, with a syntax like mail message headers.
case is not significant in field names
to continue a field value, indent the next line
to get a blank line in a field value, use an indented "."
Lines beginning with "--" are treated as comments and ignored.
The syntax of the value depends on the field. Field types include:
Either a sequence of one or more non-space non-comma characters, or a quoted string in Haskell 98 lexical syntax.
An arbitrary, uninterpreted string.
A letter followed by zero or more alphanumerics or underscores.
Some fields take lists of values, which are optionally separated by commas, except for the build-depends field, where the commas are mandatory.
The first stanza describes the package as a whole, as well as the library it contains (if any), using the following fields:
The unique name of the package, without the version number (required).
The package version number, usually consisting of a sequence of natural numbers separated by dots (required).
The type of license under which this package is distributed. License names are the constants of the License type.
The name of a file containing the precise license for this package.
The name of the holder of the copyright on the package.
The original author of the package.
The current maintainer of the package, if different from the author. packages, possibly annotated with versions, needed to build this one, e.g. foo > 1.2, bar. If no version constraint is specified, any version is assumed to be acceptable.
A list of modules added by this package (required if this package contains a library).
This stanza may also contain build information fields (see Section 2.1.2) relating to the library.
Subsequent stanzas (if present) describe executable programs contained in the package, using the following fields, as well as build information fields (see Section 2.1.2).
The name of the executable program (required).
The name of the source file containing the Main module, relative to the hs-source-dir directory (required).
The following fields may be optionally present in any stanza, and give information for the building of the corresponding library or executable. See also Section 2.2 for a way to supply system-dependent values for these fields.
Is the component buildable? (default: True.) Like some of the other fields below, this field is more useful with the slightly more elaborate form of the simple build infrastructure described in Section 2.2.
A list of modules used by the component but not exposed to users. For a library component, these would be hidden modules of the library. For an executable, these would be auxiliary modules to be linked with the file named in the main-is field.
The name of root directory of the module hierarchy, relative to the package root (default: "."). #-}
Additional options for GHC. You can often achieve the same effect using the extensions field, which is preferred.
Options required only by one module may be specified by placing an OPTIONS_GHC pragma in the source file affected., typically containing function prototypes for any foreign imports used by the package. These will be included in any compilations via C.
A list of directories to search for header files..2.
Command-line arguments to be passed to the linker. Since the arguments are compiler-dependent, this field is more useful with the setup described in Section 2.2.
On Darwin/MacOS X, a list of frameworks to link to. Take a look at Apple's developer documentation to find out what frameworks actually are. This entry is ignored on all other platforms.
For some packages, implementation details and the build procedure depend on the build environment. The simple build infrastructure can handle many such situations using a slightly extended Setup.hs:
import Distribution.Simple main = defaultMainWithHooks defaultUserHooks
This file package.buildinfo exists after the configuration step, subsequent steps will read it to obtain additional settings for build information fields (see Section 2.1.2),.2., --with-hc-pkg and --prefix options to the configure command are passed on to the configure script.
the --copy-prefix option to the copy command becomes a setting of a prefix variable on the invocation of make install.
You can write your own setup script conforming to the interface of Section 3, possibly using the Cabal library for part of the work. One option is to copy the source of Distribution.Simple, and alter it for your needs. Good luck. | https://www.haskell.org/cabal/release/old/users-guide-0.1/x29.html | CC-MAIN-2015-32 | en | refinedweb |
Type: Posts; User: Mybowlcut
I only mentioned it because I couldn't find the source of the error, so thought it may be due to the metaprogramming code. I see now that I am completely blind!
Cheers. :)
Hey.
I'm running into a problem with a class that uses template metaprogramming. I'm not well-versed with the methodology, so I'm a bit confused as to why I'm getting the error.
The screen...
To convert an int to a string. As for appending a string to another string, that's just string concatenation.
Actually, you have misunderstood what I meant. I was asking nuzzle how he would do this:
I misread what he wrote, as I thought he said he was somehow enforcing that derived class virtual functions...
Sorry, I understand that, but I still don't understand how you would enforce that a virtual function declared as private in a derived class but declared public in the base class is not called from a...
How can they not be accessible considering my first post's sample code and output?
Here is my GUI_Object class:
class GUI_Object;
typedef boost::shared_ptr<GUI_Object> gui_object_ptr;
typedef boost::weak_ptr<GUI_Object> weak_gui_object_ptr;
class Screen_Event;
// The...
The real reason that it came about was that I was creating a GUI builder, and I needed to have a function that could create an object that derived from GUI_Object based on a string parameter and...
Because they suck at designing inheritance hierarchies. ;)
I have a GUI_Object base class and decided recently that I needed my Text class to derive from it. However, the Text class shouldn't have...
I'm surprised I haven't encountered this yet (or perhaps I have but somehow didn't notice):
#include <iostream>
#include <vector>
class Base
{
public:
virtual bool Is_Mouse_Over()...
That completely slipped my mind haha. Cheers. :)
Edit: Nevermind, will let them work it out from Paul's post.
As described in the code below: it doesn't seem possible, but I could really use it... Does anyone have any ideas of how to get around this?
#include <iostream>
#include...
That's awesome that you've figured out what's right for you. My Dad just realised after 49 years of life that he wants to direct movies, so he is studying that. Never too late to do what you want to...
Care to elaborate or just going to leave it at that?
Just bumping this up to the top.
Like that?
I've just started properly using precompiled headers in my game engine:
// stdafx.h : include file for standard system include files,
// or project specific include files that are used...
So then IDBProvider is an abstract type - you either made it inherit from a class with a pure virtual function and did not override it, or you gave it a pure virtual function directly. Your options...
To create a pure virtual function:
virtual int getInt(string columnLabel) = 0;
Please use code tags.
Please use code tags to format your code.
Your program gave me this linker error in Visual Studio:
Error 2 error LNK2019: unresolved external symbol _main referenced in function...
How did you add the files into your project? I have used CSerial successfully before, so it is definitely a problem with how you've added it or where it is on your computer. Chances are you might not...
Yeah I know it's legal - that was my point, although I should have phrased it better.
Great..
Function get_player_attributes(team_no, players_per_team, attribute_name)
' ...
get_players_attributes = player_attributes
End Function
I should mention at this point...
I guess it is "classic ASP", but I'm not sure because I have very limited experience with it.
' Used to get an array of a specific teams players' atttributes (e.g. name, goals, points)
Function... | http://forums.codeguru.com/search.php?s=6644dde5f452882112bedc40c223833b&searchid=7388481 | CC-MAIN-2015-32 | en | refinedweb |
You can subscribe to this list here.
Showing
4
results of 4
In the current release I have added a somewhat experimental "strict java" mode.
When activated with:
setStrictJava(true)
BeanShell will:
1) Require typed variable declarations, method arguments and return types
(no loose types allowed)
2) Modify the scoping of variables to be consistent with the above - to
look for the variable declaration first in the parent namespace, as
would a java method inside a java class. (As opposed to assuming
local scope and auto-allocation as in normal bsh.) e.g. you can write
a method called incrementFoo() that will do the correct thing without
referring to super.foo.
Getting this to work required only tiny changes to the namespace class, etc.
so I thought it might be useful to make it available. I anticipate that it
might be useful for people teaching Java, who don't want to deal with
explanations of differences in behavior between Java and BeanShell.
Note that there are some limitations - the GUI desktop environment is not
written to conform to "strict Java", so it probably won't run with that
mode turned on... So for now you'll be stuck with running scripts from
the command line.
Pat
With the 1.2b1 release I have added a new dynamically loadable extension - the
ReflectManager. This allows us to add new reflection capabilities supported
by later versions of Java without breaking backwards compatability.
The current use for this is to allow use of the java.lang.reflect accessibility
API to grant BeanShell access to private/protected fields, methods, and
non-public classes.
Since this is still somewhat experimental (and changes certain behavior in
field and method lookup when activated), it is turned off by default. To
activate accessibility do this:
setAccessibility( true );
I'd appreciate feedback on this new feature...
Thanks,
Pat
For those of you interested in using BeanShell as the scripting language
for an application that supports IBM's Bean Scripting Framework - The
necessary adapter is now included in the BeanShell 1.2b1 release. (You can
also grab it separately if you want).
Unfortunately we will have to wait for the BSF group to add us to the list
of "known" scripting languages in order to be automatically detected. So
in the mean time if you are writing an application that must be aware of
BeanShell you must do something like:
// register beanshell with the BSF framework
BSFManager mgr = new BSFManager();
String [] extensions = { "bsh" };
mgr.registerScriptingEngine(
"beanshell", "bsh.util.BeanShellBSFEngine", extensions );
Please let me know if you have any feedback on the adapter's behavior. It was
pretty straightforward. We are even supporting some of the more interesting
capabilities such as anonymous method invocation.
Pat
I've posted a 1.2 beta release... But I'd like to hold off making a big
announcement until we get some feedback and make this a final version.
This had a few bug fixes over the last release and some new features included
(and available separately as optional packages for those who don't want them):
- IBM Bean Scripting Framework (BSF) adapter.
- Reflective accessibility control. i.e. the ability to access private
methods, fields, and non-public classes (still alpha testing this).
- An experimental "strict Java" mode that alters the way bsh behaves slightly
to be more useful for those who are teaching Java.
I'll follow up in separate email about the new features individually and,
as always, I'm trying to make time to update the docs...
Pat | http://sourceforge.net/p/beanshell/mailman/beanshell-developers/?viewmonth=200107&viewday=27 | CC-MAIN-2015-32 | en | refinedweb |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.