text: stringlengths (20 to 1.01M)
url: stringlengths (14 to 1.25k)
dump: stringlengths (9 to 15)
lang: stringclasses (4 values)
source: stringclasses (4 values)
In the following code I get an error from Xcode saying: "Cannot use mutating member on immutable value: 'fahrenheitValue' is a 'let' constant." This code sample is from The Big Nerd Ranch Guide for iOS, 6th edition. Apart from not really understanding why the book would be wrong, I understand the meaning of the error, but I don't get how I could work around this... Could somebody tell me what I'm doing wrong here?

    import UIKit

    class ConversionViewController: UIViewController {
        @IBOutlet var celsiusLabel: UILabel!

        var fahrenheitValue: Measurement<UnitTemperature>? {
            didSet {
                updateCelsiusLabel()
            }
        }

        var celsiusValue: Measurement<UnitTemperature>? {
            if let fahrenheitValue = fahrenheitValue {
                return fahrenheitValue.convert(to: .celsius)
            } else {
                return nil
            }
        }
    }

The problem is these two lines:

    if let fahrenheitValue = fahrenheitValue {
        return fahrenheitValue.convert(to: .celsius)

You can't call convert(to:) on fahrenheitValue, because fahrenheitValue is a constant and convert(to:) tries to modify that constant, hence the error. The solution is to replace convert(to:) with converted(to:). The former doesn't return anything and tries to modify the receiver in place; the latter creates a new measurement and returns the new value. That is what you want:

    return fahrenheitValue.converted(to: .celsius)
https://codedump.io/share/cUXBufMo4WSS/1/error-when-unwrapping-a-optional
CC-MAIN-2019-47
en
refinedweb
Chapter 14. Mapping Domain Objects to the Index Structure

14.1. Basic Mapping

In Red Hat JBoss Data Grid, the identifier for all @Indexed objects is the key used to store the value. How the key is indexed can still be customized by using a combination of @Transformable, @ProvidedId, custom types and custom FieldBridge implementations. The @DocumentId identifier does not apply to JBoss Data Grid values.

The Lucene-based Query API uses the following common annotations to map entities:
- @Indexed
- @Field
- @NumericField

14.1.1. @Indexed

The @Indexed annotation declares a cached entry indexable. All entries not annotated with @Indexed are ignored.

Example 14.1. Making a class indexable with @Indexed

    @Indexed
    public class Essay {
    }

Optionally, specify the index attribute of the @Indexed annotation to change the default name of the index.

14.1.2. @Field

Each property or attribute of an entity can be indexed. Properties and attributes are not annotated by default, and are therefore ignored by the indexing process. The @Field annotation declares a property as indexed and allows the configuration of several aspects of the indexing process by setting one or more of the following attributes:

name - The name under which the property will be stored in the Lucene Document. By default, this attribute is the same as the property name, following the JavaBeans convention.

store - Specifies if the property is stored in the Lucene index. When a property is stored, it can be retrieved in its original value from the Lucene Document. This is regardless of whether or not the element is indexed. Valid options are:
Store.YES: Consumes more index space but allows projection. See Section 15.1.3.4, "Projection".
Store.COMPRESS: Stores the property as compressed. This attribute consumes more CPU.
Store.NO: No storage. This is the default setting for the store attribute.

index - Describes whether the property is indexed or not. The following values are applicable:
Index.NO: No indexing is applied; the property cannot be found by querying. This setting is used for properties that are not required to be searchable, but are able to be projected.
Index.YES: The element is indexed and is searchable. This is the default setting for the index attribute.

analyze - Determines if the property is analyzed. The analyze attribute allows a property to be searched by its contents. For example, it may be worthwhile to analyze a text field, whereas a date field does not need to be analyzed. Enable or disable the analyze attribute using Analyze.YES or Analyze.NO; the analyze attribute is enabled by default. The Analyze.YES setting requires the property to be indexed via the Index.YES attribute. The following attributes are used for sorting, and must not be analyzed.

norms - Determines whether or not to store index-time boosting information. Valid settings are Norms.YES and Norms.NO; the default for this attribute is Norms.YES. Disabling norms conserves memory; however, no index-time boosting information will be available.

termVector - Describes collections of term-frequency pairs. This attribute enables the storing of the term vectors within the documents during indexing. The default value is TermVector.NO. Available settings for this attribute are:
TermVector.YES: Stores the term vectors of each document. This produces two synchronized arrays, one containing the document terms and the other containing each term's frequency.
TermVector.NO: Does not store term vectors.
TermVector.WITH_OFFSETS: Stores the term vector and token offset information. This is the same as TermVector.YES plus it contains the starting and ending offset position information for the terms.
TermVector.WITH_POSITIONS: Stores the term vector and token position information. This is the same as TermVector.YES plus it contains the ordinal positions of each occurrence of a term in a document.
TermVector.WITH_POSITION_OFFSETS: Stores the term vector, token position and offset information. This is a combination of YES, WITH_OFFSETS, and WITH_POSITIONS.

indexNullAs - By default, null values are ignored and not indexed. However, using indexNullAs permits specifying a string to be inserted as the token for the null value. When using the indexNullAs parameter, use the same token in the search query to search for null values. Use this feature only with Analyze.NO. Valid settings for this attribute are:
Field.DO_NOT_INDEX_NULL: This is the default value for this attribute. This setting indicates that null values will not be indexed.
Field.DEFAULT_NULL_TOKEN: Indicates that a default null token is used. This default null token can be specified in the configuration using the default_null_token property. If this property is not set and Field.DEFAULT_NULL_TOKEN is specified, the string "_null_" will be used as the default.

Warning: When implementing a custom FieldBridge or TwoWayFieldBridge, it is up to the developer to handle the indexing of null values (see the JavaDocs of LuceneOptions.indexNullAs()).

14.1.3. @NumericField

The @NumericField annotation can be specified in the same scope as @Field, and can be applied to Integer, Long, Float and Double properties. The @NumericField annotation accepts the following optional parameters:

forField: Specifies the name of the related @Field that will be indexed as numeric. It is mandatory when a property contains more than one @Field declaration.
precisionStep: Changes the way that the Trie structure is stored in the index. Smaller precisionSteps lead to more disk space usage, and faster range and sort queries. Larger values lead to less space used, and range query performance closer to the range query in normal @Fields. The default value for precisionStep is 4.

@NumericField supports only Double, Long, Integer, and Float. It is not possible to take advantage of a similar functionality in Lucene for the other numeric types, so the remaining types must use string encoding via the default or a custom TwoWayFieldBridge. A custom NumericFieldBridge can also be used; custom configurations require approximation during type transformation. The following example defines a custom NumericFieldBridge.

Example 14.2. Defining a custom NumericFieldBridge

    indexedValue = Long.valueOf( decimalValue.multiply( storeFactor ).longValue() );
https://access.redhat.com/documentation/en-us/red_hat_data_grid/7.0/html/developer_guide/chap-mapping_domain_objects_to_the_index_structure
CC-MAIN-2019-47
en
refinedweb
#include <wx/valnum.h> Validator for text entries used for integer entry. This validator can be used with wxTextCtrl or wxComboBox (and potentially any other class implementing wxTextEntry interface) to check that only valid integer values can be entered into them. This is a template class which can be instantiated for all the integer types (i.e. short, int, long and long long if available) as well as their unsigned versions. By default this validator accepts any integer values in the range appropriate for its type, e.g. INT_MIN..INT_MAX for int or 0..USHRT_MAX for unsigned short. This range can be restricted further by calling SetMin() and SetMax() or SetRange() methods inherited from the base class. When the validator displays integers with thousands separators, the character used for the separators (usually "." or ",") depends on the locale set with wxLocale (note that you shouldn't change locale with setlocale() as this can result in a mismatch between the thousands separator used by wxLocale and the one used by the run-time library). A simple example of using this class: For more information, please see wxValidator Overview. Type of the values this validator is used with. Validator constructor.
https://docs.wxwidgets.org/trunk/classwx_integer_validator.html
CC-MAIN-2019-47
en
refinedweb
A Python utility for baking and extracting Open Badges metadata from images.

Project description

This package contains the utilities needed to "bake" Open Badges metadata into PNG or SVG image files, or extract ("unbake") metadata from such images. This Open Badges Bakery is produced by Concentric Sky.

Installation

pip:

    pip install openbadges_bakery

Command Line Interface

There is a command line interface for baking and unbaking assertion data.

To bake a badge, identify the existing BadgeClass image with input_filename, the desired baked Assertion image filename to be created, and the data to be baked into the image:

    bakery bake [input_filename] [output_filename] --data='{"data": "data"}'

To extract Open Badges data from an image, use the unbake command:

    bakery unbake [input_filename]

The output_filename is optional; provide it if you want the baked data to be written to a file:

    bakery unbake [input_filename] [output_filename]

Python Interface

The bake and unbake functions are available when the package is installed as a Python module.

To bake a badge, pass in an open file as input_file and the string of the badge data you wish to bake into the image. The result is an open TemporaryFile that contains the data:

    from openbadges_bakery import bake
    output_file = bake(input_file, assertion_json_string)

To unbake a badge, pass in an open file as input_file:

    from openbadges_bakery import unbake
    output_file = unbake(input_file)
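A hypothetical end-to-end sketch of the two functions above; the file names (badge.png, baked-badge.png) and the assertion contents are made up for the example, the badge image must already exist on disk, and the exact return types may vary between versions:

    import json
    from openbadges_bakery import bake, unbake

    assertion = json.dumps({"id": "https://example.org/assertions/42"})

    # Bake the assertion into an existing badge image (hypothetical file name).
    with open("badge.png", "rb") as input_file:
        baked_file = bake(input_file, assertion)

    # Write the baked image out, then read the metadata back to check it.
    with open("baked-badge.png", "wb") as out:
        baked_file.seek(0)
        out.write(baked_file.read())

    with open("baked-badge.png", "rb") as baked_input:
        print(unbake(baked_input))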
https://pypi.org/project/openbadges_bakery/
CC-MAIN-2019-47
en
refinedweb
IS-08-ZVB: your search found 1 - 10 of 1384 results.

Kismet - layer 2 wireless network detector, sniffer, and intrusion detection system. Runtime WEP decoding, hidden SSID decloaking. Rating: 4.2/5 (1965 votes). Tags: wireless network detector, sniffer, intrusion detection system, freeware tool.

Fring - make FREE mobile calls, video calls and live chat to other fringsters, GTalk, AIM, Yahoo and others. Make free mobile calls with fring, IM with Skype, MSN Messenger, ICQ, Google Talk, SIP and Twitter contacts. fring is a free mobile application that lets users communicate with friends on popular networks over their mobile phone's internet connection. fring users make free mobile calls, video calls, live chat & more, from their mobile phone with all their friends on fring & other internet services like Skype, MSN Messenger, GoogleTalk, AIM, ICQ, Facebook & Twitter, all through one central, integrated phone book. See the full feature list here. fring is completely free. It's free to download and free to use to make calls, video calls, instant messages and more, all via your mobile phone's internet connection (over 'IP'). fring has millions of users on 1000s of supported mobile devices across approximately 200 countries, and is growing exponentially - adding more than half a million new users every month. Start fringing today! Rating: 4.1/5 (1961 votes). Tags: cellular VoIP, cellular phone application, free cellular VoIP calls, free mobile VoIP, mobile VoIP telephony, mobile application.

RamDisk - create a virtual disk out of RAM. Rating: 3.0/5 (491 votes). Tags: disk, ramdisk, RAM Drive, RAMdisk, virtual disk, cache, disk cache, caching, disk caching, block cache, block caching, file system cache, memory, tips and tricks.

Golden Records Vinyl to CD Converter - convert vinyl to CD. Use this program to convert vinyls to CD or MP3 and rip analog cassette tapes to your computer or CD. Download free software trial. Golden Records is software that helps you to convert your vinyl LP records, analog audio tapes or cassettes to CD or to wav or mp3 files using your computer. * Normalizes the volume of the recordings when converting to CD * Applies DC offset correction when converting analog to mp3 * Phono RIAA eq can be applied in the software, so no pre-amplifier is required; connect a record player directly to the computer * Can convert 78 RPM records playing on a 45 RPM player. Rating: 4.1/5 (1968 votes). Tags: convert vinyl to CD, convert analog audio cassette, tool for convert.

Stratiform - makes tweaking Firefox's looks really simple. The ultimate user-friendly customization add-on for Firefox; puts the power in the average user's hands to customize the browser's look. "Stratiform" is an excellent add-on for Mozilla Firefox 4.0 created by our good friends "SpewBoy" and "SoapyHamHocks". This add-on allows you to customize almost every aspect of the brand new UI of Firefox 4.0. You can customize toolbar buttons, icons, text fields, colors. It also allows you to change tab style and color. You can also customize the orange Firefox button's appearance; you can even change its text. It's a must-have add-on for Mozilla Firefox 4.0 users. Using this add-on is extremely fun. Rating: 2.9/5 (468 votes). Tags: add-on, browser, customize aspect of the browser, makes tweaking Firefox, tips and tricks.

DriverMax - the place where you can download the latest driver updates. Any driver, from any manufacturer: Ericsson, Toshiba, Western Digital, Acer, Agere, Asus, ATI, Broadcom, Canon, Compaq, Creative Labs, Dell, Epson, Gigabyte, Hewlett-Packard, IBM, Kye, Lenovo, Linksys, Marvell, Motorola, NEC, Nokia, Okidata, PC Chips, Pinnacle Systems, Ricoh, Samsung, Sceptre, SiS Corporation (Silicon Integrated Systems), Texas Instruments, VIA Technologies, Yamaha ...and other manufacturers. Rating: 4.2/5 (1922 votes). Tags: latest driver update, free download latest drivers, tool, analyze.

Process Explorer - information about handle leaks; freeware application. Rating: 4.2/5 (1948 votes).
http://www.bookmarksuri.com/caut/IS-08-ZVB/
CC-MAIN-2018-05
en
refinedweb
Formatting Numbers

Hi guys, I'm new to developing software with Qt. I would like to get some numbers formatted as strings with 2 digits for every number, so that numbers with a value < 9 are formatted as "09". I'm using the lines of code below but they don't give me back what I would like to obtain, and I can't understand where the mistake is. Could you please help me and check my code?

    uint hours=1, minutes=2, seconds=3;
    QString str = QString("%1:%2:%3")
        .arg(hours, 1, 10)
        .arg(minutes, 2, 10, '0')
        .arg(seconds, 2, 10, '0');
    ui->chronoDisplay->setText(str); // chronoDisplay is a QLabel
    // I'm expecting that str has got this format: "1:02:03"

Thank you in advance.

I guess that this is getting in your way: "If fillChar is '0' (the number 0, ASCII 48), the locale's zero is used." (from the QString documentation). I am wondering why you are not using QTime to format your time:

    QTime t(hours, minutes, seconds);
    ui->chronoDisplay->setText(t.toString("h:mm:ss"));

Edit: fixed second line of above code (replaced QTime:: with t.)

Just because I didn't know this possibility... Now that I know it, I'm going to try to use it! Thank you for the advice!

Hi Andre, I've followed your advice but I've got the same result. I had forgotten to say that I want this application to run under the Symbian 3 OS. Could this be the problem?

Could be. Could you check the output of QLocale::systemLocale().zeroDigit()? If that is a space, then I guess you have your culprit...

I don't think so, because both of the techniques I've followed have given me back a string filled with "0"; the problem is that the number of 0s returned is more than expected! Even though I think the problem is different, how can I set the zeroDigit() property to be sure that it returns a '0'?

You can't set that value from inside Qt. It is set by the system. However, could you show us what your output is now, then? I was under the impression from your previous posts that you did not get enough 0's. Now you tell us you get too many. So how does your output look?

I would like to show you an image which shows the output I'm receiving on the device simulator, but how can I link this image?

Just put the image in some public place (I use my public DropBox folder), and you can link it into your message using the small picture icon in the bar above the editor window.

Ok, this is the result I'm getting from the simulator after the code is processed. !(nokia simulator image)! This is the URL of the image:

Quoting Andre's earlier code:

    QTime t(hours, minutes, seconds);
    ui->chronoDisplay->setText(QTime::toString("h:mm:ss"));

Why did this code work just one time, and now it doesn't work again?

This is the error I'm receiving when I try to use the code above:

    C:\Documents and Settings\tlp31\Documenti\C++_Project\chrono-build-simulator..\chrono\mainwindow.cpp:120: error: cannot call member function 'QString QTime::toString(const QString&) const' without object

and this is the code:

    QTime t(hours, minutes, seconds);
    ui->chronoDisplay->setText(QTime::toString("h:mm:ss"));

You need to call t::toString, not QTime::toString in your second line. The error message describes that quite clearly, I think. Note: I now see that I wrote it wrong in my example. My apologies.

I'm sorry, but it still doesn't work. The compiler says: [..] 121: error: 't' is not a class or namespace. How do I initialize a QTime class properly?

    QTime time(hours, minutes, seconds);
    ui->chronoDisplay->setText(time.toString("h:mm:ss"));

This should work.

Yes, it works, and with this modification the UI is also working properly. Thank you Andre for your support in this matter.
https://forum.qt.io/topic/15554/formatting-numbers
CC-MAIN-2018-05
en
refinedweb
I am trying to practice TDD. My understanding is that TDD should go like this

I use the PosExplorer.GetDevices method to look for printers available on the network. However, the DeviceInfo object in the DeviceCollection returned by the GetDevices() method does not include information on LogicalNames, HardwareId, or HardwarePath. In my app, I need to discover available printers and create an instance of those printers. I am using Epson T88IV printers.

So, I've installed libusb and pyUSB on my OS X Lion (10.7.3) machine, and I have the following script running:

    import usb
    import time

    if __name__ == "__main__":
        while True:
            busses = usb.busses()
            print busses[0]
            print busses[0].__dict__
            time.sleep(2)

I have a single USB device plugged in: a

I was wondering whether the Python library has a function that returns a file's character encoding by looking for the presence of a BOM. I've already implemented something, but I'm just afraid I might be reinventing the wheel.

Update (based on John Machin's correction):

    import codecs

    def _get_encoding_from_bom(fd):
        first_bytes = fd.read(4)

Function discovering encoding from

What tools or techniques do you recommend for discovering C# extension methods in code? They're in the right namespace, but may be in any file in the solution. Specifically: I do have ReSharper (v4), so if that has a mechanism I'm not aware of - please share!

I'm explicitly NOT referring to in-app purchases. Is it possible to discover the purchase date of the application itself on iOS? I'd like to reward early purchasers. Rewarding early users (those who launched the app) is not the way to go. I'd like to reward people who bought the game, for example, between Jan 1st and Jan 31st, even the portion of customers who made

I am trying to programmatically discover the first and last names of the iPhone user. Is this possible at all? Calling this ... getpwuid( getuid() )->pw_gecos == "Mobile User" ... alas. Iterating over the address book finds all address book records, but doesn't distinguish between the device owner and anyone else (that I can tell). Given that th

Can anyone think of a method which would allow for client-side decryption of a publicly available file without the client being able to determine the decryption key? Of course, if the key was in a JavaScript variable then it would be readily available to the user. Any ideas?

The Ruby (and RoR) community publishes a large number of gems. But more often than not, using these gems requires a good amount of effort, especially if one is new to Ruby. It would be nice if Ruby experts (rockstars) shared the best approaches to utilize inadequately documented gems. Thanks--arsh
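Returning to the BOM question above, here is a minimal sketch of how such a check might look; the function name and the exact set of BOMs handled are illustrative rather than a standard-library API:

    import codecs

    def encoding_from_bom(first_bytes):
        """Guess an encoding from the byte-order mark at the start of a file, if any."""
        # Check the longer UTF-32 BOMs first so they are not mistaken for UTF-16.
        boms = [
            (codecs.BOM_UTF32_LE, "utf-32-le"),
            (codecs.BOM_UTF32_BE, "utf-32-be"),
            (codecs.BOM_UTF8, "utf-8-sig"),
            (codecs.BOM_UTF16_LE, "utf-16-le"),
            (codecs.BOM_UTF16_BE, "utf-16-be"),
        ]
        for bom, name in boms:
            if first_bytes.startswith(bom):
                return name
        return None

    with open("example.txt", "rb") as fd:   # "example.txt" is a placeholder path
        print(encoding_from_bom(fd.read(4)))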
http://bighow.org/tags/discovering/1
CC-MAIN-2018-05
en
refinedweb
Often, you will find the need to create custom order parameters for your path sampling simulation. A custom order parameter is written as a new class which sub-classes OrderParameter and which defines two methods:

__init__ method. This method will be called when PyRETIS is setting up a new simulation and it will be fed variables from the PyRETIS input file.

calculate method. This method carries out the actual evaluation of the order parameter and takes the current System object as its argument.

Let us see how this can be done in practice. To be concrete, we will consider an order parameter defined as follows: the distance between a particular particle and a plane positioned somewhere along the x-axis.

In order to define a new class to use with PyRETIS, we first import some additional libraries:

    import numpy as np
    from pyretis.orderparameter import OrderParameter

And we set up a new class, representing our new order parameter:

    class PlaneDistanceX(OrderParameter):
        """A positional order parameter.

        This class defines a very simple order parameter which is just
        the position of a given particle.
        """

        def __init__(self, index, plane_position):
            """Initialise the order parameter."""
            ...

In the __init__ method we store the given particle index and the plane position, and we give a description text and a name when we initialise the parent class (the line calling super().__init__). This is simply following the convention defined by OrderParameter. In the calculate method we make use of the System class object in order to access the positions. Finally, we are returning the order parameter; note that we return this as a negative number (a sketch of such a calculate method is given at the end of this section).

Suppose instead that we are using an external engine (e.g. GROMACS) and we just want to reference the file containing the current configuration, and pass this file reference on to another external library or program. Here, we will make use of some external tools to obtain the order parameter; for simplicity, let us here make use of the mdtraj library. First, at the top of your file, add the import of this library:

    import numpy as np
    import mdtraj
    from pyretis.orderparameter import OrderParameter

In order to access the file containing the configuration, we make use of the ParticlesExt.config attribute. Add the following method to the PlaneDistanceX class:

    def calculate(self, system):
        """Calculate the order parameter."""
        filename = system.particles.config[0]
        traj = mdtraj.load(filename)
        return some_function_to_obtain_order(traj)

We make use of the new order parameter by adding the Orderparameter section to the input file:

    Orderparameter
    --------------
    class = PlaneDistanceX
    module = orderparameter1.py
    index = 0
    plane_position = 1.0

As you can see, we specify the class to use (PlaneDistanceX), the module file in which it is defined (orderparameter1.py), and the settings index and plane_position, which are passed on to the __init__ method of the class.
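Returning to the internal-engine version of the PlaneDistanceX class above, a minimal sketch of what its calculate method could look like is given below. The attribute access via system.particles.pos and the sign convention are assumptions made for illustration, and are not guaranteed to match the original PyRETIS example:

    def calculate(self, system):
        """Calculate the order parameter.

        This sketch assumes that particle positions are available as
        system.particles.pos and that the x-coordinate is column 0.
        """
        pos = system.particles.pos[self.index]   # position of the selected particle
        # Signed distance along the x-axis between the particle and the plane,
        # returned with a negative sign as noted in the text above.
        return -(pos[0] - self.plane_position)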
http://pyretis.org/user/orderparameters.html
CC-MAIN-2018-05
en
refinedweb
Hello everyone,

So, I am not quite sure if some of you have had this problem, but it seems that SEFAZ SP has already deactivated the webservice for the status check (consulta de status) for layout 2.00. This means that even the NF-es issued with layout 2.00 have to be checked against the new webservice. For that, we have created a new Note to deal with this. Please pay attention that these changes apply to all regions (CUFs). This happened for CT-e as well. So, after implementing the following Note, all requests will be sent through the namespace 008 to the new webservice.

2151214 – Usage of NF-e Status Check (layout 3.1) for older versions

Also, please note that Support Package 20, which should be released in the middle of April, will already contain this correction.

Happy Easter!

Tiago.
https://blogs.sap.com/2015/04/02/status-check-consulta-status/
CC-MAIN-2018-05
en
refinedweb
net use F: \\servername\path password /USER:domain\username

That is assuming you are ok with putting the credentials in the batch file. If that's not your style, then you could use VBScript to run the same net use command in the background while popping up windows asking for username and password, as suggested here:

DriveMapAdd("T:", "\\server2\share", 0, "domainname\username", "password")

Where "T" is the letter to map, "\\server2\share" is the server name and share, the '0' makes it persistent, and the rest explains itself. See this page for complete usage:

AutoIt rocks for this type of task. One word of caution: safeguard the script, as it will contain a clear-text copy of the user's password.

I certainly can write a VB script that would have input fields for user name and then password. I just thought someone may know a way to utilize the standard Windows prompt that the users are familiar with.

#include <C:\Program Files\AutoIt3\Include\GuiC
#include <C:\Program Files\AutoIt3\Include\Edit
$gui = GuiCreate("Authenticate",1
GUICtrlCreateLabel("Userna
$username = GUICtrlCreateInput("",10,3
GUICtrlCreateLabel("Passwo
$password = GUICtrlCreateInput("",10,9
$go = GuiCtrlCreateButton("OK",1
$cancel = GuiCtrlCreateButton("Cance
GUISetState()
Do
    $msg = GUIGetMsg()
    If $msg = $go Then
        DriveMapAdd ("Y:","\\aecc\Zdrive",0,$u
        Exit
    EndIf
    If $msg = $cancel Then
        Exit
    EndIf
Until GUIGetMsg() = $GUI_EVENT_CLOSE
GuiDelete($gui)
https://www.experts-exchange.com/questions/28289457/Mapping-drive-to-alternate-system.html
CC-MAIN-2018-05
en
refinedweb
On Mon, Feb 07, 2011 at 10:54:24AM -0200, Henrique de Moraes Holschuh wrote: > On Mon, 07 Feb 2011, Jonathan Nieder wrote: > >. > > 1996 was a long time ago, the world was much smaller back then. It was > still a very very poor choice of naming, and it should have been named > ax25node from day one. Similar AX.25 tools "call" and "listen" were renamed in 2007 to ax* (package ax25-apps), because those names were too generic (according to the changelog). I think renaming the node binary to axnode is reasonable and consistent with this, but I don't think the nodejs program should be using that name either. > If push comes to shove, nobody is going to try to force _them_ to give > up that name. You can get the package itself renamed to ax25node, and > have the required "node" transitional package in squeeze+1, so as to > have no "node" package in squeeze+2, but rename the executable itself? > not likely. We did it with call and listen, both used from the command line more frequently, so it's not out of the question. > 4. as the one with the weaker claim, node.js can move its executable out > of the generic namespace or rename its executable to something else. I think it should do that anyway. Hamish
https://lists.debian.org/debian-devel/2011/02/msg00175.html
CC-MAIN-2018-05
en
refinedweb
an approximately area of the graph (1) work out the desired probability distribution function such that the area under a portion of the curve is equal to the probability of a value being randomly generated in that range, then (2) integrate the probability distribution to determine the cumulative distribution, then (3) invert the cumulative distribution to get the quantile function, then (4) transform your uniformly distributed random data by running it through the quantile function, and you’re done. Piece of cake! Next time: a simple puzzle based on an error I made while writing the code above. (*) A commenter correctly points out that the set of real values representable by doubles is not uniformly distributed across the range of 0.0 to 1.0, and that moreover, the Random class is not documented as guaranteeing a uniform distribution. However, for most practical applications it is a reasonable approximation of a uniform distribution. (**) Using the awesome Microsoft Chart Control, which now ships with the CLR. It was previously only available as a separate download. "This graph has exactly the same information content as the "histogram limit" probability distribution; its just a bit easier to read." I don't get this comment, not in its current absolute form. Yes, if the information you're trying to get at is the probability of getting a value less than some given value, the integral graph is easier to read. But if the information you're trying to get at is the relative predominance of any given value, the original bell-curve graph is easier to read. That's the whole point of graphing/visualization. It reveals the data in ways that can be intuitively understood. But many types of data (including random samples) have a variety of interesting facets that are useful to understand. We select a type of graph of the data that best shows us the facet of interest. Each type of graph is only "easier to read" inasmuch as it fits the facet of interest. Pick a different facet, and a different type of graph is "easier to read". Claiming that the integral graph is easier to read than the bell-curve graph presupposes that what we care about is the cumulative distribution. If in the article we had started with that statement — that we care about the cumulative distribution — then the comment in question could have been made in a context-dependent way. But in its current form, with the absolute generalization, it seems prejudiced against bell-curve graphs. 🙁 Good post Eric, I enjoyed reading it "In fact, no matter how many buckets of uniform size you make, if you have a large enough sample of random numbers then each bucket will end up with approximately the same number of items in it." There is a theorem called the law of large numbers, which says that the average result of an experiment will go closer to the expected value if one performs more trials. In other words if I do just a few experiments the histogram of a uniform distribution will look like a curve with several sharp bends. If I do infinitely many experiments the histogram will look like a perfect straight line parallel to a horizontal axis. That settles it – next time I'm in near Seattle, I'll definitely have to get you some of these: flowingdata.com/…/plush-statistical-distribution-pillows This is standard work in applying a lookup transformation (LUT) to a grayscale image. LUT's have been in image processing since the 1980s. Just about any colored image of Saturn uses one since the original images are in grayscale. 
Applying the inverse CDF of the distribution of interest to uniform variates is a very generic method, quite robust. Unfortunately, some quite common distributions have inverse CDFs that are expensive to compute. In these cases, "acceptance methods" (aka acceptance-rejection) can sometimes be faster, although they sound inefficient. In fact, the familiar Marsaglia / Box-Muller methods of generating standard normal variates from uniform ones are acceptance methods. See Wikipedia for details. The proposal to simulate Cauchy variables sounds dodgy to me. The Cauchy distribution has no defined moments – from the mean on up, they don't exist. But of course, any finite sample from the Cauchy distribution does not have this property. Speaking generally, simulations based on the Cauchy distribution will tend not to be very similar when started from different seeds.

The Microsoft Codename "Cloud Numerics" Lab (…/numerics.aspx) has a good set of statistical functions for .NET including the inverse cumulative density function for many distributions. Despite the name, you don't have to run it in the cloud.

I'm glad I can just #include <random> in C++11 now for all this. Is there not already something like a System.Numerics.Random namespace? Consider me surprised that C++'s anemic standard library has something the BCL doesn't, if so.

@Simon Personally I'd think anyone needing, say, a Cauchy distribution will almost certainly need some additional statistical libraries anyhow – and then these functions could easily be added there. It just doesn't seem like an especially "standard" thing to do. Now I'm certainly not against vast standard libraries, but only the C++ guys could add support for half a dozen different distributions, but still not include an XML parser.

@voo: Not to get too far off topic, but unlike the BCL, the C++ standard is entirely maintained by volunteers. If you'd like to volunteer to specify an XML parser and represent your ideas at standard committee meetings for 5+ years to make sure it gets into the standard, please do so. Unfortunately, this time around, no one was willing to do that. There is an effort underway to build a much larger set of standard libraries for C++. Herb Sutter talked about this a bit in his Day 2 keynote at the recent Going Native 2012 conference: channel9.msdn.com/…/C-11-VC-11-and-Beyond. Take a look at the video of his day 2 session.

»If I do infinitely many experiments the histogram will look like a perfect straight line parallel to a horizontal axis.« Only with very high probability. 😉

I was surprised to hear you say that the .NET pseudo-random generator creates a uniform distribution of doubles. Neither the Random.NextDouble() online documentation (msdn.microsoft.com/…/system.random.nextdouble.aspx), nor the IntelliSense comments make this guarantee. Since the valid values for a double type are _not_ uniformly distributed, I saw no reason for Random.NextDouble() to be. If developers can rely on a uniform distribution, it would be nice for the documentation to reflect that.

Cool, Thank you!!

@Mashmagar The documentation for the constructor of `Random` (don't ask me why they put it there, instead of on the method itself) states: > The distribution of the generated numbers is uniform; each number is equally likely to be returned. But the documentation only describes what the implementers wanted to do. The actual implementation is so bad that, depending on what you do, you get significant biases.

* NextDouble has only 2 billion (2^31) different return values. There are around 2^53 (too lazy to look up the exact value, might be wrong by a factor of 2) different doubles in the interval between 0 and 1.
* Next(maxValue): For some carefully chosen maxValues, it's *very* biased.

For the full story, see my rant at stackoverflow.com/…/445517

@Code in Chaos: I did analyze the implementation. It is almost a correct implementation of an algorithm designed by Donald Knuth. (Actually Knuth's algorithm was a block-based algorithm, which was (correctly) converted to a stream random generator by the book Numerical Recipes in C (2nd Ed.).) There is one mistake in the implementation which will drastically reduce the period of the RNG. They also threw away 56 more values than Knuth specified for algorithm startup, but that is harmless. The algorithm proper returns an integer in the range [0,int.MaxValue]. To get a Double they multiply it by 1/int.MaxValue. That creates a fairly reasonable approximation to a uniform double in the range [0,1]. The problem is how they generate integers. They literally do `(int)(Sample()*maxValue)` where Sample() is equivalent to NextDouble(). That is pretty obviously a terrible way to generate a random Integer, and is what results in most of the bias. Finally, the original implementation of Random was technically buggy, since NextDouble (and Next) were specified to return [0,1) and [0,maxValue) respectively, but they actually returned [0,1] and [0,maxValue]. Note that the maxValue return for int only occurred very rarely, 1 in int.MaxValue times on average. However, the way they fixed this bug results not only in additional bias (although very slight), but also completely ruins the period of the generator. That was especially boneheaded, since there is a trivial change they could have made instead (requiring the addition of only 4 characters to the source code, not counting whitespace) that would cause neither bias nor damage to the generator's period.
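To make the quantile-function recipe from the post concrete, here is a small sketch, written in Python rather than C# purely for brevity, that turns uniform variates into exponentially distributed ones. The exponential distribution is used only because its CDF inverts neatly, and the rate parameter lam is an arbitrary example value:

    import math
    import random

    def exponential_from_uniform(u, lam=1.5):
        """Quantile (inverse CDF) of the exponential distribution with rate lam.

        CDF:      F(x) = 1 - exp(-lam * x)      for x >= 0
        Inverse:  F^-1(u) = -ln(1 - u) / lam    for 0 <= u < 1
        """
        return -math.log(1.0 - u) / lam

    # Step 4 of the recipe: push uniform samples through the quantile function.
    samples = [exponential_from_uniform(random.random()) for _ in range(100000)]
    print(sum(samples) / len(samples))  # should be close to 1/lam, i.e. about 0.667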
https://blogs.msdn.microsoft.com/ericlippert/2012/02/21/generating-random-non-uniform-data-in-c/
CC-MAIN-2016-40
en
refinedweb
Michal Vitecek wrote:
> Peter Hansen wrote:
> >".
>
> i must say that this behaviour of "global" variables is IMHO pretty
> uncommon and non-intuitive and i wonder what was the reason the
> developers chose this way:
> 1) either that global variables have module-only scope

This is definitely the case. Check the language definition, particularly this section:

> 2) method even though it is imported into different scope still
> looks for data in module in which it was defined

The method is *not* imported into a different scope. If you think that, you still need to learn about "binding" in Python. The method is defined in its original scope, which is inside the "imported" module. You cannot change that. What you are doing is creating a *local* name, bound to the function in the "imported" module (but not affecting it in any way), and which just happens to have the same name as that function. The function is not changed, and is completely unaware of the fact that you have created another name bound to it, so it *must* look in its own module's global namespace for the global name.

In any case, aside from all this non-intuitive behaviour (but as a newbie to a new language, "unexpected" might be a better word for you to use, since without an understanding of how the language works internally you probably can't have valid intuitions about it at all), I have to agree with the other comments which point out "this is definitely not the right way to do things". You should be adopting a more modular approach, and you can definitely do this without any use of global variables.

-Peter
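A minimal two-module sketch of the binding behaviour described above; the module and names (counter_mod, count, bump) are made up for illustration:

    # counter_mod.py
    count = 0            # module-level ("global") name; its scope is counter_mod only

    def bump():
        global count     # always refers to counter_mod.count, wherever bump is called from
        count += 1

    # main.py
    from counter_mod import bump, count   # binds local names to the function and to the int 0

    bump()
    bump()
    print(count)                 # 0 -- the local name still points at the original object
    import counter_mod
    print(counter_mod.count)     # 2 -- bump() updated the global in its own module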
https://mail.python.org/pipermail/python-list/2003-January/200126.html
CC-MAIN-2016-40
en
refinedweb
Issue Type: Task Created: 2009-11-19T06:42:38.000+0000 Last Updated: 2010-01-29T18:04:57.000+0000 Status: Resolved Fix version(s): - 1.10.0 (27/Jan/10) Reporter: Matthew Weier O'Phinney (matthew) Assignee: Matthew Weier O'Phinney (matthew) Tags: - Zend_Loader Related issues: Attachments: The autoloader needs to be updated to follow the guidelines outlined by the PHP Interoperability Group, per the following page:… This will also provide us with an autoloader capable of loading code using actual PHP namespaces, though we will likely wait to push full support until we have a better idea of how the resource autoloader will work in 2.0. Posted by Glen Ainscow (darkangel) on 2010-01-29T13:28:34.000+0000 @Matthew, This was implemented in 1.10 (for the basic autoloader at least) -- shouldn't this ticket be closed? Posted by Matthew Weier O'Phinney (matthew) on 2010-01-29T18:04:57.000+0000 Added in 1.10.0.
https://framework.zend.com/issues/browse/ZF-8339?actionOrder=desc
CC-MAIN-2016-40
en
refinedweb
dendropy.model.coalescent: The Coalescent

Functions, classes, and methods for working with Kingman's n-coalescent framework.

dendropy.model.coalescent.coalesce_nodes(nodes, pop_size=None, period=None, rng=None, use_expected_tmrca=False)

Returns a list of nodes that have not yet coalesced once period is exhausted. This function will draw a coalescence time, t, from an exponential distribution with a rate of choose(k, 2), where k is the number of nodes. If period is given and if this time is less than period, or if period is not given, then two nodes are selected at random from nodes, and coalesced: a new node is created, and the two nodes are added as child_nodes to this node with an edge length such that the total length from tip to the ancestral node is equal to the depth of the deepest child + t. The two nodes are removed from the list of nodes, and the new node is added to it. t is then deducted from period, and the process repeats. The function ends and returns the list of nodes once period is exhausted or if any draw of t exceeds period, if period is given, or when there is only one node left.

As each coalescent event occurs, all nodes have their edges extended to the point of the coalescent event. In the case of constrained coalescence, all uncoalesced nodes have their edges extended to the end of the period (coalesced nodes have the edges fixed by the coalescent event in their ancestor). Thus multiple calls to this method with the same set of nodes will gradually 'grow' the edges, until all the nodes coalesce. The edge lengths of the nodes passed to this method thus should not be modified or reset until the process is complete.

dendropy.model.coalescent.constrained_kingman_tree(pop_tree, gene_tree_list=None, rng=None, gene_node_label_fn=None, num_genes_attr='num_genes', pop_size_attr='pop_size', decorate_original_tree=False)

Given a population tree, pop_tree, this will return a pair of trees: a gene tree simulated on this population tree based on Kingman's n-coalescent, and a population tree with the additional attribute 'gene_nodes' on each node, which is a list of uncoalesced nodes from the gene tree associated with the given node from the population tree.

pop_tree should be a DendroPy Tree object, or an object of a class derived from this, with the attribute num_genes, the number of gene samples from each population in the present. Each edge on the tree should also have the attribute named by pop_size_attr; pop_size_attr is the attribute name of the edges of pop_tree that specify the population size. By default it is pop_size. This should specify the effective haploid population size; i.e., the number of genes in the population: 2 * N in a diploid population of N individuals, or N in a haploid population of N individuals. If pop_size is 1 or 0 or None, then the edge lengths of pop_tree are taken to be in haploid population units; i.e. where 1 unit equals 2N generations for a diploid population of size N, or N generations for a haploid population of size N. Otherwise the edge lengths of pop_tree are taken to be in generations.

If gene_tree_list is given, then the gene tree is added to the tree block, and the tree block's taxa block will be used to manage the gene tree's taxa.

gene_node_label_fn is a function that takes two arguments (a string and an integer, respectively, where the string is the containing species taxon label and the integer is the gene index) and returns a label for the corresponding gene node.

If decorate_original_tree is True, then the list of uncoalesced nodes at each node of the population tree is added to the original (input) population tree instead of a copy.

Note that this function does very much the same thing as contained_coalescent(), but provides a very different API.

dendropy.model.coalescent.contained_coalescent_tree(containing_tree, gene_to_containing_taxon_map, edge_pop_size_attr='pop_size', default_pop_size=1, rng=None)

Returns a gene tree simulated under the coalescent contained within a population or species tree.

containing_tree - The population or species tree. If edge_pop_size_map is not None, and the population sizes given are non-trivial (i.e., >1), then edge lengths on this tree are in units of generations. Otherwise edge lengths are in population units; i.e. 2N generations for diploid populations of size N, or N generations for haploid populations of size N.

gene_to_containing_taxon_map - A TaxonNamespaceMapping object mapping Taxon objects in the containing_tree TaxonNamespace to corresponding Taxon objects in the resulting gene tree.

edge_pop_size_attr - Name of the attribute of edges that specifies population size. By default this is "pop_size". If this attribute does not exist, default_pop_size will be used. The value for this attribute should be the haploid population size or the number of genes; i.e. 2N for a diploid population of N individuals, or N for a haploid population of N individuals. This value determines how branch length units are interpreted in the input tree, containing_tree. If it is a biologically-meaningful value, then branch lengths on the containing_tree are properly read as generations. If not (e.g. 1 or 0), then they are in population units, i.e. where 1 unit of time equals 2N generations for a diploid population of size N, or N generations for a haploid population of size N. Otherwise time is in generations. If this argument is None, then population sizes default to default_pop_size.

default_pop_size - Population size to use if edge_pop_size_attr is None or if an edge does not have the attribute. Defaults to 1.

The returned gene tree will have the following extra attributes:

pop_node_genes - A dictionary with nodes of containing_tree as keys and a list of gene tree nodes that are uncoalesced as values.

Note that this function does very much the same thing as constrained_kingman(), but provides a very different API.

dendropy.model.coalescent.discrete_time_to_coalescence(n_genes, pop_size=None, n_to_coalesce=2, rng=None)

A random draw from the "Kingman distribution" (discrete time version): the time to go from n_genes genes to n_genes-1 genes in a discrete-time Wright-Fisher population of pop_size genes; i.e. the waiting time until n_genes lineages coalesce in a population of pop_size genes.

dendropy.model.coalescent.expected_tmrca(n_genes, pop_size=None, n_to_coalesce=2)

Expected (mean) value for the Time to the Most Recent Common Ancestor of n_to_coalesce genes in a sample of n_genes drawn from a population of pop_size genes.

dendropy.model.coalescent.extract_coalescent_frames(tree, ultrametricity_precision=1e-05)

Returns a list of tuples describing the coalescent frames on the tree. That is, each element in the list is a tuple pair where: the first element of the pair is the number of separate lineages remaining on the tree at the coalescence event, and the second element of the pair is the time between this coalescence event and the earlier (more recent) one.

dendropy.model.coalescent.log_probability_of_coalescent_frames(coalescent_frames, haploid_pop_size)

Under the classical neutral coalescent (Kingman 1982a, 1982b), the waiting times between coalescent events in a sample of $k$ alleles segregating in a population of (haploid) size $N_e$ are distributed exponentially with a rate parameter of $\frac{{k \choose 2}}{N_e}$:

$\Pr(T) = \frac{{k \choose 2}}{N_e} e^{- \frac{{k \choose 2}}{N_e} T}$,

where $T$ is the length of (chronological) time in which there are $k$ alleles in the sample (i.e., for $k$ alleles to coalesce into $k-1$ alleles).

dendropy.model.coalescent.log_probability_of_coalescent_tree(tree, haploid_pop_size, ultrametricity_precision=1e-05)

Wraps up extraction of coalescent frames and reporting of probability.

dendropy.model.coalescent.mean_kingman_tree(taxon_namespace, pop_size=1, rng=None)

Returns a tree with coalescent intervals given by the expected times under Kingman's neutral coalescent.

dendropy.model.coalescent.node_waiting_time_pairs(tree, ultrametricity_precision=1e-05)

Returns a list of tuples of (node, coalescent interval time) on the tree. That is, each element in the list is a tuple pair where: the first element of the pair is an internal node representing a coalescent event on the tree, and the second element of the pair is the time between this coalescence event and the earlier (more recent) one.

dendropy.model.coalescent.pure_kingman_tree(taxon_namespace, pop_size=1, rng=None)

Generates a tree under the unconstrained Kingman's coalescent process.

dendropy.model.coalescent.pure_kingman_tree_shape(num_leaves, pop_size=1, rng=None)

Like dendropy.model.pure_kingman_tree, but does not assign taxa to tips.

dendropy.model.coalescent.time_to_coalescence(n_genes, pop_size=None, n_to_coalesce=2, rng=None)

A random draw from the "Kingman distribution" (continuous time version): the time to go from n_genes genes to n_genes-1 genes in a continuous-time Wright-Fisher population of pop_size genes; i.e. the waiting time until n_genes lineages coalesce in a population of pop_size genes.

Given the number of gene lineages in a sample, n_genes, and a population size, pop_size, this function returns a random number from an exponential distribution with a rate of ${pop\_size \choose 2}$. pop_size is the effective haploid population size; i.e., the number of genes in the population: 2 * N in a diploid population of N individuals, or N in a haploid population of N individuals. If pop_size is 1 or 0 or None, then time is in haploid population units; i.e. where 1 unit of time equals 2N generations for a diploid population of size N, or N generations for a haploid population of size N. Otherwise time is in generations.

The coalescence time, or the waiting time for the coalescence, of two gene lineages evolving in a population with haploid size $N$ is an exponentially-distributed random variable with a rate of $N$ and an expectation of $\frac{1}{N}$. The waiting time for coalescence of any two gene lineages in a sample of $n$ gene lineages evolving in a population with haploid size $N$ is an exponentially-distributed random variable with a rate of ${N \choose 2}$ and an expectation of $\frac{1}{{N \choose 2}}$.
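A short usage sketch tying a few of these functions together; the taxon labels and population size are arbitrary example values, and a DendroPy 4.x API is assumed:

    import dendropy
    from dendropy.model import coalescent

    # Simulate a coalescent tree for five sampled genes (arbitrary labels).
    taxa = dendropy.TaxonNamespace(["a", "b", "c", "d", "e"])
    tree = coalescent.pure_kingman_tree(taxon_namespace=taxa, pop_size=10000)
    print(tree.as_string(schema="newick"))

    # Score the same tree under the coalescent with the same haploid population size.
    log_prob = coalescent.log_probability_of_coalescent_tree(tree, haploid_pop_size=10000)
    print(log_prob)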
https://pythonhosted.org/DendroPy/library/coalescent.html
CC-MAIN-2016-40
en
refinedweb
Analog for Mac OS X - The Mac OS X port of analog, which has been specially rewritten into ObjC code to take advantage of Cocoa. The rewrite will also incorporate new features not found in the console version. Includes a graphical interface and support for XML property lists.

Battle of the Sexes for Quake - The cult-classic class-based capture the flag game for quake2 and quake3. The modifications from the original source release are sublicensed under the GPL except where it may conflict with the original licensing terms from Id Software.

Download Manager - Download Manager is a set of PHP scripts which enable you to add a file download section to your web site.

FloodWorld III - This is the long-awaited new C++ FloodWorld, dubbed FloodWorld III. The original was in Perl and version II was unreleased. This is a new endeavor encapsulating highly efficient code for IRC network flood protection.

Free Standards Group - Free Standards Group

Golf League Manager - This Web-based (PHP) software package handles all aspects of starting, organizing, and scheduling a golf league. Partial list of features: survey potential members, score posting, USGA handicap tracking, scheduling, and points-based competitions.

Hololog - Hololog is a Web reporting tool for the Apache common log format. Use Hololog to understand how people use your Web site and to improve its organisation and content.

Loggerblogger Uberblog - Loggerblogger uberblog is blogging software that provides a clean URL namespace, standards-compliance, and various advanced features hard to find in other blogging server software.

MergeRight - MergeRight is a graphical tool for comparing and merging multiple versions or revisions of a text or source file, with or without a specified common ancestor, with recommendations for passages in conflict, easy inclusion and exclusion, and live editing.

NES Open Switch - The 'NES Open Switch' network protocol stack.

OpenVault Media Management System - OpenVault is middleware that helps applications manage removable media. In a Storage Area Network (SAN), it arbitrates access by applications to removable media drives. It is the basis for the IEEE 1244 standard: Media Management System.

Paracelsus - A Medical Information System (MIS) application server based on Web Services and J2EE, plus administrative tools (plugins for Eclipse, ...).

Pub-Monkey - Some information: Pub-Monkey is written in C++ with CommonC++2 and will run only on POSIX-conformant OSs, but not on Windows (no CC++2). Java-Monkey is written in Java and needs FtpBean ()

Qmail POP-toaster setup guide - This is a documentation project that is useful for people who are looking to set up an e-mail system with an easy-to-use client webmail front end and support for name-based virtual domains. It proposes a Qmail-Vmailmgr-Courier-SquirrelMail tie-in as the b

RamOS OS Project - RamOS is a highly optimised platform based on the Linux kernel. It is the foundation for other projects.

RealAI - A conscious Artificial Intelligence running on peer-to-peer distributed computing, with the ability to run on a single PC using a next-generation processor and PCs...

SQL Filesystem - Allows mounting different SQL servers as an NFS server
https://sourceforge.net/directory/os%3Aposix/license%3Aother/audience%3Asysadmins/?page=6
CC-MAIN-2016-40
en
refinedweb
This chapter describes the support in Pro*C/C++ for user-defined objects. This chapter contains the following topics: In addition to the Oracle relational datatypes supported since Oracle8, Pro*C/C++ supports user-defined datatypes, which are: An object type is a user-defined datatype that has attributes, the variables that form the datatype defined by a CREATE TYPE SQL statement, and methods, functions and procedures that are the set of allowed behaviors of the object type. We consider object types with only attributes in this guide. For example: --Defining an object type... CREATE TYPE employee_type AS OBJECT( name VARCHAR2(20), id NUMBER, MEMBER FUNCTION get_id(name VARCHAR2) RETURN NUMBER); / -- --Creating an object table... CREATE TABLE employees OF employee_type; --Instantiating an object, using a constructor... INSERT INTO employees VALUES ( employee_type('JONES', 10042)); LONG, LONG RAW, NCLOB, NCHAR and NCHAR Varying are not allowed as datatypes in attributes of objects. REF (short for "reference") was also new in Oracle8. It is a reference to an object stored in a database table, instead of the object itself. REF types can occur in relational columns and also as datatypes of an object type. For example, a table employee_tab can have a column that is a REF to an object type employee_t itself: CREATE TYPE employee_t AS OBJECT( empname CHAR(20), empno INTEGER, manager REF employee_t); / CREATE TABLE employee_tab OF employee_t; C structures representing the NULL status of object types are generated by the Object Type Translator. You must use these generated structure types in declaring indicator variables for object types. Other Oracle types do not require special treatment for NULL indicators. Because object types have internal structure, NULL indicators for object types also have internal structure. A NULL indicator structure for a non-collection object type provides atomic (single) NULL status for the object type as a whole, as well as the NULL status of every attribute. OTT generates a C structure to represent the NULL indicator structure for the object type. The name of the NULL indicator structure is Object_typename_ind where Object_typename is the name of the C structure for the user-defined type in the database. The object cache is an area of memory on the client that is allocated for your program's use in interfacing with database objects. There are two interfaces to working with objects. The associative interface manipulates "transient" copies of the objects and the navigational interface manipulates "persistent" objects. Objects that you allocated in the cache with EXEC SQL ALLOCATE statements in Pro*C/C++ are transient copies of persistent objects in the Oracle database. As such, you can update these copies in the cache after they are fetched in, but in order to make these changes persistent in the database, you must use explicit SQL commands. This "transient copy" or "value-based" object caching model is an extension of the relational model, in which scalar columns of relational tables can be fetched into host variables, updated in place, and the updates communicated to the server. The associative interface manipulates transient copies of objects. Memory is allocated in the object cache with the EXEC SQL ALLOCATE statement. One object cache is created for each SQLLIB runtime context. Objects are retrieved by the EXEC SQL SELECT or EXEC SQL FETCH statements. These statements set values for the attributes of the host variable. 
If a NULL indicator is provided, it is also set. Objects are inserted, updated, or deleted using EXEC SQL INSERT, EXEC SQL UPDATE, and EXEC SQL DELETE statements. The attributes of the object host variable must be set before the statement is executed. Transactional statements EXEC SQL COMMIT and EXEC SQL ROLLBACK are used to write the changes permanently on the server or to abort the changes. You explicitly free memory in the cache for the objects by use of the EXEC SQL FREE statement. When a connection is terminated, Oracle implicitly frees its allocated memory. Use in these cases: You allocate space in the object cache with this statement. The syntax is: EXEC SQL [AT [:]database] ALLOCATE :host_ptr [[INDICATOR]:ind_ptr] ; Variables entered are: database (IN) a zero-terminated string containing the name of the database connection, as established previously through the statement: EXEC SQL CONNECT :user [AT [:]database]; If the AT clause is omitted, or if database is an empty string, the default database connection is assumed. host_ptr (IN) a pointer to a host structure generated by OTT for object types, collection object types, or REFs, or a pointer to one of the new C datatypes: OCIDate, OCINumber, OCIRaw, or OCIString. ind_ptr (IN) The indicator variable, ind_ptr, is optional, as is the keyword INDICATOR. Only pointers to struct-typed indicators can be used in the ALLOCATE and FREE statements. host_ptr and ind_ptr can be host arrays. The duration of allocation is the session. Any instances will be freed when the session (connection) is terminated, even if not explicitly freed by a FREE statement. For more details, see "ALLOCATE (Executable Embedded SQL Extension)". Use the EXEC SQL CACHE FREE ALL statement to free all object cache memory for the specified database connection. When accessing objects using SQL, Pro*C/C++ applications manipulate transient copies of the persistent objects. This is a direct extension of the relational access interface, which uses SELECT, UPDATE and DELETE statements. In Figure 17-1, you allocate memory in the cache for a transient copy of the persistent object with the ALLOCATE statement. The allocated object does not contain data, but it has the form of the struct generated by the OTT. person *per_p; ... EXEC SQL ALLOCATE :per_p; You can execute a SELECT statement to populate the cache. Or, use a FETCH statement or a C assignment to populate the cache with data. EXEC SQL SELECT ... INTO :per_p FROM person_tab WHERE ... Make changes to the server objects with INSERT, UPDATE or DELETE statements, as shown in the illustration. You can insert the data into the table with the INSERT statement: EXEC SQL INSERT INTO person_tab VALUES(:per_p); Finally, free memory associated with the copy of the object with the FREE statement: EXEC SQL FREE :per_p; Use the navigational interface to access the same schema as the associative interface. The navigational interface accesses objects (both persistent and transient) by dereferencing REFs to objects and traversing ("navigating") from one object to another. Some definitions follow. Pinning an object is the term used to mean dereferencing the object, allowing the program to access it. Unpinning means indicating to the cache that the object is no longer needed. Dereferencing can be defined as the server using the REF to create a version of the object in the client. While the cache maintains the association between objects in the cache and the corresponding server objects, it does not provide automatic coherency.
You have the responsibility to ensure correctness and consistency of the contents of the objects in the cache. Releasing an object copy indicates to the cache that the object is not currently being used. To free memory, release objects when they are no longer needed to make them eligible for implicit freeing. Freeing an object copy removes it from the cache and releases its memory area. Marking an object tells the cache that the object copy has been updated in the cache and the corresponding server object must be updated when the object copy is flushed. Un-marking an object removes the indication that the object has been updated. Flushing an object writes local changes made to marked copies in the cache to the corresponding objects in the server. The object copies in the cache are also unmarked at this time. Refreshing an object copy in the cache replaces it with the latest value of the corresponding object in the server. The navigational and associative interfaces can be used together. Use the EXEC SQL OBJECT statements, the navigational interface, to update, delete, and flush cache copies (write changes in the cache to the server). Use the navigational interface: Embedded SQL OBJECT statements are described below with these assumptions: EXEC SQL [AT [:]database] [FOR [:]count] OBJECT CREATE :obj [INDICATOR]:obj_ind [TABLE tab] [RETURNING REF INTO :ref] ; where tab is: {:hv | [schema.]table} Use this statement to create a referenceable object in the object cache. The type of the object corresponds to the host variable obj. When optional type host variables ( :obj_ind,:ref,:ref_ind) are supplied, they must all correspond to the same type. The referenceable object can be either persistent (TABLE clause is supplied) or transient (TABLE clause is absent). Persistent objects are implicitly pinned and marked as updated. Transient objects are implicitly pinned. The host variables are: obj (OUT) The object instance host variable, obj, must be a pointer to a structure generated by OTT. This variable is used to determine the referenceable object that is created in the object cache. After a successful execution, obj will point to the newly created object. obj_ind (OUT) This variable points to an OTT-generated indicator structure. Its type must match that of the object instance host variable. After a successful execution, obj_ind will be a pointer to the parallel indicator structure for the referenceable object. tab (IN) Use the table clause to create persistent objects. The table name can be specified as a host variable, hv, or as an undeclared SQL identifier. It can be qualified with a schema name. Do not use trailing spaces in host variables containing the table name. hv (IN) A host variable specifying a table. If a host variable is used, it must not be an array. It must not be blank-padded. It is case-sensitive. When an array of persistent objects is created, they are all associated with the same table. table (IN) An undeclared SQL identifier which is case-sensitive. ref (OUT) The reference host variable must be a pointer to the OTT-generated reference type. The type of ref must match that of the object instance host variable. After execution, ref contains a pointer to the ref for the newly created object. Attributes are initially set to null. Creating new objects for object views is not currently supported.
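To make the OBJECT CREATE syntax concrete, here is a minimal sketch in the spirit of the examples later in this chapter. It assumes an OTT-generated header (hypothetically named person.h) declaring person, person_ind, and person_ref for the person type and person_tab table used elsewhere; error handling is omitted.

#include <sqlca.h>
#include "person.h"        /* assumed OTT-generated header */

person     *per_p;         /* object host variable          */
person_ind *per_ind;       /* parallel indicator structure  */
person_ref *per_ref;       /* REF to the new object         */

/* Allocate the REF host variable in the object cache first */
EXEC SQL ALLOCATE :per_ref;

/* Create a persistent, referenceable object in the cache;
   it is implicitly pinned and marked as updated */
EXEC SQL OBJECT CREATE :per_p:per_ind TABLE PERSON_TAB
         RETURNING REF INTO :per_ref;

/* ... set attributes with OBJECT SET, then write the new object to the server */
EXEC SQL OBJECT FLUSH :per_p;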
EXEC SQL [AT [:]database] [FOR [:]count] OBJECT DEREF :ref INTO :obj [[INDICATOR]:obj_ind] [FOR UPDATE [NOWAIT]] ; Given an object reference, ref, the OBJECT DEREF statement pins the corresponding object or array of objects in the object cache. Pointers to these objects are returned in the variables obj and obj_ind. The host variables are: ref (IN) This is the object reference variable, which must be a pointer to the OTT-generated reference type. This variable (or array of variables) is dereferenced, returning a pointer to the corresponding object in the cache. obj (OUT) The object instance host variable, obj, must be a pointer to an OTT-generated structure. Its type must match that of the object reference host variable. After successful execution, obj contains a pointer to the pinned object in the object cache. obj_ind (OUT) The object instance indicator variable, obj_ind, must be a pointer to an OTT-generated indicator structure. Its type must match that of the object reference indicator variable. After successful execution, obj_ind contains a pointer to the parallel indicator structure for the referenceable object. FOR UPDATE If this clause is present, an exclusive lock is obtained for the corresponding object in the server. NOWAIT If this optional keyword is present, an error is immediately returned if another user has already locked the object. EXEC SQL [AT [:]database] [FOR [:]count] OBJECT RELEASE :obj ; This statement unpins the object in the object cache. When an object is not pinned and not updated, it is eligible for implicit freeing. If an object has been dereferenced n times, it must be released n times to be eligible for implicit freeing from the object cache. Oracle advises releasing all objects that are no longer needed. EXEC SQL [AT [:]database] [FOR [:]count] OBJECT DELETE :obj ; For persistent objects, this statement marks an object or array of objects as deleted in the object cache. The object is deleted in the server when the object is flushed or when the cache is flushed. The memory reserved in the object cache is not freed. For transient objects, the object is marked as deleted. The memory for the object is not freed. EXEC SQL [AT [:]database] [FOR [:]count] OBJECT UPDATE :obj ; For persistent objects, this statement marks them as updated in the object cache. The changes are written to the server when the object is flushed or when the cache is flushed. For transient objects, this statement is a no-op. EXEC SQL [AT [:]database] [FOR [:]count] OBJECT FLUSH :obj ; This statement flushes persistent objects that have been marked as updated, deleted, or created, to the server. See Figure 17-2 for an illustration of the navigational interface. Use the ALLOCATE statement to allocate memory in the object cache for a copy of the REF to the person object. The allocated REF does not contain data. person *per_p; person_ref *per_ref_p; ... EXEC SQL ALLOCATE :per_ref_p; Populate the allocated memory by using a SELECT statement to retrieve the REF of the person object (exact format depends on the application): EXEC SQL SELECT ... INTO :per_ref_p; The DEREF statement is then used to pin the object in the cache, so that changes can be made in the object. The DEREF statement takes the pointer per_ref_p and creates an instance of the person object in the client-side cache. The pointer per_p to the person object is returned.
EXEC SQL OBJECT DEREF :per_ref_p INTO :per_p; Make changes to the object in the cache by using C assignment statements, or by using data conversions with the OBJECT SET statement. Then you must mark the object as updated. See Figure 17-3. To mark the object in the cache as updated, and eligible to be flushed to the server: EXEC SQL OBJECT UPDATE :per_p; You send changes to the server by the FLUSH statement: EXEC SQL OBJECT FLUSH :per_p; You release the object: EXEC SQL OBJECT RELEASE :per_p; The statements in the next section are used to make the conversions between object attributes and C types. EXEC SQL [AT [:]database] OBJECT SET [ {'*' | {attr[, attr]} } OF] :obj [[INDICATOR]:obj_ind] TO {:hv [[INDICATOR]:hv_ind] [, :hv [INDICATOR]:hv_ind]]} ; Use this statement with objects created by both the associative and the navigational interfaces. This statement updates the attributes of the object. For persistent objects, the changes will be written to the server when the object is updated and flushed. Flushing the cache writes all changes made to updated objects to the server. The OF clause is optional. If absent, all the attributes of obj are set. The same result is achieved by writing: ... OBJECT SET * OF ... The host variable list can include structures that are exploded to provide values for the attributes. However, the number of attributes in obj must match the number of elements in the exploded variable list. Host variables and attributes are: attr The attributes are not host variables, but rather simple identifiers that specify which attributes of the object will be updated. The first attribute in the list is paired with the first expression in the list, and so on. The attribute must be one of either OCIString, OCINumber, OCIDate, or OCIRef. obj (IN/OUT) obj specifies the object to be updated. The bind variable obj must not be an array. It must be a pointer to an OTT-generated structure. obj_ind (IN/OUT) The parallel indicator structure that will be updated. It must be a pointer to an OTT-generated indicator structure. hv (IN) This is the bind variable used as input to the OBJECT SET statement. hv must be an int, float, OCIRef *, a one-dimensional char array, or a structure of these types. hv_ind (IN) This is the associated indicator that is used as input to the OBJECT SET statement. hv_ind must be a 2-byte integer scalar or a structure of 2-byte integer scalars. Using Indicator Variables: If a host variable indicator is present, then an object indicator must also be present. If hv_ind is set to -1, the associated field in the obj_ind is set to -1. The following implicit conversions are permitted: EXEC SQL [AT [:]database] OBJECT GET [ { '*' | {attr[, attr]} } FROM] :obj [[INDICATOR]:obj_ind] INTO {:hv [[INDICATOR]:hv_ind] [, :hv [[INDICATOR]:hv_ind]]} ; This statement converts the attributes of an object into native C types. The FROM clause is optional. If absent, all the attributes of obj are converted. The same result is achieved by writing: ... OBJECT GET * FROM ... The host variable list may include structures that are exploded to receive the values of the attributes. However, the number of attributes in obj must match the number of elements in the exploded host variable list. Host variables and attributes: attr The attributes are not host variables, but simple identifiers that specify which attributes of the object will be retrieved. The first attribute in the list is paired with the first host variable in the list, and so on. The attribute must represent a base type. 
It must be OCIString, OCINumber, OCIRef, or OCIDate. obj (IN) This specifies the object that serves as the source for the attribute retrieval. The bind variable obj must not be an array. hv (OUT) This is the bind variable used to hold output from the OBJECT GET statement. It can be an int, float, double, a one-dimensional char array, or a structure containing those types. The statement returns the converted attribute value in this host variable. hv_ind (OUT) This is the associated indicator variable for the attribute value. It is a 2-byte integer scalar or a structure of 2-byte integer scalars. Using Indicator Variables: If no object indicator is specified, it is assumed that the attribute is valid. It is a program error to convert object attributes to C types if the object is atomically NULL or if the requested attribute is NULL and no object indicator variable is supplied. It may not be possible to raise an Oracle error in this situation. If the object variable is atomically NULL or the requested attribute is NULL, and a host variable indicator (hv_ind) is supplied, then it is set to -1. If the object is atomically NULL or the requested attribute is NULL, and no host variable indicator is supplied, then an error is raised. The following implicit conversions are permitted. An example of the CONTEXT OBJECT OPTION SET statement is: char *new_format = "DD-MM-YYYY"; char *new_lang = "French"; char *new_date = "14-07-1789"; /* One of the attributes of the license type is dateofbirth */ license *aLicense; ... /* Declaration and allocation of context ... */ EXEC SQL CONTEXT OBJECT OPTION SET DATEFORMAT, DATELANG TO :new_format, :new_lang; /* Navigational object obtained */ ... EXEC SQL OBJECT SET dateofbirth OF :aLicense TO :new_date; ... Pro*C/C++ provides these precompiler options to support objects. VERSION This option determines which version of the object is returned by the EXEC SQL OBJECT DEREF statement. This gives you varying levels of consistency between cache objects and server objects. Use the EXEC ORACLE OPTION statement to set it inline. Permitted values are: RECENT If the object has been selected into the object cache in the current transaction, then return that object. If the object has not been selected, it is retrieved from the server. For transactions that are running in serializable mode, this option has the same behavior as VERSION=LATEST without incurring as many network round trips. This value can be safely used with most Pro*C/C++ applications. LATEST If the object does not reside in the object cache, it is retrieved from the database. If it does reside in the object cache, it is refreshed from the server. Use this value with caution because it will incur the greatest number of network round trips. Use it only when it is imperative that the object cache be kept as coherent as possible with the server-side buffer. ANY If the object already resides in the object cache, then return that object. If the object does not reside in the object cache, retrieve it from the server. This value will incur the fewest number of network round trips. Use in applications that access read-only objects or when a user will have exclusive access to the objects. DURATION Use this precompiler option to set the pin duration used by subsequent EXEC SQL OBJECT CREATE and EXEC SQL OBJECT DEREF statements. Objects in the cache are implicitly unpinned at the end of the duration. Use with navigational interface only. You can set this option in the EXEC ORACLE OPTION statement. Permitted values are: TRANSACTION Objects are implicitly unpinned when the transaction completes.
SESSION Objects are implicitly unpinned when the connection is terminated. INTYPE This precompiler option specifies the name of the typefiles generated by OTT. These files are meant to be a read-only input to Pro*C/C++. The information in them, though in plain-text form, might be encoded, and might not necessarily be interpretable by you, the user. You can provide more than one INTYPE file as input to a single Pro*C/C++ precompilation unit. This option cannot be used inline in EXEC ORACLE statements. OTT generates C structure declarations for object types created in the database, and writes type names and version information to a file called the typefile. An object type may not necessarily have the same name as the C structure type or C++ class type that represents it. This could arise for the following reasons: Under these circumstances, it is impossible to infer from the structure or class declaration which object type it matches. This information, which is required by Pro*C/C++, is generated by OTT in the type file. ERRTYPE=filename Writes errors to the file specified, as well as to the screen. If omitted, errors are directed to the screen only. Only one ERRTYPE is allowed. As is usual with other single-valued command-line options, if you enter multiple values for ERRTYPE on the command line, the last one supersedes the earlier values. This option cannot be used inline in EXEC ORACLE statements. Object types and their attributes are represented in a C program according to the C binding of Oracle types. If the precompiler command-line option SQLCHECK is set to SEMANTICS or FULL, Pro*C/C++ verifies during precompilation that host variable types conform to the mandated C bindings for the types in the database schema. In addition, runtime checks are always performed to verify that Oracle types are mapped correctly during program execution. See also "SQLCHECK". Relational datatypes are checked in the usual manner. A relational SQL datatype is compatible with a host variable type if the two types are the same, or if a conversion is permitted between the two. Object types, on the other hand, are compatible only if they are the same type. When you specify the option SQLCHECK=SEMANTICS or FULL, during precompilation Pro*C/C++ logs onto the database using the specified userid and password, and verifies that the object type from which a structure declaration was generated is identical to the object type used in the embedded SQL statement. Pro*C/C++ gathers the type name, version, and possibly schema information for Object, collection Object, and REF host variables, for a type from the input INTYPE file, and stores this information in the code that it generates. This enables access to the type information for Object and REF bind variables at runtime. Appropriate errors are returned for type mismatches. Let us examine a simple object example. You create a type person and a table person_tab, which has a column that is also an object type, address: create type person as object ( lastname varchar2(20), firstname char(20), age int, addr address ) / create table person_tab of person; Insert data in the table, and proceed. Consider the case of how to change a lastname value from "Smith" to "Smythe", using Pro*C/C++. Run the OTT to generate C structures which map to person. In your Pro*C/C++ program you must include the header file generated by OTT. In your application, declare a pointer, person_p, to the persistent memory in the client-side cache.
Then allocate memory and use the returned pointer: char *new_name = "Smythe"; person *person_p; ... EXEC SQL ALLOCATE :person_p; Memory is now allocated for a copy of the persistent object. The allocated object does not yet contain data. Populate data in the cache either by C assignment statements or by using SELECT or FETCH to retrieve an existing object: EXEC SQL SELECT VALUE(p) INTO :person_p FROM person_tab p WHERE lastname = 'Smith'; Changes made to the copy in the cache are transmitted to the server database by use of INSERT, UPDATE, and DELETE statements: EXEC SQL OBJECT SET lastname OF :person_p TO :new_name; EXEC SQL INSERT INTO person_tab VALUES(:person_p); Free cache memory in this way: EXEC SQL FREE :person_p; Allocate memory in the object cache for a copy of the REF to the object person. The ALLOCATE statement returns a pointer to the REF: person *person_p; person_ref *per_ref_p; ... EXEC SQL ALLOCATE :per_ref_p; The allocated REF contains no data. To populate it with data, retrieve the REF of the object: EXEC SQL SELECT ... INTO :per_ref_p; Then dereference the REF to put an instance of the object in the client-side cache. The dereference command takes the per_ref_p and creates an instance of the corresponding object in the cache: EXEC SQL OBJECT DEREF :per_ref_p INTO :person_p; Make changes to data in the cache by using C assignments, or by using OBJECT GET statements: /* lname is a C variable to hold the result */ EXEC SQL OBJECT GET lastname FROM :person_p INTO :lname; ... EXEC SQL OBJECT SET lastname OF :person_p TO :new_name; /* Mark the changed object as updated with the OBJECT UPDATE command */ EXEC SQL OBJECT UPDATE :person_p; EXEC SQL FREE :per_ref_p; To make the changes permanent in the database, use FLUSH: EXEC SQL OBJECT FLUSH :person_p; Changes have been made to the server; the object can now be released. Objects that are released are not necessarily freed from the object cache memory immediately. They are placed on a least-recently used stack. When the cache is full, the objects are swapped out of memory. Only the object is released; the REF to the object remains in the cache. To release the REF, use the RELEASE statement. To release the object pointed to by person_p: EXEC SQL OBJECT RELEASE :person_p; Or, issue a transaction commit and all objects in the cache are released, provided the pin duration has been set appropriately.
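The OBJECT DEREF ... FOR UPDATE NOWAIT clause and the OBJECT GET/SET conversions described above can be combined along the following lines. This is only a sketch: it reuses the person type from the preceding example, assumes a hypothetical OTT-generated header person.h, and the variable names are made up for illustration.

#include <sqlca.h>
#include "person.h"              /* assumed OTT-generated header */

person     *per_p;
person_ind *per_ind;
person_ref *per_ref_p;           /* REF fetched earlier, as shown above */
int         new_age = 40;
char        lname[21];

/* Pin the object and lock it on the server; fail at once if it is already locked */
EXEC SQL OBJECT DEREF :per_ref_p INTO :per_p:per_ind FOR UPDATE NOWAIT;

/* Convert an object attribute to a native C type */
EXEC SQL OBJECT GET lastname FROM :per_p INTO :lname;

/* Convert a native C value back into an attribute, then mark, flush, and release */
EXEC SQL OBJECT SET age OF :per_p TO :new_age;
EXEC SQL OBJECT UPDATE :per_p;
EXEC SQL OBJECT FLUSH :per_p;
EXEC SQL OBJECT RELEASE :per_p;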
The following code example creates four object types: and one table:: and two tables: person_tab customer_tab The SQL file, navdemo1.sql, which creates the types and tables, and then inserts values into the tables, is: connect scott/tiger drop table customer_tab; drop type customer; drop table person_tab; drop type budoka; drop type location; create type location as object ( num number, street varchar2(60), city varchar2(30), state char(2), zip char(10) ); / create type budoka as object ( lastname varchar2(20), firstname varchar(20), birthdate date, age int, addr location ); / create table person_tab of budoka; create type customer as object ( account_number varchar(20), aperson ref budoka ); / create table customer_tab of customer; insert into person_tab values ( budoka('Seagal', 'Steven', '14-FEB-1963', 34, location(1825, 'Aikido Way', 'Los Angeles', 'CA', 45300))); insert into person_tab values ( budoka('Norris', 'Chuck', '25-DEC-1952', 45, location(291, 'Grant Avenue', 'Hollywood', 'CA', 21003))); insert into person_tab values ( budoka('Wallace', 'Bill', '29-FEB-1944', 53, location(874, 'Richmond Street', 'New York', 'NY', 45100))); insert into person_tab values ( budoka('Van Damme', 'Jean Claude', '12-DEC-1964', 32, location(12, 'Shugyo Blvd', 'Los Angeles', 'CA', 95100))); insert into customer_tab select 'AB123', ref(p) from person_tab p where p.lastname = 'Seagal'; insert into customer_tab select 'DD492', ref(p) from person_tab p where p.lastname = 'Norris'; insert into customer_tab select 'SM493', ref(p) from person_tab p where p.lastname = 'Wallace'; insert into customer_tab select 'AC493', ref(p) from person_tab p where p.lastname = 'Van Damme'; commit work;: /************************************************************************* * * This is a simple Pro*C/C++ program designed to illustrate the * Navigational access to objects in the object cache. * * To build the executable: * * 1. Execute the SQL script, navdemo1.sql in SQL*Plus * 2. Run OTT: (The following command should appear on one line) * ott intype=navdemo1.typ hfile=navdemo1.h outtype=navdemo1_o.typ * code=c user=scott/tiger * 3. Precompile using Pro*C/C++: * proc navdemo1 intype=navdemo1_o.typ * 4. Compile/Link (This step is platform specific) * *************************************************************************/ #include "navdemo1.h" #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sqlca.h> void whoops(errcode, errtext, errtextlen) int errcode; char *errtext; int errtextlen; { printf("ERROR! 
sqlcode=%d: text = %.*s", errcode, errtextlen, errtext); EXEC SQL WHENEVER SQLERROR CONTINUE; EXEC SQL ROLLBACK WORK RELEASE; exit(EXIT_FAILURE); } void main() { char *uid = "scott/tiger"; /* The following types are generated by OTT and defined in navdemo1.h */ customer *cust_p; /* Pointer to customer object */ customer_ind *cust_ind; /* Pointer to indicator struct for customer */ customer_ref *cust_ref; /* Pointer to customer object reference */ budoka *budo_p; /* Pointer to budoka object */ budoka_ref *budo_ref; /* Pointer to budoka object reference */ budoka_ind *budo_ind; /* Pointer to indicator struct for budoka */ /* These are data declarations to be used to insert/retrieve object data */ VARCHAR acct[21]; struct { char lname[21], fname[21]; int age; } pers; struct { int num; char street[61], city[31], state[3], zip[11]; } addr; EXEC SQL WHENEVER SQLERROR DO whoops( sqlca.sqlcode, sqlca.sqlerrm.sqlerrmc, sqlca.sqlerrm.sqlerrml); EXEC SQL CONNECT :uid; EXEC SQL ALLOCATE :budo_ref; /* Create a new budoka object with an associated indicator * variable returning a REF to that budoka as well. */ EXEC SQL OBJECT CREATE :budo_p:budo_ind TABLE PERSON_TAB RETURNING REF INTO :budo_ref; /* Create a new customer object with an associated indicator */ EXEC SQL OBJECT CREATE :cust_p:cust_ind TABLE CUSTOMER_TAB; /* Set all budoka indicators to NOT NULL. We * will be setting all attributes of the budoka. */ budo_ind->_atomic = budo_ind->lastname = budo_ind->firstname = budo_ind->age = OCI_IND_NOTNULL; /* We will also set all address attributes of the budoka */ budo_ind->addr._atomic = budo_ind->addr.num = budo_ind->addr.street = budo_ind->addr.city = budo_ind->addr.state = budo_ind->addr.zip = OCI_IND_NOTNULL; /* All customer attributes will likewise be set */ cust_ind->_atomic = cust_ind->account_number = cust_ind->aperson = OCI_IND_NOTNULL; /* Set the default CHAR semantics to type 5 (STRING) */; addr.num = 1893; strcpy((char *)addr.street, (char *)"Rumble Street"); strcpy((char *)addr.city, (char *)"Bronx"); strcpy((char *)addr.state, (char *)"NY"); strcpy((char *)addr.zip, (char *)"92510"); /* Convert native C types to OTS types */ EXEC SQL OBJECT SET :budo_p->addr TO :addr; acct.len = strlen(strcpy((char *)acct.arr, (char *)"FS926")); /* Convert native C types to OTS types - Note also the REF type */ EXEC SQL OBJECT SET account_number, aperson OF :cust_p TO :acct, :budo_ref; /* Mark as updated both the new customer and the budoka */ EXEC SQL OBJECT UPDATE :cust_p; EXEC SQL OBJECT UPDATE :budo_p; /* Now flush the changes to the server, effectively * inserting the data into the respective tables. 
*/ EXEC SQL OBJECT FLUSH :budo_p; EXEC SQL OBJECT FLUSH :cust_p; /* Associative access to the REFs from CUSTOMER_TAB */ EXEC SQL DECLARE ref_cur CURSOR FOR SELECT REF(c) FROM customer_tab c; EXEC SQL OPEN ref_cur; printf("\n"); /* Allocate a REF to a customer for use below */ EXEC SQL ALLOCATE :cust_ref; EXEC SQL WHENEVER NOT FOUND DO break; while (1) { EXEC SQL FETCH ref_cur INTO :cust_ref; /* Pin the customer REF, returning a pointer to a customer object */ EXEC SQL OBJECT DEREF :cust_ref INTO :cust_p:cust_ind; /* Convert the OTS types to native C types */ EXEC SQL OBJECT GET account_number FROM :cust_p INTO :acct; printf("Customer Account is %.*s\n", acct.len, (char *)acct.arr); /* Pin the budoka REF, returning a pointer to a budoka object */ EXEC SQL OBJECT DEREF :cust_p->aperson INTO :budo_p:budo_ind; /* Convert the OTS types to native C types */ EXEC SQL OBJECT GET lastname, firstname, age FROM :budo_p INTO :pers; printf("Last Name: %s\nFirst Name: %s\nAge: %d\n", pers.lname, pers.fname, pers.age); /* Do the same for the address attributes as well */ EXEC SQL OBJECT GET :budo_p->addr INTO :addr; printf("Address:\n"); printf(" Street: %d %s\n City: %s\n State: %s\n Zip: %s\n\n", addr.num, addr.street, addr.city, addr.state, addr.zip); /* Unpin the customer object and budoka objects */ EXEC SQL OBJECT RELEASE :cust_p; EXEC SQL OBJECT RELEASE :budo_p; } EXEC SQL CLOSE ref_cur; EXEC SQL WHENEVER NOT FOUND DO whoops( sqlca.sqlcode, sqlca.sqlerrm.sqlerrmc, sqlca.sqlerrm.sqlerrml); /* Associatively select the newly created customer object */ EXEC SQL SELECT VALUE(c) INTO :cust_p FROM customer_tab c WHERE c.account_number = 'FS926'; /* Mark as deleted the new customer object */ EXEC SQL OBJECT DELETE :cust_p; /* Flush the changes, effectively deleting the customer object */ EXEC SQL OBJECT FLUSH :cust_p; /* Associatively select a REF to the newly created budoka object */ EXEC SQL SELECT REF(p) INTO :budo_ref FROM person_tab p WHERE p.lastname = 'Chan'; /* Pin the budoka REF, returning a pointer to the budoka object */ EXEC SQL OBJECT DEREF :budo_ref INTO :budo_p; /* Mark the new budoka object as deleted in the object cache */ EXEC SQL OBJECT DELETE :budo_p; /* Flush the changes, effectively deleting the budoka object */ EXEC SQL OBJECT FLUSH :budo_p; /* Finally, free all object cache memory and log off */ EXEC SQL OBJECT CACHE FREE ALL; EXEC SQL COMMIT WORK RELEASE; exit(EXIT_SUCCESS); } When the program is executed, the result is: Customer Account is AB123 Last Name: Seagal First Name: Steven Birthdate: 02-14-1963 Age: 34 Address: Street: 1825 Aikido Way City: Los Angeles State: CA Zip: 45300 Customer Account is DD492 Last Name: Norris First Name: Chuck Birthdate: 12-25-1952 Age: 45 Address: Street: 291 Grant Avenue City: Hollywood State: CA Zip: 21003 Customer Account is SM493 Last Name: Wallace First Name: Bill Birthdate: 02-29-1944 Age: 53 Address: Street: 874 Richmond Street City: New York State: NY Zip: 45100 Customer Account is AC493 Last Name: Van Damme First Name: Jean Claude Birthdate: 12-12-1965 Age: 32 Address: Street: 12 Shugyo Blvd City: Los Angeles State: CA Zip: 95100 Customer Account is FS926 Last Name: Chan First Name: Jackie Birthdate: 10-10-1959 Age: 38 Address: Street: 1893 Rumble Street City: Bronx State: NY Zip: 92510 Before Oracle8, Pro*C/C++ allowed you to specify a C structure as a single host variable in a SQL SELECT statement. 
In such cases, each member of the structure is taken to correspond to a single database column in a relational table; that is, each member represents a single item in the select list returned by the query. In Oracle8i and later versions, an object type in the database is a single entity and can be selected as a single item. This introduces an ambiguity with the Oracle7 notation: is the structure for a group of scalar variables, or for an object? Pro*C/C++ uses the following rule to resolve the ambiguity: A host variable that is a C structure is considered to represent an object type only if its C declaration was generated using OTT, and therefore its type description appears in a typefile specified in an INTYPE option to Pro*C/C++. All other host structures are assumed to be uses of the Oracle7 syntax, even if a datatype of the same name resides in the database. Thus, if you use new object types that have the same names as existing structure host variable types, be aware that Pro*C/C++ uses the object type definitions in the INTYPE file. This can lead to compilation errors. To correct this, you might rename the existing host variable types, or use OTT to choose a new name for the object type. The above rule extends transitively to user-defined datatypes that are aliased to OTT-generated datatypes. To illustrate, let emptype be a structure generated by OTT in a header file dbtypes.h and you have the following statements in your Pro*C/C++ program: #include <dbtypes.h> typedef emptype myemp; myemp *employee; The typename myemp for the variable employee is aliased to the OTT-generated typename emptype for some object type defined in the database. Therefore, Pro*C/C++ considers the variable employee to represent an object type. The above rules do not imply that a C structure having or aliased to an OTT-generated type cannot be used for fetches of non-object type data. The only implication is that Pro*C/C++ will not automatically expand such a structure -- the user is free to employ the "longhand syntax" and use individual fields of the structure for selecting or updating single database columns. The REF type denotes a reference to an object, instead of the object itself. REF types may occur in relational columns and also in attributes of an object type. The C representation for a REF to an object type is generated by OTT during type translation. For example, a reference to a user-defined PERSON type in the database may be represented in C as the type "Person_ref". The exact type name is determined by the OTT options in effect during type translation. The OTT-generated typefile must be specified in the INTYPE option to Pro*C/C++ and the OTT-generated header #included in the Pro*C/C++ program. This scheme ensures that the proper type-checking for the REF can be performed by Pro*C/C++ during precompilation. A REF type does not require a special indicator structure to be generated by OTT; a scalar signed 2-byte indicator is used instead. A host variable representing a REF in Pro*C/C++ must be declared as a pointer to the appropriate OTT-generated type. Unlike object types, the indicator variable for a REF is declared as the signed 2-byte scalar type OCIInd. As always, the indicator variable is optional, but it is a good programming practice to use one for each host variable declared. REFs reside in the object cache. However, indicators for REFs are scalars and cannot be allocated in the cache. They generally reside in the user stack. 
Prior to using the host structure for a REF in embedded SQL, allocate space for it in the object cache by using the EXEC SQL ALLOCATE command. After use, free using the EXEC SQL FREE or EXEC SQL CACHE FREE ALL commands. Memory for scalar indicator variables is not allocated in the object cache, and hence indicators are not permitted to appear in the ALLOCATE and FREE commands for REF types. Scalar indicators declared as OCIInd reside on the program stack. At runtime, the ALLOCATE statement causes space to be allocated in the object cache for the specified host variable. For the navigational interface, use EXEC SQL GET and EXEC SQL SET, not C assignments. Pro*C/C++ supports REF host variables in associative SQL statements and in embedded PL/SQL blocks. The OCIDate, OCIString, OCINumber, and OCIRaw types are new C representations for a date, a varying-length zero-terminated string, an Oracle number, and varying-length binary data, respectively. In certain cases, these types provide more functionality than earlier C representations of these quantities. For example, the OCIDate type provides client-side routines to perform DATE arithmetic, which in earlier releases required SQL statements at the server. The OCI* types appear as object type attributes in OTT-generated structures, and you use them as part of object types in Pro*C/C++ programs. Other than their use in object types, Oracle recommends that the beginner-level C and Pro*C/C++ user avoid declaring individual host variables of these types. An experienced Pro*C/C++ user may wish to declare C host variables of these types to take advantage of the advanced functionality these types provide. The host variables must be declared as pointers to these types, for example, OCIString *s. The associated (optional) indicators are scalar signed 2-byte quantities, declared, for example, OCIInd s_ind. Space for host variables of these types may be allocated in the object cache using EXEC SQL ALLOCATE. Scalar indicator variables are not permitted to appear in the ALLOCATE and FREE commands for these types. You allocate such indicators statically on the stack, or dynamically on the heap. De-allocation of space can be done using the statement EXEC SQL FREE, EXEC SQL CACHE FREE ALL, or automatically at the end of the session. Except for OCIDate, which is a structure type with individual fields for various date components (year, month, day, hour, and so on), the other OCI types are encapsulated, and are meant to be opaque to an external user. In contrast to the way existing C types like VARCHAR are currently handled in Pro*C/C++, you include the OCI header file oci.h and employ its functions to perform DATE arithmetic, and to convert these types to and from native C types such as int and char. ANSI dynamic SQL supports all Oracle datatypes, including the new object types, REF, Nested Table, Varying Array, NCHAR, NCHAR Varying and LOB types. The older Dynamic SQL method 4 is generally restricted to the Oracle types supported by Pro*C/C++ prior to release 8.0. It does allow host variables of the NCHAR, NCHAR Varying and LOB datatypes. Dynamic method 4 is not available for object types, Nested Table, Varying Array, and REF types. Instead, use ANSI Dynamic SQL Method 4 for all new applications, because it supports all datatypes introduced in Oracle8i.
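As a brief sketch of these rules, the fragment below declares a REF host variable with a scalar OCIInd indicator, allocates only the REF in the object cache, and uses it in an associative SELECT. The header name person.h and the WHERE clause are assumptions made for illustration.

#include <sqlca.h>
#include <oci.h>
#include "person.h"          /* assumed OTT-generated header */

person_ref *per_ref_p;       /* REF host variable: pointer to the OTT-generated type */
OCIInd      per_ref_ind;     /* scalar indicator; lives on the stack, not in the cache */

/* Only the REF is allocated in the object cache */
EXEC SQL ALLOCATE :per_ref_p;

/* The scalar indicator is appended to the host variable, but never named in ALLOCATE/FREE */
EXEC SQL SELECT REF(p) INTO :per_ref_p:per_ref_ind
         FROM person_tab p WHERE p.lastname = 'Smith';

EXEC SQL FREE :per_ref_p;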
http://docs.oracle.com/cd/A91202_01/901_doc/appdev.901/a89861/pc_17obj.htm
CC-MAIN-2016-40
en
refinedweb
The following diagram shows the SharePoint Foundation server architecture in relation to the collections and objects of the Microsoft.SharePoint.Administration namespace. The SPFarm object is the highest object within the SharePoint Foundation. For more information, see Server and Site Architecture: Object Model Overview.
https://blogs.msdn.microsoft.com/sharepointdev/2011/06/06/server-architecture-object-model-overview/
CC-MAIN-2016-40
en
refinedweb
#include <hallo.h> * Don Armstrong [Fri, Apr 14 2006, 02:02:17PM]: > On Fri, 14 Apr 2006, Torsten Marek wrote: > > >> - eog > > > > > > As said, Gnome bloat. Use gqview or pornview. > > > > Well, don't take pornview or you'll soon have a bug report about > > politically incorrect package names;-) > > Try feh. Much, much more powerful, and much, much nicer. Much, much, much? Where? Some eye-catchers (collage, HTTP client builtin) but not much for daily use for a desktop system: - no picture browser GUI - no picture management function - image quality not sufficiently adaptable to system's performance Eduard.
https://lists.debian.org/debian-devel/2006/04/msg00424.html
CC-MAIN-2016-40
en
refinedweb
/* Definition of target file data structures for GNU Make. Copyright (C) 1988,89,90,91,92,93,94,97 represents the info on one file that the makefile says how to make. All of these are chained together through `next'. */ struct file { struct file *next; char *name; char *hname; /* Hashed filename */ char *vpath; /* VPATH/vpath pathname */ struct dep *deps; entries for the same file. */ #if defined(__APPLE__) || defined(NeXT) || defined(NeXT_PDO) /* for NEXT_VPATH_FLAG support */ char *old_name; #endif /* File that this file was renamed to. After any time that a file could be renamed, call `check_renamed' (below). */ struct file *renamed; /* dependency of .PHONY. */ unsigned int intermediate:1;/* Nonzero if this is an intermediate file. */ /* Nonzero, for an intermediate file, means remove_intermediates should not delete it. */ unsigned int secondary */ }; /* Number of intermediate files entered. */ extern unsigned int num_intermediates; /* Current value for pruning the scan of the goal chain (toggle 0/1). */ extern unsigned int considered; extern struct file *default_goal_file, *suffix_file, *default_file; extern struct file *lookup_file PARAMS ((char *name)); extern struct file *enter_file PARAMS ((char *name)); extern void remove_intermediates PARAMS ((int sig)); extern void snap_deps PARAMS ((void)); extern void rename_file PARAMS ((struct file *file, char *name)); extern void rehash_file PARAMS ((struct file *file, char *name)); extern void file_hash_enter PARAMS ((struct file *file, char *name, unsigned int oldhash, char *oldname)); extern void set_command_state PARAMS ((struct file *file, int state)); extern void notice_finished_file PARAMS ((struct file *file)); #ifdef ST_MTIM_NSEC # define FILE_TIMESTAMP_STAT_MODTIME(st) \ FILE_TIMESTAMP_FROM_S_AND_NS ((st).st_mtime, \ (st).st_mtim.ST_MTIM_NSEC) # define FILE_TIMESTAMPS_PER_S \ MIN ((FILE_TIMESTAMP) 1000000000, \ (INTEGER_TYPE_MAXIMUM (FILE_TIMESTAMP) \ / INTEGER_TYPE_MAXIMUM (time_t))) #else # define FILE_TIMESTAMP_STAT_MODTIME(st) ((st).st_mtime) # define FILE_TIMESTAMPS_PER_S 1 #endif #define FILE_TIMESTAMP_FROM_S_AND_NS(s, ns) \ ((s) * FILE_TIMESTAMPS_PER_S \ + (ns) * FILE_TIMESTAMPS_PER_S / 1000000000) #define FILE_TIMESTAMP_DIV(a, b) ((a)/(b) - ((a)%(b) < 0)) #define FILE_TIMESTAMP_MOD(a, b) ((a)%(b) + ((a)%(b) < 0) * (b)) #define FILE_TIMESTAMP_S(ts) FILE_TIMESTAMP_DIV ((ts), FILE_TIMESTAMPS_PER_S) #define FILE_TIMESTAMP_NS(ts) \ (((FILE_TIMESTAMP_MOD ((ts), FILE_TIMESTAMPS_PER_S) * 1000000000) \ + (FILE_TIMESTAMPS_PER_S - 1)) \ / FILE_TIMESTAMPS_PER_now PARAMS ((void)); extern void file_timestamp_sprintf PARAMS ((char *p, FILE_TIMESTAMP ts)); /* Return the mtime of file F (a struct file *), caching it. The value is -1 -1 if the file does not exist. */ #define file_mtime_no_search(f) file_mtime_1 ((f), 0) extern FILE_TIMESTAMP f_mtime PARAMS ((struct file *file, int search)); #define file_mtime_1(f, v) \ ((f)->last_mtime ? (f)->last_mtime : f_mtime ((f),. If FILE_TIMESTAMP is unsigned, its maximum value is the same as ((FILE_TIMESTAMP) -1), so use one less than that, because -1 is used for non-existing files. */ #define NEW_MTIME \ (INTEGER_TYPE_SIGNED (FILE_TIMESTAMP) \ ? INTEGER_TYPE_MAXIMUM (FILE_TIMESTAMP) \ : (INTEGER_TYPE_MAXIMUM (FILE_TIMESTAMP) - 1)) #define check_renamed(file) \ while ((file)->renamed != 0) (file) = (file)->renamed /* No ; here. */
http://opensource.apple.com//source/gnumake/gnumake-104/make/filedef.h
CC-MAIN-2016-40
en
refinedweb
perlquestion vsespb <p> I. </p> <p> The only thing should be available for end user is one script to be ran from command line, and some .pod in the future. </p> <p> Examples of modules, I don't want to "provide" are: App::MtAws::Filter App::MtAws::GlacierRequest (total over 40). </p> <p> here is <a href="">the distribution</a> </p> <p> I have several issues with this. </p> <p> 1) I don't understand what different levels of "providing" a module exist. It seems that I don't want modules to be listed on CPAN pages, don't want it to appear in search, but I do want to "preserve" that namespace. </p> <p> 2) I have "no_index namespace" in my meta (both json and yml) (I've tried no_index directory also): </p> <code> "no_index" : { "namespace" : [ "App::MtAws" ] }, </code> <p> but it seems it doesn't work, because I see in my meta the following: </p> <code> "provides" : { "App::MtAws" : { "file" : "lib/App/MtAws.pm", "version" : "0.972" }, "App::MtAws::CheckLocalHashCommand" : { "file" : "lib/App/MtAws/CheckLocalHashCommand.pm" }, ... </code> <p> 3) Seems unneeded modules appear on <a href="">metacpan pages</a> but not on <a href="">CPAN pages</a> </p> <p> 4) When I upload new, non-dev version to PAUSE, I am getting the following email shortly: </p> <code> Status of this distro: Decreasing version number ================================================ The following packages (grouped by status) have been found in the distro: Status: Decreasing version number ================================= module: App::MtAws::CheckLocalHashCommand version: undef in file: lib/App/MtAws/CheckLocalHashCommand.pm status: Not indexed because lib/App/MtAws/CheckLocalHashCommand.pm in V/VS/VSESPB/App-MtAws-0.962.tar.gz has a higher version) </code> <p> I believe this is because I previously used M:B version, which reported "version=0" for packages without version: <a href="">example</a> </p> <p> and now I use M:B which reports no version in this case: <a href="">example</a> </p> <p> Question is how to fix those emails and what else this issue affects in practice (except sending warning over email)? </p> <p> 5) I there a difference (in theory and practice) between <code> package # hide from pause mymodule </code> <br> and <br> meta "no_index" <br> ? </p>
http://www.perlmonks.org/index.pl?displaytype=xml;node_id=1044576
CC-MAIN-2016-40
en
refinedweb
menu_requestname(3) UNIX Programmer's Manual menu_requestname(3) menu_requestname - handle printable menu request names #include <menu.h> const char *menu_request_name(int request); int menu_request_by_name(const char *name); The function menu_request_name returns the printable name of a menu request code. The function menu_request_by_name searches in the name-table for a request with the given name and returns its request code. Otherwise E_NO_MATCH is returned. menu_request_name returns NULL on error and sets errno to E_BAD_ARGUMENT. menu_request_by_name returns E_NO_MATCH on error.
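A small usage sketch, assuming a system where the menu library links with -lmenu -lncurses; the request-name string shown ("TOGGLE_ITEM") is illustrative and the exact spelling may vary by implementation.

#include <menu.h>
#include <stdio.h>

int main(void)
{
    /* Look up a request code from its printable name ... */
    int code = menu_request_by_name("TOGGLE_ITEM");

    if (code == E_NO_MATCH) {
        printf("no such request\n");
    } else {
        /* ... and map the code back to its name */
        printf("request %d is named %s\n", code, menu_request_name(code));
    }
    return 0;
}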
http://mirbsd.mirsolutions.de/htman/sparc/man3/menu_requestname.htm
crawl-003
en
refinedweb
menu_pattern(3) UNIX Programmer's Manual menu_pattern(3) menu_pattern - get and set a menu's pattern buffer #include <menu.h> int set_menu_pattern(MENU *menu, const char *pattern); char *menu_pattern(const MENU *menu); Every menu has an associated pattern match buffer. As input events that are printable ASCII characters come in, they are appended to this match buffer and tested for a match, as described in menu_driver(3). The function menu_pattern returns the menu's pattern buffer, or NULL on error. The function set_menu_pattern may return the following error codes: E_OK The routine succeeded. E_SYSTEM_ERROR System error occurred (see errno). E_BAD_ARGUMENT Routine detected an incorrect or out-of-range argument. E_NO_MATCH Character failed to match.
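A minimal sketch of preloading the pattern buffer, assuming an ncurses build linked with -lmenu -lncurses; the item names are made up for illustration.

#include <curses.h>
#include <menu.h>

int main(void)
{
    ITEM *items[3];
    MENU *menu;

    initscr();
    items[0] = new_item("apple", "fruit");
    items[1] = new_item("banana", "fruit");
    items[2] = NULL;
    menu = new_menu(items);

    /* Preload the pattern buffer; the current item moves to the first match */
    if (set_menu_pattern(menu, "ban") == E_OK)
        mvprintw(0, 0, "pattern buffer: %s", menu_pattern(menu));
    refresh();
    getch();

    free_menu(menu);
    free_item(items[0]);
    free_item(items[1]);
    endwin();
    return 0;
}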
http://mirbsd.mirsolutions.de/htman/sparc/man3/menu_pattern.htm
crawl-003
en
refinedweb
CodeGuru Forums > Visual C++ & C++ Programming > C++ (Non Visual C++ Issues) > [RESOLVED] opening files with changing file paths PDA Click to See Complete Forum and Search --> : [RESOLVED] opening files with changing file paths Gulyman June 24th, 2008, 01:52 PM When I'm programing I move between the school computers and home and I have the program files on a data key. My computer and the school computers assign different drive letters to data keys. Is there a way to store the directory the program in executing form to a string and thus know what file path to use? It would also be helpful incase you moved the program around. Bluefox815 June 24th, 2008, 05:40 PM If you and the school both use Windows XP (I don't know how to use batch on other versions of windows), I use batch scripting very frequently in my computer (i.e. a quick, easy to use compiler for my .cpp files), so that may be something to look into. (a quick reference is to open up command prompt and type "help [command]" or "[command] /?" or "help" for a list of commands) I know it has nothing really to do with C++, but it isn't a bad Windows XP solution. example set: prog.cpp #include <iostream> using namespace std; int main(int argc, char** argv) { if (argc == 2) { cout << "Highest drive found is: " << argv[1] << endl; } return 0; } (note: first find out what the highest probable drive letter is, like F:, which normally occurs when only 2 devices are connected (A, C, and D are floppy drive, HDD, CD drive) NOTE: The below script is only useful if your data key is the latest device to be connected to windows and therefore receives the highest letter(closest to Z), as would be expected script.bat (REM = remark, or comment line) if exist G: goto g REM goes to label 'g' if drive G: is found if exist F: goto f if exist E: goto e :g prog G: REM launches prog.exe with G: as first argument, when prog.exe is in the same directory exit /b :f prog F: exit /b :e prog E: exit /b lastly, right-click a .bat file and click edit to change it. If you don't know how to make batch scripts, I can make one for you if you tell me: - folders on the data key - what you are doing with the files (reading, copying to main disk, etc.) - a file or folder that is directly inside your data key (like WINDOWS is in C:\) - all drive letters used on yours and the school's computer Gulyman June 24th, 2008, 07:31 PM I actually use Ubuntu/ Windows ME at home and 2000 at school. I don't feel that I'm competent enough to be integrating scripts into my compiler. I guess there isn't an input stream that you'd be able to pull the file path off of. I think I'll go ask Google again. looking at your examples, I'm wondering what the parameters you put in the function main() are for. The book I'm using to learn hasn't gone over that yet. I think I'll go ask Google about that to. Bluefox815 June 25th, 2008, 12:46 PM The parameters in main I used are for command line arguments, which are separated by spaces following the program name on the command line. if I typed "prog hello world" on the command line in the same directory as the below program (this is how it works in Windows). 
// prog.exe #include <iostream> using namespace std; // notice that argv is char** (char* argv[] also works, but is confusing) int main(int argc, char** argv) { // argc tells you how many arguments there are (always true (argc >= 1) it includes the program name) // argv[0] is always equal to the program name (idk why, could be useful if a user renamed the executable though) for (int i = 1; i < argc; i++) { cout << "Argument " << i << ": " << argv[i] << endl; } return 0; } the output would be: Argument 1: hello Argument 2: world and for "prog hello world" argc would equal 3 Gulyman June 25th, 2008, 08:17 PM That's cool. I wonder if you typed in "pwd | prog" pwd-print working directory(linux) if it would take the directory you're working in and use it for argv? Anyways I'm going to put this thread as resolved since no one else seems to be answering. Thanks for talking. Bluefox815 July 1st, 2008, 06:09 PM if you were to type "pwd.exe | prog" (pwd and pwd.exe are no different on Windows command line) you would get this result: argc = 3; argv[0] = "pwd" argv[1] = "|"; argv[2] = "prog"; S_M_A July 2nd, 2008, 04:50 PM If you have not found it out already... argv[0] contains the path and filename of your executable. Can't test and don't remember if this holds for both ME and 2000 but you can easily test it by excuting the following program#include <stdio.h> int main(int argc, char** argv) { printf( "%s\n", argv[0] ); } codeguru.com
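Building on S_M_A's argv[0] note, here is one way to strip the file name and keep only the directory. This is a best-effort sketch, not from the thread: argv[0] may be a bare or relative name depending on how the program was launched, so it will not always contain a path.

#include <stdio.h>
#include <string.h>

int main(int argc, char** argv)
{
    char dir[1024];
    char *slash;

    (void)argc;
    strncpy(dir, argv[0], sizeof(dir) - 1);
    dir[sizeof(dir) - 1] = '\0';

    slash = strrchr(dir, '\\');          /* Windows-style separator */
    if (slash == NULL)
        slash = strrchr(dir, '/');       /* POSIX-style separator   */

    if (slash != NULL) {
        *slash = '\0';                   /* cut off the executable name */
        printf("program directory: %s\n", dir);
    } else {
        printf("argv[0] has no path component: %s\n", argv[0]);
    }
    return 0;
}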
http://forums.codeguru.com/archive/index.php/t-455896.html
crawl-003
en
refinedweb
XvSetPortAttribute(3X) UNIX Programmer's Manual XvSetPortAttribute(3X) XvSetPortAttribute - sets an attribute of a video port #include <X11/extensions/Xvlib.h> XvSetPortAttribute(dpy, port, attribute, value) Display *dpy; XvPortID port; Atom attribute; int value; dpy Specifies the connection to the X server. port Specifies the port for which the attribute is to be used. attribute Identifies the port attribute to be set by this request. Can be one of the table entries under the column "String," below. value Identifies the value to which attribute is to be set. Can be one of the table entries under the column "Type," below. XvSetPortAttribute(3X) permits a client to set the port attribute to specified values. This request supports the following values: [Success] Returned if XvSetPortAttribute(3X) completed successfully. [XvBadExtension] Returned if the Xv extension is unavailable. [XvBadAlloc] Returned if XvSelectVideoNotify(3X) failed to allocate memory to process the request. [XvBadPort] Generated if the requested port does not exist. [XvBadEncoding] Generated if an encoding is specified that does not exist. [BadMatch] Generated if the requested attribute atom does not specify an attribute supported by the adaptor. XvGetPortAttribute(3X), XvSelectPortNotify
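A hedged usage sketch, not taken from this manual page: query the first adaptor and try to set the commonly available XV_BRIGHTNESS attribute on its base port. Building would typically need -lXv -lXext -lX11, and the attribute may not exist on every adaptor.

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xvlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    unsigned int ver, rel, req, ev, err, nadaptors;
    XvAdaptorInfo *ai;
    Atom attr;

    if (dpy == NULL || XvQueryExtension(dpy, &ver, &rel, &req, &ev, &err) != Success)
        return 1;
    if (XvQueryAdaptors(dpy, DefaultRootWindow(dpy), &nadaptors, &ai) != Success
        || nadaptors == 0)
        return 1;

    attr = XInternAtom(dpy, "XV_BRIGHTNESS", True);   /* may not exist */
    if (attr != None &&
        XvSetPortAttribute(dpy, ai[0].base_id, attr, 0) == Success)
        printf("brightness reset on port %lu\n", (unsigned long)ai[0].base_id);

    XvFreeAdaptorInfo(ai);
    XCloseDisplay(dpy);
    return 0;
}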
http://mirbsd.mirsolutions.de/htman/sparc/man3/XvSetPortAttribute.htm
crawl-003
en
refinedweb
XvStopVideo(3X) UNIX Programmer's Manual XvStopVideo(3X)
XvStopVideo - stop active video
#include <X11/extensions/Xvlib.h>
XvStopVideo(dpy, port, draw)
Display *dpy; XvPortID port; Drawable draw;
dpy Specifies the connection to the X server. port Specifies the port for which video is to be stopped. draw Specifies the drawable associated with the named port.
XvStopVideo(3X) stops active video for the specified port and drawable. If the port is not processing video, or if it is processing video in a different drawable, the request is ignored. When video is stopped an XvVideoNotify(3X) event with detail XvStopped is generated for the associated drawable.
[Success] Returned if XvStopVideo(3X) completed successfully. [XvBadExtension] Returned if the Xv extension is unavailable. [XvBadAlloc] Returned if XvStopVideo(3X) failed to allocate memory to process the request. [XvBadPort] Generated if the requested port does not exist. [BadDrawable] Generated if the requested drawable does not exist.
XvGetVideo(3X), XvPutVideo(3X), XvVideoNotifyEvent.
http://mirbsd.mirsolutions.de/htman/sparc/man3/XvStopVideo.htm
crawl-003
en
refinedweb
XvSelectVideoNotify(3X) UNIX Programmer's Manual XvSelectVideoNotify(3X)
XvSelectVideoNotify - enable or disable VideoNotify events
#include <X11/extensions/Xvlib.h>
XvSelectVideoNotify(dpy, drawable, onoff)
register Display *dpy; Drawable drawable; Bool onoff;
dpy Specifies the connection to the X server. drawable Defines the drawable in which video activity is to be reported. onoff Selects whether video notification is enabled or disabled.
XvSelectVideoNotify(3X) enables or disables events to be reported for video activity in a drawable.
[Success] Returned if XvSelectVideoNotify(3X) completed successfully. [XvBadExtension] Returned if the Xv extension is unavailable. [XvBadAlloc] Returned if XvSelectVideoNotify(3X) failed to allocate memory to process the request. [BadDrawable] Generated if the requested drawable does not exist.
XvVideoNotify.
http://mirbsd.mirsolutions.de/htman/sparc/man3/XvSelectVideoNotify.htm
crawl-003
en
refinedweb
XvQueryExtension(3X) UNIX Programmer's Manual XvQueryExtension(3X)
XvQueryExtension - return version and release of extension
#include <X11/extensions/Xvlib.h>
XvQueryExtension(dpy, p_version, p_release, p_request_base, p_event_base, p_error_base)
Display *dpy; unsigned int *p_version, *p_release; unsigned int *p_request_base, *p_event_base, *p_error_base;
dpy Specifies the connection to the X server. p_version Pointer to where the current version number of the Xv video extension is written. p_release Pointer to where the release number of the Xv video extension is written. p_request_base Pointer to where the extension major request number is returned. p_event_base Pointer to where the extension event base is returned. p_error_base Pointer to where the extension error base is returned.
XvQueryExtension(3X) returns the version and release numbers for the Xv video extension currently loaded on the system. The extension major request number, event base, and error base are also returned.
[Success] Returned if XvQueryExtension(3X) completed successfully. [XvBadExtension] Returned if the Xv video extension is not available for the named display. [XvBadAlloc] Returned if XvQueryExtension(3X) failed to allocate memory to process the request.
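A minimal check (illustrative, not part of the original page) that an application might run before issuing any other Xv request:
/* Sketch: probe for the Xv extension and report its version. */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xvlib.h>

int have_xv(Display *dpy)
{
    unsigned int version, release, request_base, event_base, error_base;

    if (XvQueryExtension(dpy, &version, &release,
                         &request_base, &event_base, &error_base) != Success)
        return 0;   /* extension missing or request failed */

    printf("Xv version %u, release %u\n", version, release);
    return 1;
}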
http://mirbsd.mirsolutions.de/htman/sparc/man3/XvQueryExtension.htm
crawl-003
en
refinedweb
XvSelectPortNotify(3X) UNIX Programmer's Manual XvSelectPortNotify(3X)
XvSelectPortNotify - enable or disable XvPortNotify(3X) events
#include <X11/extensions/Xvlib.h>
XvSelectPortNotify(dpy, port, onoff)
Display *dpy; XvPortID port; Bool onoff;
dpy Specifies the connection to the X server. port Specifies the port for which PortNotify events are to be generated when its attributes are changed using XvSetPortAttribute(3X). onoff Specifies whether notification is to be enabled or disabled.
XvSelectPortNotify(3X) enables or disables PortNotify event delivery to the requesting client. XvPortNotify(3X) events are generated when port attributes are changed using XvSetPortAttribute(3X).
[Success] Returned if XvSelectPortNotify(3X) completed successfully. [XvBadExtension] Returned if the Xv extension is unavailable. [XvBadAlloc] Returned if XvSelectPortNotify(3X) failed to allocate memory to process the request. [XvBadPort] Generated if the requested port does not exist.
XvSetPortNotify(3X).
http://mirbsd.mirsolutions.de/htman/sparc/man3/XvSelectPortNotify.htm
crawl-003
en
refinedweb
How to: Use a Background Thread to Search for Files The BackgroundWorker component replaces and adds functionality to the System.Threading namespace; however, the System.Threading namespace is retained for both backward compatibility and future use, if you choose. For more information, see BackgroundWorker Component Overview.. For information about threading in the .NET Framework, see Managed Threading.. If you use multithreading in your control for resource-intensive tasks, the user interface can remain responsive while a resource-intensive computation executes on a background thread. The following sample (DirectorySearcher) shows a multithreaded Windows Forms control that uses a background thread to recursively search a directory for files matching a specified search string and then populates a list box with the search result. The key concepts illustrated by the sample are as follows: DirectorySearcher starts a new thread to perform the search. The thread executes the ThreadProcedure method that in turn calls the helper RecurseDirectory method to do the actual search and to populate the list box. However, populating the list box requires a cross-thread call, as explained in the next two bulleted items. DirectorySearcher defines the AddFiles method to add files to a list box; however, RecurseDirectory cannot directly invoke AddFiles because AddFiles can execute only in the STA thread that created DirectorySearcher. The only way RecurseDirectory can call AddFiles is through a cross-thread call — that is, by calling Invoke or BeginInvoke to marshal AddFiles to the creation thread of DirectorySearcher. RecurseDirectory uses BeginInvoke so that the call can be made asynchronously. Marshaling a method requires the equivalent of a function pointer or callback. This is accomplished using delegates in the .NET Framework. BeginInvoke takes a delegate as an argument. DirectorySearcher therefore defines a delegate (FileListDelegate), binds AddFiles to an instance of FileListDelegate in its constructor, and passes this delegate instance to BeginInvoke. DirectorySearcher also defines an event delegate that is marshaled when the search is completed. namespace Microsoft.Samples.DirectorySearcher { using System; using System.IO; using System.Threading; using System.Windows.Forms; /// <summary> /// This class is a Windows Forms control that implements a simple directory searcher. /// You provide, through code, a search string and it will search directories on /// a background thread, populating its list box with matches. /// </summary> public class DirectorySearcher : Control { // Define a special delegate that handles marshaling // lists of file names from the background directory search // thread to the thread that contains the list box. private delegate void FileListDelegate(string[] files, int startIndex, int count); private ListBox listBox; private string searchCriteria; private bool searching; private bool deferSearch; private Thread searchThread; private FileListDelegate fileListDelegate; private EventHandler onSearchComplete; public DirectorySearcher() { listBox = new ListBox(); listBox.Dock = DockStyle.Fill; Controls.Add(listBox); fileListDelegate = new FileListDelegate(AddFiles); onSearchComplete = new EventHandler(OnSearchComplete); } public string SearchCriteria { get { return searchCriteria; } set { // If currently searching, abort // the search and restart it after // setting the new criteria. 
// bool wasSearching = Searching; if (wasSearching) { StopSearch(); } listBox.Items.Clear(); searchCriteria = value; if (wasSearching) { BeginSearch(); } } } public bool Searching { get { return searching; } } public event EventHandler SearchComplete; /// <summary> /// This method is called from the background thread. It is called through /// a BeginInvoke call so that it is always marshaled to the thread that /// owns the list box control. /// </summary> /// <param name="files"></param> /// <param name="startIndex"></param> /// <param name="count"></param> private void AddFiles(string[] files, int startIndex, int count) { while(count-- > 0) { listBox.Items.Add(files[startIndex + count]); } } public void BeginSearch() { // Create the search thread, which // will begin the search. // If already searching, do nothing. // if (Searching) { return; } // Start the search if the handle has // been created. Otherwise, defer it until the // handle has been created. if (IsHandleCreated) { searchThread = new Thread(new ThreadStart(ThreadProcedure)); searching = true; searchThread.Start(); } else { deferSearch = true; } } protected override void OnHandleDestroyed(EventArgs e) { // If the handle is being destroyed and you are not // recreating it, then abort the search. if (!RecreatingHandle) { StopSearch(); } base.OnHandleDestroyed(e); } protected override void OnHandleCreated(EventArgs e) { base.OnHandleCreated(e); if (deferSearch) { deferSearch = false; BeginSearch(); } } /// <summary> /// This method is called by the background thread when it has finished /// the search. /// </summary> /// <param name="sender"></param> /// <param name="e"></param> private void OnSearchComplete(object sender, EventArgs e) { if (SearchComplete != null) { SearchComplete(sender, e); } } public void StopSearch() { if (!searching) { return; } if (searchThread.IsAlive) { searchThread.Abort(); searchThread.Join(); } searchThread = null; searching = false; } /// <summary> /// Recurses the given path, adding all files on that path to /// the list box. After it finishes with the files, it /// calls itself once for each directory on the path. /// </summary> /// <param name="searchPath"></param> private void RecurseDirectory(string searchPath) { // Split searchPath into a directory and a wildcard specification. // string directory = Path.GetDirectoryName(searchPath); string search = Path.GetFileName(searchPath); // If a directory or search criteria are not specified, then return. // if (directory == null || search == null) { return; } string[] files; // File systems like NTFS that have // access permissions might result in exceptions // when looking into directories without permission. // Catch those exceptions and return. try { files = Directory.GetFiles(directory, search); } catch(UnauthorizedAccessException) { return; } catch(DirectoryNotFoundException) { return; } // Perform a BeginInvoke call to the list box // in order to marshal to the correct thread. It is not // very efficient to perform this marshal once for every // file, so batch up multiple file calls into one // marshal invocation. int startingIndex = 0; while(startingIndex < files.Length) { // Batch up 20 files at once, unless at the // end. // int count = 20; if (count + startingIndex >= files.Length) { count = files.Length - startingIndex; } // Begin the cross-thread call. Because you are passing // immutable objects into this invoke method, you do not have to // wait for it to finish. 
If these were complex objects, you would // have to either create new instances of them or // wait for the thread to process this invoke before modifying // the objects. IAsyncResult r = BeginInvoke(fileListDelegate, new object[] {files, startingIndex, count}); startingIndex += count; } // Now that you have finished the files in this directory, recurse for // each subdirectory. string[] directories = Directory.GetDirectories(directory); foreach(string d in directories) { RecurseDirectory(Path.Combine(d, search)); } } /// <summary> /// This is the actual thread procedure. This method runs in a background /// thread to scan directories. When finished, it simply exits. /// </summary> private void ThreadProcedure() { // Get the search string. Individual // field assigns are atomic in .NET, so you do not // need to use any thread synchronization to grab // the string value here. try { string localSearch = SearchCriteria; // Now, search the file system. // RecurseDirectory(localSearch); } finally { // You are done with the search, so update. // searching = false; // Raise an event that notifies the user that // the search has terminated. // You do not have to do this through a marshaled call, but // marshaling is recommended for the following reason: // Users of this control do not know that it is // multithreaded, so they expect its events to // come back on the same thread as the control. BeginInvoke(onSearchComplete, new object[] {this, EventArgs.Empty}); } } } } Using the Multithreaded Control on a Form The following example shows how the multithreaded DirectorySearcher control can be used on a form. namespace SampleUsage { using Microsoft.Samples.DirectorySearcher; using System; using System.Drawing; using System.Collections; using System.ComponentModel; using System.Windows.Forms; using System.Data; /// <summary> /// Summary description for Form1. /// </summary> public class Form1 : System.Windows.Forms.Form { private DirectorySearcher directorySearcher; private System.Windows.Forms.TextBox searchText; private System.Windows.Forms.Label searchLabel; private System.Windows.Forms.Button searchButton; public Form1() { // // Required for Windows Forms designer support. // InitializeComponent(); // // Add any constructor code after InitializeComponent call here. // } #region Windows Form Designer generated code /// <summary> /// Required method for designer support. Do not modify /// the contents of this method with the code editor. 
/// </summary> private void InitializeComponent() { this.directorySearcher = new Microsoft.Samples.DirectorySearcher.DirectorySearcher(); this.searchButton = new System.Windows.Forms.Button(); this.searchText = new System.Windows.Forms.TextBox(); this.searchLabel = new System.Windows.Forms.Label(); this.directorySearcher.Anchor = (((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Bottom) | System.Windows.Forms.AnchorStyles.Left) | System.Windows.Forms.AnchorStyles.Right); this.directorySearcher.Location = new System.Drawing.Point(8, 72); this.directorySearcher.SearchCriteria = null; this.directorySearcher.Size = new System.Drawing.Size(271, 173); this.directorySearcher.TabIndex = 2; this.directorySearcher.SearchComplete += new System.EventHandler(this.directorySearcher_SearchComplete); this.searchButton.Location = new System.Drawing.Point(8, 16); this.searchButton.Size = new System.Drawing.Size(88, 40); this.searchButton.TabIndex = 0; this.searchButton.Text = "&Search"; this.searchButton.Click += new System.EventHandler(this.searchButton_Click); this.searchText.Anchor = ((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Left) | System.Windows.Forms.AnchorStyles.Right); this.searchText.Location = new System.Drawing.Point(104, 24); this.searchText.Size = new System.Drawing.Size(175, 20); this.searchText.TabIndex = 1; this.searchText.Text = "c:\\*.cs"; this.searchLabel.ForeColor = System.Drawing.Color.Red; this.searchLabel.Location = new System.Drawing.Point(104, 48); this.searchLabel.Size = new System.Drawing.Size(176, 16); this.searchLabel.TabIndex = 3; this.ClientSize = new System.Drawing.Size(291, 264); this.Controls.AddRange(new System.Windows.Forms.Control[] {this.searchLabel, this.directorySearcher, this.searchText, this.searchButton}); this.Text = "Search Directories"; } #endregion /// <summary> /// The main entry point for the application. /// </summary> [STAThread] static void Main() { Application.Run(new Form1()); } private void searchButton_Click(object sender, System.EventArgs e) { directorySearcher.SearchCriteria = searchText.Text; searchLabel.Text = "Searching..."; directorySearcher.BeginSearch(); } private void directorySearcher_SearchComplete(object sender, System.EventArgs e) { searchLabel.Text = string.Empty; } } }
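The note at the top of this article points to BackgroundWorker as the component that wraps up this thread-plus-BeginInvoke plumbing. The fragment below is only a sketch of that alternative, not part of the original sample: it does not recurse subdirectories, it belongs inside the form's code, and it reuses the sample's listBox and searchLabel controls as placeholders.
// Sketch: the same search idea using BackgroundWorker, which raises its
// ProgressChanged and RunWorkerCompleted events on the UI thread for you.
// Assumes: using System.ComponentModel; using System.IO;
BackgroundWorker worker = new BackgroundWorker();
worker.WorkerReportsProgress = true;
worker.DoWork += delegate(object sender, DoWorkEventArgs e)
{
    foreach (string file in Directory.GetFiles(@"c:\", "*.cs"))
    {
        worker.ReportProgress(0, file);   // marshaled to the UI thread for us
    }
};
worker.ProgressChanged += delegate(object sender, ProgressChangedEventArgs e)
{
    listBox.Items.Add((string)e.UserState);
};
worker.RunWorkerCompleted += delegate(object sender, RunWorkerCompletedEventArgs e)
{
    searchLabel.Text = string.Empty;
};
worker.RunWorkerAsync();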
http://msdn.microsoft.com/en-us/library/3s8xdz5c(VS.80).aspx
crawl-003
en
refinedweb
1. Introduction In the previous articles you saw a Singlecall remote object and a Singleton remote object. In this article I will show you the usage of Generics in the remote object and how the server will register it and how the client will consume it. For the previous articles, from the web site's home page select the remoting section from the side bar and navigate. You will see the other good articles on this topic from other authors also. Let us first begin with the server. I suggest that you first read the basic article here. It will be easy to understand this article once you know the basics. Search Tags: Search the below tags in the downloaded application to know the Sequence of code changes. //Server 0 //Client 0 2. The Generic Interface Start a Visual C# console project called GenRemSrv. Once the project is started, add a generic interface. Our remote object will implement this interface. Below is the code: //Server 001: Generic Interface which has only one method takes and //generic type and return same kind of generic type public interface IGenericIface<T> { void AddData(T Data); string GetData(); } Note the usage of the Letter T. It indicates that the function accepts any data type. In our example we are going to use this interface for int as well as string data types. 3. Remote Class using Generic Interface Add a new class to the GenRemSrv Project and Name it InputKeeper<T>. Here, once again the T stands for some data type. Also note how the generic interface is inherited here by specifying the T substitution. Below is the code: //Server 002: Public class that implements MarshalbyRef and the Generic interface public class InputKeeper<T> : MarshalByRefObject, IGenericIface<int> , IGenericIface<string> Next the constructor and the variables required for this are coded. Below is the code for that: //Server 003: Variable declaration int CollectedInt; string CollectedString; //Server 004: Constructor public InputKeeper() CollectedInt = 0; CollectedString = ""; System.Console.WriteLine("Input Keeper Constructoed"); Finally the interface functions are implemented as shown below: /; In the above code, note that the Adddata function is implemented twice; once using the int data type and again using the string data type. As we derived the class from the generic interface that supports both the data types int and string (IGenericIface<int> , IGenericIface<string>) it becomes necessory to implement the interface generic function void AddData(T Data); twice by substituting the required data types. 4. Hosting the remote objects I hope you read my first article. I am not going to explain everything, which I already explained in the article here. As our remote object itself a generic object (InputKeeper<T>), we need to register the object resolving the type T. In our case, we are using two different types integer and string and so we need two registerations. The code below registers the InputKeeper for both the data types on the TCP channel identified by the port 14750. /(); 5. The client application Add a new visual C# windows application and Name the project Generic User. Use the File add new project without closing the server project so that both the projects are available under a solution. The form design is shown below: The first send button will contact the generic remote object for integer and second send button will contact the generic remote object for string. The data collected will be displayed on the multi-select list box. 
Also note that we have two generic objects on the server's remote pool and they are independent. Enter some integer on the left group box and click send button and type some other integer then click the send button again. Do the same for the string also. This is just for the test and data is persisted on each object in their contexts and we registered the object as singleton. Provide the reference for Remoting runtime and the server project as we already did in our first .Net remoting project. The reference to that article is given in the introduction section. 1) Include the following Namespace for the form code: //Client 001: Namespace inclution using System.Runtime.Remoting; using System.Runtime.Remoting.Channels; using System.Runtime.Remoting.Channels.Tcp; using GenRemSrv; 2) Client declared generic interfaces. One interface is for integer and the other one is for string. The integer interface is used by the left send button click and right send button will use string interface. //Client 002: Object declaration using the generic types private IGenericIface<int> intObject; private IGenericIface<string> stringObject; 3) In the Form Load event handler, after registering the communication channel, the remote objects are retrieved (Proxy) and strored in the interface class members declared in the previous step. /) The left and right send buttons will make a call to the relevant remote object (Generic object). Remember, the function exposed by our remote generic interface is AddData and GetData. The AddData for integer simply performs summation of supplied integer and AddData for string will simply append the given string input. Note that for simplicity I collected the data in string and integer variables in the server. Collecting the input in a variable of Type T should do the proper implementation. For simplicity I used CollectedInt and CollectedString. However, I hope this will explain how to use generic remote objects in remoting environment. The code below is(); Running the application is shown below: Note: The attached solution is created in VS2005. If you have latest version say yes to the conversion UI displayed. .NET Remoting - Generic Remote Objects and Generic Interfaces Prerequirement of REMOTING English is just a language Man. Forget it Good,your remoting technology very good!:-) My english is bad!Please forgive me! Thanks!
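As the article notes, collecting the input in a variable of type T is the more general implementation. The sketch below is one hedged way to do that; it is not the article's code, and each closed type (InputKeeper<int>, InputKeeper<string>) must still be registered separately on the server, exactly as shown in the hosting section above.
// Sketch only: a single generic implementation instead of implementing
// IGenericIface<int> and IGenericIface<string> side by side.
using System;
using System.Collections.Generic;
using System.Text;

public class InputKeeper<T> : MarshalByRefObject, IGenericIface<T>
{
    private readonly List<T> collected = new List<T>();

    public void AddData(T data)
    {
        collected.Add(data);        // keep every input, whatever T is
    }

    public string GetData()
    {
        // Concatenate whatever has been collected; the article's int version
        // summed instead, so this is a variation, not a drop-in replacement.
        StringBuilder result = new StringBuilder();
        foreach (T item in collected)
        {
            result.Append(item).Append(' ');
        }
        return result.ToString().Trim();
    }
}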
http://www.c-sharpcorner.com/uploadfile/6897bc/net-remoting-generic-remote-objects-and-generic-interfaces/
crawl-003
en
refinedweb
Bad structure? [C]
Started by charles-eng, Feb 11 2012 05:27 PM, 2 replies to this topic

#1 Posted 11 February 2012 - 05:27 PM
Hi, I've been trying to compile a program that uses the following structure but I keep getting errors. What's wrong?
struct node
{
    char map[3][3]={{0,0,0},{0,0,0},{0,0,0}};
    struct node *father;
    struct node *nextLev;
    nextLev=NULL;
    father=NULL;
};
Thanks in advance.

#2 Posted 11 February 2012 - 05:54 PM
struct node
{
    char map[3][3];// = {{0,0,0},{0,0,0},{0,0,0}}
    struct node *father;
    struct node *nextLev;
};
I don't think you can do initialization in the structure definition? Someone can correct me if I'm wrong. This compiles:
#include <stdio.h>
#include <stdlib.h>
struct node
{
    char map[3][3];// = {{0,0,0},{0,0,0},{0,0,0}}
    struct node *father;
    struct node *nextLev;
};
int main()
{
    struct node n;
    int x;
    int y;
    for (x = 0; x < 3; x++)
    {
        for ( y = 0; y < 3; y++)
        {
            n.map[x][y] = 0;
        }
    }
    return 0;
}

#3 Posted 14 February 2012 - 01:36 PM
Thanks, it was that. You cannot initialize members inside a structure definition in C.
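What the replies boil down to in practice: the initializer belongs on the variable, not in the type definition. A small sketch (C89-compatible) of two common ways to get a zeroed node:
#include <stddef.h>   /* NULL */
#include <string.h>   /* memset */

struct node {
    char map[3][3];
    struct node *father;
    struct node *nextLev;
};

int main(void)
{
    /* 1) Initialize the variable at its declaration. */
    struct node a = { { {0,0,0}, {0,0,0}, {0,0,0} }, NULL, NULL };

    /* 2) Or zero the whole object after the fact. */
    struct node b;
    memset(&b, 0, sizeof b);

    (void)a; (void)b;
    return 0;
}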
http://forum.codecall.net/topic/68227-bad-structure-c/
crawl-003
en
refinedweb
Principles, Patterns, and Practices: The Strategy, Template Method, and Bridge Patterns The Strategy, Template Method, and Bridge Patterns One of the great benefits of object-oriented programming is polymorphism; i.e., the ability to send a message to an object without knowing the true type of the object. Perhaps no pattern illustrates this better than the Strategy pattern. To illustrate the Strategy pattern let's assume that we are working on a debug logger. Debug loggers are often very useful devices. Programmers can send messages to these loggers at strategic places within the code. If the system misbehaves in some way, the debug log can provide clues about what the system was doing internally at the time of the failure. In order to be effective, loggers need to be simple for programmers to use. Programmers aren't going to frequently use something that is inconvenient. You should be able to emit a log message with something no more complicated than: logger.log("My Message"); On the other hand, what we want to see in the log is quite a bit more complex. At very least we are going to want to see the time and date of the message. We'll also probably want to see the thread ID. Indeed, there may be a whole laundry list of system states that we want to log along with the message. So the logger needs to gather all of this peripheral information together, format it into a log message, and then add it to the growing list of logged messages. Where should the logged messages be stored? Sometimes we might like them stored in a text file. Sometimes we might want to see them added to a database table. Sometimes we might like them accumulated in RAM. The choices seem endless. However, the final destination of the logged messages has nothing to do with the format of the messages themselves. We have two algorithms: one formats the logged message, and the other records the logged message. These two algorithms are both in the flow of logging a message, but both can vary independently of each other. The formatter does not care where the message is recorded, and the recorder does not care about the format of the message. Whenever we have two connected but independent algorithms, we can use the Strategy pattern to connect them. Consider the following structure: Here the user calls the log method of the Logger class. The log method formats the message and then calls the record method of the Recorder interface. There are many possible implementations of the Recorder interface. Each does the recording in a different way. The structure of the Logger and Recorder is exemplified by the following unit test, which uses the Adapter pattern: public class LoggerTest extends TestCase { private String recordedMessage; protected String message; public void testLogger() throws Exception { Logger logger = new Logger(new Recorder() { public void record(String message) { recordedMessage = message; } }); message = "myMessage"; logger.log(message); checkFormat(); } private void checkFormat() { String datePattern = "\\d{2}/\\d{2}/\\d{4} \\d{2}:\\d{2}:\\d{2}.\\d{3}"; String messagePattern = datePattern + " " + message; if(!Pattern.matches(messagePattern, recordedMessage)) { fail(recordedMessage + " does not match pattern"); } } } As you can see, the Logger is constructed with an instance of an object that implements the Recorder interface. Logger does not care what that implementation does. It simply builds the string to be logged and then calls the record method. This is very powerful decoupling. 
It allows the formatting and recording algorithms to change independently of each other. Loggeris a simple class that simply formats the message and forwards it to the Recorder. public class Logger { private Recorder recorder; public Logger(Recorder recorder) { this.recorder = recorder; } public void log(String message) { DateFormat format = new SimpleDateFormat("MM/dd/yyyy kk:mm:ss.SSS"); Date now = new Date(); String prefix = format.format(now); recorder.record(prefix + " " + message); } } And Recorder is an even simpler interface. public interface Recorder { void record(String message); } The canonical form of the Strategy pattern is shown below. One algorithm (the context) is shielded from the other (the strategy) by an interface. The context is unaware of how the strategy is implemented, or of how many different implementations there are. The context typically holds a pointer or reference to the strategy object with which it was constructed. In our Logger example, the Logger is the context, the Recorder is the strategy interface, and the anonymous inner class within the unit test acts as one of the implemented strategies. If you have been an object-oriented programmer for any length of time, you have seen this pattern many times. Indeed, it is so common that some folks shake their heads and wonder why it even has a name. It's rather like giving the name "DO NEXT STATEMENT" to the fact that execution proceeds statement by statement. However, there is a good reason to give this pattern a name. It turns out that there is another pattern that solves the same problem in a slightly different way; and the two names help us differentiate between them. This second pattern is called Template Method, and we can see it by adding the next obvious layer of polymorphism to the Logger example. We already have one layer that allows us to change the way log messages are recorded. We could add another layer to allow us to change how log messages are formatted. Let's suppose, for instance, that we want to support two different formats. One prepends the time and date to the message as above; the other prepends only the time. Clearly, this is just another problem in polymorphism, and we could use the Strategy pattern once again. If we did, the design might look like this: Here we see two uses of the Strategy pattern. One provides polymorphic recording and the other provides polymorphic formatting. This is a common enough solution, but it is not the only solution. Indeed, we might have opted for a solution that looked more like this: Notice the format method of Logger. It is protected (that's what the # means) and it is abstract (that's what the italics mean). The log method of Logger calls its own abstract format method, which deploys to one of the derived classes. The formatted string is then passed to the record method of the Recorder. Consider the unit test below. It shows tests for both the TimeLogger and the TimeDateLogger. Notice that each test method creates the appropriate Logger derivative and passes a Recorder instance into it. 
import junit.framework.TestCase; import java.util.regex.Pattern; public class LoggerTest extends TestCase { private String recordedMessage; protected String message; private static final String timeDateFormat = "\\d{2}/\\d{2}/\\d{4} \\d{2}:\\d{2}:\\d{2}.\\d{3}"; private static final String timeFormat = "\\d{2}:\\d{2}:\\d{2}.\\d{3}"; private Recorder recorder = new Recorder() { public void record(String message) { recordedMessage = message; } }; public void testTimeDateLogger() throws Exception { Logger logger = new TimeDateLogger(recorder); message = "myMessage"; logger.log(message); checkFormat(timeDateFormat); } public void testTimeLogger() throws Exception { Logger logger = new TimeLogger(recorder); message = "myMessage"; logger.log(message); checkFormat(timeFormat); } private void checkFormat(String prefix) { String messagePattern = prefix + " " + message; if (!Pattern.matches(messagePattern, recordedMessage)) { fail(recordedMessage + " does not match pattern"); } } } The Logger has changed as follows. Notice the protected abstract format method. public abstract class Logger { private Recorder recorder; public Logger(Recorder recorder) { this.recorder = recorder; } public void log(String message) { recorder.record(format(message)); } protected abstract String format(String message); } TimeLoggerand TimeDateLoggersimply implement the format method appropriate to their type, as shown below: import java.text.*; import java.util.Date; public class TimeLogger extends Logger { public TimeLogger(Recorder recorder) { super(recorder); } protected String format(String message) { DateFormat format = new SimpleDateFormat("kk:mm:ss.SSS"); Date now = new Date(); String prefix = format.format(now); return prefix + " " + message; } } import java.text.*; import java.util.Date; public class TimeDateLogger extends Logger { public TimeDateLogger(Recorder recorder) { super(recorder); } protected String format(String message) { DateFormat format = new SimpleDateFormat("MM/dd/yyyy kk:mm:ss.SSS"); Date now = new Date(); String prefix = format.format(now); return prefix + " " + message; } } The canonical form of Template Method looks like this: The Context class has at least two functions. One (here called function) is generally public, and represents some high-level algorithm. The other function (here called subFunction) represents some lower-level algorithm called by the higher-level algorithm. The derivatives of Context implement subFunction in different ways. It should be clear how Strategy and Template Method solve the same problem. The problem is simply to separate a high-level algorithm from a lower-level algorithm in such a way that the two can vary independently. In the case of Strategy, this is solved by creating an interface for the lower-level algorithm. In the Template Method case, it is solved by creating an abstract method. Strategy is preferable to Template Method when the lower-level algorithm needs to change at run time. This can be accomplished with Strategy simply by swapping in an instance of a different derivative. Template Method is not so fortunate; once created, its lower-level algorithm is locked in. On the other hand, Strategy has a slight time and space penalty compared to Template Method, and is more complex to set up. So Strategy should be used when flexibility is important, and Template Method should be used when time and space efficiency and simplicity are more important. Could we have used Template Method to solve the whole Logger problem? Yes, but the result is not pleasant. 
Consider the following diagram. Notice that there is one derivative for each possible combination. This is the dreaded m x n problem. Given two polymorphic degrees of freedom (e.g., recording and format) the number of derivatives is the product of those degrees. This problem is common enough that the combined use of Strategy and Template Method to solve it (as we did in the previous example) is a pattern in and of itself, called Bridge.
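A short usage sketch (not from the original article) showing how the two independent axes of the Logger example compose at the call site: pick a formatting subclass on one axis and plug any Recorder into the other.
// Console-recording strategy combined with the time-only formatting subclass.
Recorder toConsole = new Recorder() {
    public void record(String message) {
        System.out.println(message);
    }
};
Logger logger = new TimeLogger(toConsole);   // swap in TimeDateLogger, or a file/database Recorder, independently
logger.log("server started");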
http://today.java.net/pub/a/today/2004/10/29/patterns.html
crawl-003
en
refinedweb
Conditional Marks a conditional method whose execution depends on a specified preprocessing identifier. Parameters conditionalSymbol The preprocessing identifier whose definition at the point of a call determines whether calls to the method are included or omitted. Applies To Method declarations. Remarks The Conditional attribute is a multiuse attribute. Conditional is an alias for System.Diagnostics.ConditionalAttribute. Wherever a conditional method is called, the presence or absence of the preprocessing symbol specified by conditionalSymbol at the point of the call determines whether the call is included or omitted. If the symbol is defined, the call is included; otherwise, the call is omitted. Conditional methods provide a cleaner, more elegant alternative to enclosing the method call in #if conditionalSymbol...#endif preprocessor directives. A conditional method must be a method in a class or struct declaration and must have a return type of void (for other restrictions, see 17.4.2 The Conditional attribute). If a method has multiple Conditional attributes, a call to the method is included if at least one of the conditional symbols is defined (in other words, the symbols are logically ORed together). To achieve the effect of logically ANDing symbols, you can define serial conditional methods: Call IfAandB; if both A and B are defined, AandBPrivate will execute. Example
// cs_attribute_conditional.cs
#define DEBUG
using System;
using System.Diagnostics;
public class Trace
{
    [Conditional("DEBUG")]
    public static void Msg(string msg)
    {
        Console.WriteLine(msg);
    }
}
class Test
{
    static void A( )
    {
        Trace.Msg("Now in A.");
        B( );
    }
    static void B( )
    {
        Trace.Msg("Now in B.");
    }
    public static void Main( )
    {
        Trace.Msg("Now in Main.");
        A( );
        Console.WriteLine("Done.");
    }
}
Output Now in Main. Now in A. Now in B. Done. If you compile the sample with the first line omitted (or changed to #undef DEBUG), the output will be: Done. See Also C# Attributes | Conditional Methods Tutorial | C# Preprocessor Directives
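A sketch of the "serial conditional methods" technique described under Remarks. This is an illustration of the idea rather than the original product listing; the class name is arbitrary, and it assumes the same using directives as the sample above (System, System.Diagnostics).
public class AndDemo
{
    [Conditional("A")]
    public static void IfAandB()
    {
        AandBPrivate();   // this call is compiled only when B is defined
    }

    [Conditional("B")]
    static void AandBPrivate()
    {
        Console.WriteLine("Runs only when both A and B are defined.");
    }
}
// Callers invoke IfAandB(); that call is compiled only when A is defined,
// so AandBPrivate's body executes only when both symbols are present.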
http://msdn.microsoft.com/en-us/library/4xssyw96(v=vs.71).aspx
crawl-003
en
refinedweb
Simple reusable user storing only uuid and manage JWT Token auth_uuidentication in remote Project description Simple reusable user storing only uuid and manage JWT Token auth_uuidentication in remote Documentation The full documentation is at. Quickstart Install django-simple-user: pip install simple_user Add it to your INSTALLED_APPS: INSTALLED_APPS = ( ... 'auth_uuid.apps.AuthConfig', ... ) Add django-simple-user's URL patterns: from auth_uuid import urls as auth_uuid_urls urlpatterns = [ ... url(r'^', include(auth_uuid_urls)), ... ] Features - TODO Running Tests Does the code actually work? source <YOURVIRTUALENV>/bin/activate (myenv) $ pip install tox (myenv) $ tox Credits Tools used in rendering this package: History 0.1.0 (2018-10-04) - First release on PyPI.
https://pypi.org/project/simple-user/0.5.0/
CC-MAIN-2019-51
en
refinedweb
This section gives an overview of the components of Jetty you typically configure using the mechanisms outlined in the previous section. Jetty Architecture describes the structure of a Jetty server, which is good background reading to understand configuration, and is vital if you want to change the structure of the server as set up by the default configurations in the Jetty distribution. However, for most purposes, configuration is a matter of identifying the correct configuration file and modifying existing configuration values. The Server instance is the central coordination object of a Jetty server; it provides services and life cycle management for all other Jetty server components. In the standard Jetty distribution, the core server configuration is in the etc/jetty.xml file, but you can mix in other server configurations which can include: start.ini or start.d/server.ini. The etc/jetty.xml file is a Handler Collection containing a Context Handler Collection and the Default Handler. The Context Handler Collection selects the next handler by context path and is where deployed Context Handler and Web Application Contexts are added to the handler tree. The Default Handler handles any requests not already handled and generates the standard 404 page. Other configuration files may add handlers to this tree (for example, jetty-rewrite.xml, jetty-requestlog.xml) or configure components to hot deploy handlers (for example, jetty-deploy.xml). start.ini or start.d/server.ini control, among other things, the sending of dates and versions in HTTP responses. A Jetty Server Connector is a network end point that accepts connections for one or more protocols which produce requests and/or messages for the Jetty server. In the standard Jetty server distribution, several provided configuration files add connectors to the server for various protocols and combinations of protocols: http.ini, https.ini and jetty-http2.xml. The configuration needed for connectors is typically: the port (the jetty.http.port or jetty.ssl.port property, which if not found defaults to 8080, or 8443 for TLS); the host (the jetty.host property); and an HttpConfiguration instance that contains common HTTP configuration that is independent of the specific wire protocol used. Because these values are often common to multiple connector types, the standard Jetty Server distribution creates a single HttpConfiguration in the jetty.xml file which is used via the XML Ref element in the specific connector files. Note Virtual hosts are not configured on connectors. You must configure individual contexts with the virtual hosts to which they respond. Note Prior to Jetty 9, the type of the connector reflected both the protocol supported (HTTP, HTTPS, AJP, SPDY), and the nature of the implementation (NIO or BIO). From Jetty 9 onwards there is only one prime Connector type (ServerConnector), which is NIO based and uses Connection Factories to handle one or more protocols. A Jetty context is a handler that groups other handlers under a context path together with associated resources and is roughly equivalent to the standard ServletContext API. A context may contain either standard Jetty handlers or a custom application handler. Note The servlet specification defines a web application.
In Jetty a standard web application is a specialized context that uses a standard layout and WEB-INF/web.xmlto instantiate and configure classpath, resource base and handlers for sessions, security, and servlets, plus servlets for JSPs and static content. Standard web applications often need little or no additional configuration, but you can also use the techniques for arbitrary contexts to refine or modify the configuration of standard web applications. Configuration values that are common to all contexts are: The contextPath is a URL prefix that identifies which context a HTTP request is destined for. For example, if a context has a context path /foo, it handles requests to /foo, /foo/index.html, /foo/bar/, and /foo/bar/image.png but it does not handle requests like /, /other/, or /favicon.ico. A context with a context path of / is called the root context. The context path can be set by default from the deployer (which uses the filename as the basis for the context path); or in code; or it can be set by a Jetty IoC XML that is either applied by the deployer or found in the WEB-INF/jetty-web.xml file of a standard web app context. WEB-INF/liband WEB-INF/classesdirectory and has additional rules about delegating classloading to the parent classloader. All contexts may have additional classpath entries added. javax.servlet.context.tempdiris used to pass the File instance that represents the assigned temporary directory for a web application. In an embedded server, you configure contexts by directly calling the ContextHandler API as in the following example: // // ======================================================================== // org.eclipse.jetty.server.Server; import org.eclipse.jetty.server.handler.ContextHandler; public class OneContext { public static Server createServer(int port) { Server server = new Server(port); // Add a single handler on context "/hello" ContextHandler context = new ContextHandler(); context.setContextPath("/hello"); context.setHandler(new HelloHandler()); // Can be accessed using server.setHandler(context); return server; } public static void main(String[] args) throws Exception { int port = ExampleUtil.getPort(args, "jetty.http.port", 8080); Server server = createServer(port); // Start the server server.start(); server.join(); } } You can create and configure a context entirely by IoC XML (either Jetty’s or Spring). The deployer discovers and hot deploys context IoC descriptors like the following which creates a context to serve the Javadoc from the Jetty distribution: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" ""> <!-- Configure a custom context for serving javadoc as static resources --> <Configure class="org.eclipse.jetty.server.handler.ContextHandler"> <Set name="contextPath">/javadoc</Set> <Set name="resourceBase"><SystemProperty name="jetty.home" default="."/>/javadoc/</Set> <Set name="handler"> <New class="org.eclipse.jetty.server.handler.ResourceHandler"> <Set name="welcomeFiles"> <Array type="String"> <Item>index.html</Item> </Array> </Set> <Set name="cacheControl">max-age=3600,public</Set> </New> </Set> </Configure> The servlet specification defines a web application, which when packaged as a zip is called WAR file (Web application ARchive). Jetty implements both WAR files and unpacked web applications as a specialized context that is configured by means of: WEB-INF/liband classes found in WEB-INF/classes. 
WEB-INF/web.xmldeployment descriptor which is parsed to define and configure init parameters, filters, servlets, listeners, security constraints, welcome files and resources to be injected. web.xmlformat deployment descriptor provided either by Jetty or in configuration configures the JSP servlet and the default servlet for handling static content. The standard web.xmlmay override the default web.xml. WEB-INF/libcan declare additional filters, servlets and listeners. WEB-INF/libcan declare additional init parameters, filters, servlets, listeners, security constraints, welcome files and resources to be injected. WEB-INF/jetty-web.xmlfile may contain Jetty IoC configuration to configure the Jetty specific APIs of the context and handlers. Because these configuration mechanisms are contained within the WAR file (or unpacked web application), typically a web application contains much of its own configuration and deploying a WAR is often just a matter of dropping the WAR file in to the webapps directory that is scanned by the Jetty deployer. If you need to configure something within a web application, often you do so by unpacking the WAR file and editing the web.xml and other configuration files. However, both the servlet standard and some Jetty features allow for other configuration to be applied to a web application externally from the WAR: web.xmlformat, to be set on a context (via code or IoC XML) to amend the configuration set by the default and standard web.xml. The web application standard provides no configuration mechanism for a web application or WAR file to set its own contextPath. By default the deployer uses conventions to set the context path: If you deploy a WAR file called foobar.WAR, the context path is /foobar; if you deploy a WAR file called ROOT.WAR the context path is /. However, it is often desirable to explicitly set the context path so that information (for example, version numbers) may be included in the filename of the WAR. Jetty allows the context Path of a WAR file to be set internally (by the WAR itself) or externally (by the deployer of the WAR). To set the contextPath from within the WAR file, you can include a WEB-INF/jetty-web.xml file which contains IoC XML to set the context path: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" ""> <Configure class="org.eclipse.jetty.webapp.WebAppContext"> <Set name="contextPath">/contextpath</Set> </Configure> Alternately, you can configure the classpath externally without the need to modify the WAR file itself. Instead of allowing the WAR file to be discovered by the deployer, an IoC XML file may be deployed that both sets the context path and declares the WAR file that it applies to: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" ""> <Configure class="org.eclipse.jetty.webapp.WebAppContext"> <Set name="war"><SystemProperty name="jetty.home" default="."/>/webapps/test.war</Set> <Set name="contextPath">/test</Set> </Configure> An example of setting the context path is included with the Jetty distribution in $JETTY_HOME/webapps/test.xml. Jetty is capable of deploying a variety of Web Application formats. This is accomplished via scans of the ${jetty.base}/webapps directory for contexts to deploy. A Context can be any of the following: .war"). {dir}/WEB-INF/web.xmlfile). 
The new WebAppProvider will attempt to avoid double deployments during the directory scan with the following heuristics: ".") are ignored ".d"are ignored foo/and foo.war), then the directory is assumed to be the unpacked WAR and only the WAR is deployed (which may reuse the unpacked directory) foo/and foo.xml), then the directory is assumed to be an unpacked WAR and only the XML is deployed (which may use the directory in its own configuration) foo.warand foo.xml), then the WAR is assumed to be configured by the XML and only the XML is deployed. Note In prior versions of Jetty there was a separate ContextDeployer that provided XML-based deployment. As of Jetty 9 the ContextDeployer no longer exists and its functionality has been merged with the new WebAppProvider to avoid double deployment scenarios. The authentication method and realm name for a standard web application may be set in the web.xml deployment descriptor with elements like: ... <login-config> <auth-method>BASIC</auth-method> <realm-name>Test Realm</realm-name> </login-config> ... This example declares that the BASIC authentication mechanism will be used with credentials validated against a realm called "Test Realm." However the standard does not describe how the realm itself is implemented or configured. In Jetty, there are several realm implementations (called LoginServices) and the simplest of these is the HashLoginService, which can read usernames and credentials from a Java properties file. To configure an instance of HashLoginService that matches the "Test Realm" configured above, the following $JETTY_BASE/etc/test-realm.xml IoC XML file should be passed on the command line or set in start.ini or start.d/server.ini. <?xml version="1.0"?><!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" ""> <Configure id="Server" class="org.eclipse.jetty.server.Server"> <!-- =========================================================== --> <!-- Configure Authentication Login Service --> <!-- Realms may be configured for the entire server here, or --> <!-- they can be configured for a specific web app in a context --> <!-- configuration (see $(jetty.home)/webapps/test.xml for an --> <!-- example). --> <!-- =========================================================== --> <Call name="addBean"> <Arg> <New class="org.eclipse.jetty.security.HashLoginService"> <Set name="name">Test Realm</Set> <Set name="config"><Property name="jetty.demo.realm" default="etc/realm.properties"/></Set> <Set name="hotReload">false</Set> </New> </Arg> </Call> <Get class="org.eclipse.jetty.util.log.Log" name="rootLogger"> <Call name="warn"><Arg>demo test-realm is deployed. DO NOT USE IN PRODUCTION!</Arg></Call> </Get> </Configure> This creates and configures the LoginService as an aggregate bean on the server. When a web application is deployed that declares a realm called "Test Realm," the server beans are searched for a matching Login Service.
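For reference, the property file named by jetty.demo.realm above (etc/realm.properties) uses HashLoginService's simple "user: password, roles" layout. The entries below are only illustrative; choose your own user names, credentials and role names (passwords may also be given in Jetty's obfuscated or hashed forms such as OBF: or MD5: rather than plain text):
# Format: <username>: <password>[,<rolename> ...]
jetty: jetty,user
admin: secretpassword,server-administrator,user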
https://www.eclipse.org/jetty/documentation/9.4.x/quickstart-config-what.html
CC-MAIN-2019-51
en
refinedweb
Random object choosing - Class library usage, coding and language questions.

Hi, I have an issue I need to find a solution to: I need to click on a random directory, and afterwards click on a random file. How can I choose a random object with Ranorex? Thx!

Re: Random object choosing
Ranorex is built on .NET. You can use the .NET Random class to choose a number between 0 and 1 and then multiply by the number of items in the list. This will give you the zero-based index of the folder to pick. Then add 1 (XPath uses 1-based indexing) and store the result in a module variable. You can use that variable in your item's path like ".//element[$FolderIndex]".

Re: Random object choosing
I would like to add some hint on directories: First make sure that the proper namespaces are used:
using System;
using System.IO;
To count the number of directories in the C:\ directory:
int countDir = System.IO.Directory.GetDirectories("C:\\").Length;
To count the files in a directory:
string targetDirectory = "C:\\";
string[] fileEntries = Directory.GetFiles(targetDirectory);
int countFiles = fileEntries.Length;
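A sketch of the first reply's suggestion in code form. The names below are placeholders: the repository item is assumed to have a RanoreXPath ending in .//element[$FolderIndex] as described above, and the repo.MyApp.* identifiers are hypothetical.
using System;
using System.IO;

Random random = new Random();

// Count the candidate folders (here, directories on the data drive, as in the hint above).
string[] dirs = Directory.GetDirectories("C:\\");

// Random.Next(n) returns 0..n-1; add 1 because RanoreXPath indexing is 1-based.
int folderIndex = random.Next(dirs.Length) + 1;

// Bind the module variable used in the repository item's path, then click the item.
repo.MyApp.FolderIndex = folderIndex.ToString();   // hypothetical repository variable
repo.MyApp.RandomFolder.Click();                   // hypothetical repository item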
https://www.ranorex.com/forum/random-object-choosing-t6414.html
CC-MAIN-2019-51
en
refinedweb
Copy a string #include <string.h> char* strcpy( char* dst, const char* src ); libc Use the -l c option to qcc to link against this library. This library is usually included automatically. The strcpy() function copies the string pointed to by src (including the terminating NUL character) into the array pointed to by dst. The same pointer as dst. #include <stdio.h> #include <string.h> #include <stdlib.h> int main( void ) { char buffer[80]; strcpy( buffer, "Hello " ); strcat( buffer, "world" ); printf( "%s\n", buffer ); return EXIT_SUCCESS; } produces the output: Hello world
http://www.qnx.com/developers/docs/qnxcar2/topic/com.qnx.doc.neutrino.lib_ref/topic/s/strcpy.html
CC-MAIN-2019-51
en
refinedweb
This document explains how to find and use data reported from the New Relic Kubernetes integration. Find and use data To view the Kubernetes integration's dashboard: - Go to infrastructure.newrelic.com > Kubernetes. - Select the Kubernetes dashboard link to open the Kubernetes dashboard. - To create your own dashboards, go to insights.newrelic.com and create NRQL queries. Kubernetes data is attached to the following event types. - To learn more about using integration data, see Understand and use integration data. - To add Kubernetes metadata to New Relic APM custom attributes, see Add Kubernetes metadata to APM. Manage alerts You can be notified about alert violations for your Kubernetes data: - Create an alert condition To create an alert condition for the Kubernetes integration: - Go to infrastructure.newrelic.com > Settings > Alerts > Kubernetes, then select Create alert condition. - To filter the alert to Kubernetes entities that only have the chosen attributes, select Filter. - Select the threshold settings. For more on the Trigger an alert when... options, see Alert types. - Select an existing alert policy, or create a new one. - Select Create. When an alert condition's threshold is triggered, New Relic sends a notification to the policy's notification channels. - Use alert types and thresholds To use any of the available Kubernetes-specific alert criteria, select the Kubernetes alert type: In addition, you can create an alert condition for any metric collected by any New Relic integration you use, including the Kubernetes integration: - Select the alert type Integrations. - From the Select a data source dropdown, select a Kubernetes (K8s) data source. - Select alert notifications When an alert condition's threshold is triggered, New Relic sends a message to the notification channel(s) chosen in the alert policy. Depending on the type of notification, you may have the following options: - View the incident. - Acknowledge the incident. - Go to a chart of the incident data by selecting the identifier name. The entity identifier that triggered the alert appears near the top of the notification message. The format of the identifier depends on the alert type: Available pods are less than desired pods alerts: K8s:CLUSTER_NAME:PARENT_NAMESPACE:replicaset:REPLICASET_NAME CPU or memory usage alerts: K8s:CLUSTER_NAME:PARENT_NAMESPACE:POD_NAME:container:CONTAINER_NAME Here are some examples. - Pod alert notification example For Available pods are less than desired pods alerts, the ID of the replica set triggering the issue might look like this: k8s:beam-production:default:replicaset:nginx-deployment-1623441481 This identifier contains the following information: - Cluster name: beam-production - Parent namespace: default - ReplicaSet name: nginx-deployment-1623441481 - Container resource notification example For container CPU or memory usage alerts, the entity might look like this: k8s:beam-production:kube-system:kube-state-metrics-797bb87c75-zncwn:container:kube-state-metrics This identifier contains the following information: - Cluster name: beam-production - Parent namespace: kube-system - Pod namespace: kube-state-metrics-797bb87c75-zncwn - Container name: kube-state-metrics - Create alert conditions using NRQL Follow standard procedures to create alert conditions for NRQL queries. Kubernetes attributes and metrics The Kubernetes integration collects the following metrics and other attributes. For more on using integration data, see Find and use data. 
Node data Query the K8sNodeSample event in New Relic Insights for node data: Namespace data Query the K8sNamespaceSample event in New Relic Insights for namespace data: Deployment data Query the K8sDeploymentSample event in New Relic Insights for deployment data: Replica set data Query the K8sReplicasetSample event in New Relic Insights for replica set data: Pod data Query the K8sPodSample event in New Relic Insights for pod data: Cluster data Query the K8sClusterSample event in New Relic Insights to see cluster data: Container data Query the K8sContainerSample event in New Relic Insights for container data: Volume data Query the K8sVolumeSample event in New Relic Insights for volume data: Volume data is available for volume plugins that implement the MetricsProvider interface: - AWSElasticBlockStore - AzureDisk - AzureFile - Cinder - Flexvolume - Flocker - GCEPersistentDisk - GlusterFS - iSCSI - StorageOS - VsphereVolume Kubernetes metadata in APM-monitored applications By linking your applications with Kubernetes, the following attributes are added to application trace and distributed trace: nodeName containerName podName clusterName deploymentName namespaceName
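As a concrete example of the Insights queries described above, a minimal NRQL query against one of these event types might look like the following. The event name comes from this page; the clusterName facet is an assumption based on the metadata attributes listed above.
-- Count pod samples reported in the last 30 minutes, broken down by cluster.
SELECT count(*) FROM K8sPodSample FACET clusterName SINCE 30 minutes ago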
https://docs.newrelic.com/docs/integrations/kubernetes-integration/understand-use-data/understand-use-data
CC-MAIN-2019-51
en
refinedweb
Julia theme for Sphinx based on ReadTheDocs theme. Installation Download the package or add it to your requirements.txt file: $ pip install sphinx_julia_theme In your conf.py file: import sphinx_julia_theme html_theme = "sphinx_julia_theme" html_theme_path = [sphinx_julia_theme.get_html_theme_path()] Configuration You can configure different parts of the theme. Project-wide configuration The theme's project-wide options are defined in the sphinx_julia_theme/theme.conf file of this repository, and can be defined in your project's conf.py via html_theme_options. For example: html_theme_options = { 'collapse_navigation': False, 'display_version': False, 'navigation_depth': 3, }
https://pypi.org/project/sphinx_julia_theme/0.0.2/
CC-MAIN-2019-51
en
refinedweb
score:8 Looking through the code under DefinitelyTyped it appears that children is typed as ReactNode. Example: type Props = { children: ReactNode } const MyComponent: FunctionComponent<Props> = ({ children }) => <>{React.Children.map(children, someMappingFunction)}</>; Note: The ReactNode type can be found in the React namespace: import React from 'react'; let someNode: React.ReactNode; score:0 Actually you don't have to specify children if you're using React.FunctionComponent. For example, the following code compiles without error: const MyComponent: React.FC<{}> = props => { return props.children } score:0 children is a prop like any other and can be of any type. It's only special insofar as child JSX elements are automatically mapped to the children prop. So, while it's usually declared as children?: React.ReactNode, you could declare it as a render prop, or even as a custom type like so: interface INameProps { children: { firstName: string, lastName: string } } const Name: React.FC<INameProps> = ({children}) => { return <div>{children.firstName} {children.lastName}</div>; } And then you can use it like so: <Name> { { firstName: "John", lastName: "Smith" } } </Name> Which is the same as: <Name children={ { firstName: "John", lastName: "Smith" } } /> score:1 It's not ReactNode but ReactElement<any, any> | null. interface FunctionComponent<P = {}> { (props: P, context?: any): ReactElement<any, any> | null; propTypes?: WeakValidationMap<P> | undefined; contextTypes?: ValidationMap<any> | undefined; defaultProps?: Partial<P> | undefined; displayName?: string | undefined; } Source: stackoverflow.com
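Worth adding as a hedged aside: recent versions of @types/react also ship a PropsWithChildren<P> helper, which is essentially P plus an optional children?: ReactNode, so the Props type above can often be written without declaring children by hand (check your @types/react version before relying on it):

import React, { PropsWithChildren } from 'react';

type Props = PropsWithChildren<{ title: string }>;

const Panel = ({ title, children }: Props) => (
  <section>
    <h2>{title}</h2>
    {children}
  </section>
);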
https://www.appsloveworld.com/reactjs/100/32/what-is-the-correct-typescript-type-for-react-children
CC-MAIN-2022-40
en
refinedweb
t_rcvudata(3nsl) [bsd man page] t_rcvudata(3NSL) Networking Services Library Functions t_rcvudata(3NSL) NAME t_rcvudata - receive a data unit SYNOPSIS #include <xti.h> int t_rcvudata(int fd, struct t_unitdata *unitdata, int *flags); by means of t_open(3NSL) or fcntl(2),. Subse- quent. RETURN VALUES Upon successful completion, a value of 0 is returned. Otherwise, a value of -1 is returned and t_errno is set to indicate an error. VALID STATES T_IDLE. ERRORSOUTSTATE A t_errno value that this routine can return under different circumstances than its XTI counterpart is TBUFOVFLW. It can be returned even when the maxlen field of the corresponding buffer has been set to zero._open(3NSL), t_rcvuderr(3NSL), t_sndudata(3NSL), attributes(5) SunOS 5.10 7 May 1998 t_rcvudata(3NSL)
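Since the DESCRIPTION text above is truncated, a minimal usage sketch may help. Treat it purely as an illustration: it assumes a connectionless transport endpoint already opened with t_open() and bound with t_bind(), and it should be checked against the full manual before use.

#include <xti.h>
#include <stdio.h>

/* fd is assumed to be a bound connectionless transport endpoint */
int receive_one_unit(int fd)
{
    struct t_unitdata *ud;
    int flags = 0;

    /* let the library size the buffers for this endpoint */
    ud = (struct t_unitdata *)t_alloc(fd, T_UNITDATA, T_ALL);
    if (ud == NULL)
        return -1;

    if (t_rcvudata(fd, ud, &flags) < 0) {
        if (t_errno == TBUFOVFLW)
            fprintf(stderr, "data unit larger than supplied buffers\n");
        t_free((char *)ud, T_UNITDATA);
        return -1;
    }

    /* ud->udata.buf now holds ud->udata.len bytes received from ud->addr */
    t_free((char *)ud, T_UNITDATA);
    return 0;
}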
https://www.unix.com/man-page/bsd/3NSL/t_rcvudata/
CC-MAIN-2022-40
en
refinedweb
Log generation Using pushParameterized Push logs to Loki with pushParameterized. This method generates batches of streams in a random fashion. This method requires three arguments: Javascript example code fragment: import loki from 'k6/x/loki'; const KB = 1024; const MB = KB * KB; const conf = loki.Config(""); const client = loki.Client(conf); export default () => { client.pushParameterized(2, 500 * KB, 1 * MB); }; Argument streams The first argument of the method is the desired amount of streams per batch. Instead of using a fixed amount of streams in each call, you can randomize the value to simulate a more realistic scenario. Javascript example code fragment: function randomInt(min, max) { return Math.floor(Math.random() * (max - min + 1) + min); }; export default () => { let streams = randomInt(2, 8); client.pushParameterized(streams, 500 * KB, 1 * MB); } Arguments minSize and maxSize The second and third argument of the method take the lower and upper bound of the batch size. The resulting batch size is a random value between the two arguments. This mimics the behaviour of a log client, such as Promtail or the Grafana Agent, where logs are buffered and pushed once a certain batch size is reached or after a certain size when no logs have been received. The batch size is not equal to the payload size, as the batch size only counts bytes of the raw logs. The payload may be compressed when Protobuf encoding is used. Log format xk6-loki can emit log lines in seven distinct formats. The label format of a stream defines the format of its log lines. - Apache common ( apache_common) - Apache combined ( apache_combined) - Apache error ( apache_error) - BSD syslog ( rfc3164) - Syslog ( rfc5424) - JSON ( json) - logfmt ( logfmt) Under the hood, the extension uses a fork the library flog for generating log lines. Labels xk6-loki uses the following label names for generating streams: By default, variable labels are not used. However, you can specify the cardinality (quantity of distinct label values) using the cardinality argument in the Config constructor. Javascript example code fragment: import loki from 'k6/x/loki'; const cardinality = { "app": 1, "namespace": 2, "language": 2, "pod": 5, }; const conf = loki.Config("", 5000, 1.0, cardinality); const client = loki.Client(conf); The total quantity of distinct streams is defined by the cartesian product of all label values. Keep in mind that high cardinality negatively impacts the performance of the Loki instance. Payload encoding Loki accepts two kinds of push payload encodings: JSON and Protobuf. While JSON is easier for humans to read, Protobuf is optimized for performance and should be preferred when possible. To define the ratio of Protobuf to JSON requests, the client configuration accepts values of 0.0 to 1.0. 0.0 means 100% JSON encoding, and 1.0 means 100% Protobuf encoding. The default value is 0.9. Javascript example code fragment: import loki from 'k6/x/loki'; const ratio = 0.8; // 80% Protobuf, 20% JSON const conf = loki.Config("", 5000, ratio); const client = loki.Client.
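Pulling the pieces above together, a complete test script might look like the following sketch. The positional arguments to loki.Config simply mirror the earlier examples on this page (endpoint, the 5000 value shown above, Protobuf ratio, cardinality map); the push sizes, cardinality values and k6 options are arbitrary choices for illustration.

import loki from 'k6/x/loki';

const KB = 1024;
const MB = KB * KB;

// standard k6 load shape; values here are placeholders
export const options = {
  vus: 10,
  duration: '5m',
};

const labelCardinality = {
  "app": 5,
  "namespace": 2,
};

// endpoint left empty as in the examples above; fill in your Loki push URL
const conf = loki.Config("", 5000, 0.9, labelCardinality);
const client = loki.Client(conf);

function randomInt(min, max) {
  return Math.floor(Math.random() * (max - min + 1) + min);
}

export default () => {
  // push a random number of streams with a batch size between 500KB and 1MB
  const streams = randomInt(2, 8);
  client.pushParameterized(streams, 500 * KB, 1 * MB);
};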
https://grafana.com/docs/enterprise-logs/latest/loki/clients/k6/log-generation/
CC-MAIN-2022-40
en
refinedweb
Using Machine Learning to Predict the Weather: Part 1 This is the first article of a multi-part series on using Python and Machine Learning to build models to predict weather temperatures based off data collected from Weather Underground. The series will be comprised of three different articles describing the major aspects of a Machine Learning project. The topics to be covered are: The data used in this series will be collected from Weather Underground's free tier API web service. I will be using the requests library to interact with the API to pull in weather data since 2015 for the city of Lincoln, Nebraska. Once collected, the data will need to be process and aggregated into a format that is suitable for data analysis, and then cleaned. The second article will focus on analyzing the trends in the data with the goal of selecting appropriate features for building a Linear Regression model using the statsmodels and scikit-learn Python libraries. I will discuss the importance of understanding the assumptions necessary for using a Linear Regression model and demonstrate how to evaluate the features to build a robust model. This article will conclude with a discussion of Linear Regression model testing and validation. The final article will focus on using Neural Networks. I will compare the process of building a Neural Network model, interpreting the results and, overall accuracy between the Linear Regression model built in the prior article and the Neural Network model. Weather Underground is a company that collects and distributes data on various weather measurements around the globe. The company provides a swath of API's that are available for both commercial and non-commercial uses. In this article, I will describe how to programmatically pull daily weather data from Weather Underground using their free tier of service available for non-commercial purposes. If you would like to follow along with the tutorial you will want to sign up for their free developer account here. This account provides an API key to access the web service at a rate of 10 requests per minute and up to a total of 500 requests in a day. Weather Underground provides many different web service API's to access data from but, the one we will be concerned with is their history API. The history API provides a summary of various weather measurements for a city and state on a specific day. The format of the request for the history API resource is as follows: API_KEY: The API_KEY that Weather Underground provides with your account YYYYMMDD: A string representing the target date of your request STATE: The two letter state abbreviation in the United States CITY: The name of the city associated with the state you requested To make requests to the Weather Underground history API and process the returned data I will make use of a few standard libraries as well as some popular third party libraries. Below is a table of the libraries I will be using and their description. For installation instructions please refer to the listed documentation. Let us get started by importing these libraries: from datetime import datetime, timedelta import time from collections import namedtuple import pandas as pd import requests import matplotlib.pyplot as plt Now I will define a couple of constants representing my API_KEY and the BASE_URL of the API endpoint I will be requesting. Note you will need to signup for an account with Weather Underground and receive your own API_KEY. By the time this article is published I will have deactivated this one. 
BASE_URL is a string with two place holders represented by curly brackets. The first {} will be filled by the API_KEY and the second {} will be replaced by a string formatted date. Both values will be interpolated into the BASE_URL string using the str.format(…) function. API_KEY = '7052ad35e3c73564' BASE_URL = "{}/history_{}/q/NE/Lincoln.json" Next I will initialize the target date to the first day of the year in 2015. Then I will specify the features that I would like to parse from the responses returned from the API. The features are simply the keys present in the history -> dailysummary portion of the JSON response. Those features are used to define a namedtuple called DailySummary which I'll use to organize the individual request's data in a list of DailySummary tuples. target_date = datetime(2016, 5, 16) features = ["date", "meantempm", "meandewptm", "meanpressurem", "maxhumidity", "minhumidity", "maxtempm", "mintempm", "maxdewptm", "mindewptm", "maxpressurem", "minpressurem", "precipm"] DailySummary = namedtuple("DailySummary", features) In this section I will be making the actual requests to the API and collecting the successful responses using the function defined below. This function takes the parameters url, api_key, target_date and days. def extract_weather_data(url, api_key, target_date, days): records = [] for _ in range(days): request = BASE_URL.format(API_KEY, target_date.strftime('%Y%m%d')) response = requests.get(request) if response.status_code == 200: data = response.json()['history']['dailysummary'][0] records.append(DailySummary( date=target_date, meantempm=data['meantempm'], meandewptm=data['meandewptm'], meanpressurem=data['meanpressurem'], maxhumidity=data['maxhumidity'], minhumidity=data['minhumidity'], maxtempm=data['maxtempm'], mintempm=data['mintempm'], maxdewptm=data['maxdewptm'], mindewptm=data['mindewptm'], maxpressurem=data['maxpressurem'], minpressurem=data['minpressurem'], precipm=data['precipm'])) time.sleep(6) target_date += timedelta(days=1) return records I start by defining a list called records which will hold the parsed data as DailySummary namedtuples. The for loop is defined so that it iterates over the loop for number of days passed to the function. Then the request is formatted using the str.format() function to interpolate the API_KEY and string formatted target_date object. Once formatted, the request variable is passed to the get() method of the requests object and the response is assigned to a variable called response. With the response returned I want to make sure the request was successful by evaluating that the HTTP status code is equal to 200. If it is successful then I parse the response's body into JSON using the json() method of the returned response object. Chained to the same json() method call I select the indexes of the history and daily summary structures then grab the first item in the dailysummary list and assign that to a variable named data. Now that I have the dict-like data structure referenced by the data variable I can select the desired fields and instantiate a new instance of the DailySummary namedtuple which is appended to the records list. Finally, each iteration of the loop concludes by calling the sleep method of the time module to pause the loop's execution for six seconds, guaranteeing that no more than 10 requests are made per minute, keeping us within Weather Underground's limits. 
Then the target_date is incremented by 1 day using the timedelta object of the datetime module so the next iteration of the loop retrieves the daily summary for the following day. Without further delay I will kick off the first set of requests for the maximum allotted daily request under the free developer account of 500. Then I suggest you grab a refill of your coffee (or other preferred beverage) and get caught up on your favorite TV show because the function will take at least an hour depending on network latency. With this we have maxed out our requests for the day, and this is only about half the data we will be working with. So, come back tomorrow where we will finish out the last batch of requests then we can start working on processing and formatting the data in a manner suitable for our Machine Learning project. records = extract_weather_data(BASE_URL, API_KEY, target_date, 500) Ok, now that it is a new day we have a clean slate and up to 500 requests that can be made to the Weather Underground history API. Our batch of 500 requests issued yesterday began on January 1st, 2015 and ended on May 15th, 2016 (assuming you didn't have any failed requests). Once again let us kick off another batch of 500 requests but, don't go leaving me for the day this time because once this last chunk of data is collected we are going to begin formatting it into a Pandas DataFrame and derive potentially useful features. # if you closed our terminal or Jupyter Notebook, reinitialize your imports and # variables first and remember to set your target_date to datetime(2016, 5, 16) records += extract_weather_data(BASE_URL, API_KEY, target_date, 500) Now that I have a nice and sizable records list of DailySummary named tuples I will use it to build out a Pandas DataFrame. The Pandas DataFrame is a very useful data structure for many programming tasks which are most popularly known for cleaning and processing data to be used in machine learning projects (or experiments). I will utilize the Pandas.DataFrame(...) class constructor to instantiate a DataFrame object. The parameters passed to the constructor are records which represent the data for the DataFrame, the features list I also used to define the DailySummary namedtuples which will specify the columns of the DataFrame. The set_index() method is chained to the DataFrame instantiation to specify date as the index. df = pd.DataFrame(records, columns=features).set_index('date') Machine learning projects, also referred to as experiments, often have a few characteristics that are a bit oxymoronic. By this I mean that it is quite helpful to have subject matter knowledge in the area under investigation to aid in selecting meaningful features to investigate paired with a thoughtful assumption of likely patterns in data. However, I have also seen highly influential explanatory variables and pattern arise out of having almost a naive or at least very open and minimal presuppositions about the data. Having the knowledge-based intuition to know where to look for potentially useful features and patterns as well as the ability to look for unforeseen idiosyncrasies in an unbiased manner is an extremely important part of a successful analytics project. In this regard, we have selected quite a few features while parsing the returned daily summary data to be used in our study. 
However, I fully expect that many of these will prove to be either uninformative in predicting weather temperatures or inappropriate candidates depending on the type of model being used but, the crux is that you simply do not know until you rigorously investigate the data. Now I can't say that I have significant knowledge of meteorology or weather prediction models, but I did do a minimal search of prior work on using Machine Learning to predict weather temperatures. As it turns out there are quite a few research articles on the topic and in 2016 Holmstrom, Liu, and Vo they describe using Linear Regression to do just that. In their article, Machine Learning Applied to Weather Forecasting, they used weather data on the prior two days for the following measurements. I will be expanding upon their list of features using the ones listed below, and instead of only using the prior two days I will be going back three days. So next up is to figure out a way to include these new features as columns in our DataFrame. To do so I will make a smaller subset of the current DataFrame to make it easier to work with while developing an algorithm to create these features. I will make a tmp DataFrame consisting of just 10 records and the features meantempm and meandewptm. tmp = df[['meantempm', 'meandewptm']].head(10) tmp Let us break down what we hope to accomplish, and then translate that into code. For each day (row) and for a given feature (column) I would like to find the value for that feature N days prior. For each value of N (1-3 in our case) I want to make a new column for that feature representing the Nth prior day's measurement. # 1 day prior N = 1 # target measurement of mean temperature feature = 'meantempm' # total number of rows rows = tmp.shape[0] # a list representing Nth prior measurements of feature # notice that the front of the list needs to be padded with N # None values to maintain the constistent rows length for each N nth_prior_measurements = [None]*N + [tmp[feature][i-N] for i in range(N, rows)] # make a new column name of feature_N and add to DataFrame col_name = "{}_{}".format(feature, N) tmp[col_name] = nth_prior_measurements tmp Ok so it appears we have the basic steps required to make our new features. Now I will wrap these steps up into a reusable function and put it to work building out all the desired features. def derive_nth_day_feature(df, feature, N): rows = df.shape[0] nth_prior_measurements = [None]*N + [df[feature][i-N] for i in range(N, rows)] col_name = "{}_{}".format(feature, N) df[col_name] = nth_prior_measurements Now I will write a loop to loop over the features in the feature list defined earlier, and for each feature that is not "date" and for N days 1 through 3 we'll call our function to add the derived features we want to evaluate for predicting temperatures. for feature in features: if feature != 'date': for N in range(1, 4): derive_nth_day_feature(df, feature, N) And for good measure I will take a look at the columns to make sure that they look as expected. df.columns Index(['meantempm', 'meandewptm', 'meanpressurem', 'maxhumidity', 'minhumidity', 'maxtempm', 'mintempm', 'maxdewptm', 'mindewptm', 'maxpressurem', 'minpressurem', 'precipm', ') Excellent! Looks like we have what we need. The next thing I want to do is assess the quality of the data and clean it up where necessary. As the section title says, the most important part of an analytics project is to make sure you are using quality data. 
The proverbial saying, "garbage in, garbage out", is as appropriate as ever when it comes to machine learning. However, the data cleaning part of an analytics project is not just one of the most important parts it is also the most time consuming and laborious. To ensure the quality of the data for this project, in this section I will be looking to identify unnecessary data, missing values, consistency of data types, and outliers then making some decisions about how to handle them if they arise. The first thing I want to do is drop any the columns of the DataFrame that I am not interested in to reduce the amount of data I am working with. The goal of the project is to predict the future temperature based off the past three days of weather measurements. With this in mind we only want to keep the min, max, and mean temperatures for each day plus all the new derived variables we added in the last sections. # make list of original features without meantempm, mintempm, and maxtempm to_remove = [feature for feature in features if feature not in ['meantempm', 'mintempm', 'maxtempm']] # make a list of columns to keep to_keep = [col for col in df.columns if col not in to_remove] # select only the columns in to_keep and assign to df df = df[to_keep] df.columns Index(['meantempm', 'maxtempm', 'mintempm', ') The next thing I want to do is to make use of some built in Pandas functions to get a better understanding of the data and potentially identify some areas to focus my energy on. The first function is a DataFrame method called info() which, big surprise… provides information on the DataFrame. Of interest is the "data type" column of the output. df.info() <class 'pandas.core.frame.DataFrame'> DatetimeIndex: 1000 entries, 2015-01-01 to 2017-09-27 Data columns (total 39 columns): meantempm 1000 non-null object maxtempm 1000 non-null object mintempm 1000 non-null object meantempm_1 999 non-null object meantempm_2 998 non-null object meantempm_3 997 non-null object meandewptm_1 999 non-null object meandewptm_2 998 non-null object meandewptm_3 997 non-null object meanpressurem_1 999 non-null object meanpressurem_2 998 non-null object meanpressurem_3 997 non-null object maxhumidity_1 999 non-null object maxhumidity_2 998 non-null object maxhumidity_3 997 non-null object minhumidity_1 999 non-null object minhumidity_2 998 non-null object minhumidity_3 997 non-null object maxtempm_1 999 non-null object maxtempm_2 998 non-null object maxtempm_3 997 non-null object mintempm_1 999 non-null object mintempm_2 998 non-null object mintempm_3 997 non-null object maxdewptm_1 999 non-null object maxdewptm_2 998 non-null object maxdewptm_3 997 non-null object mindewptm_1 999 non-null object mindewptm_2 998 non-null object mindewptm_3 997 non-null object maxpressurem_1 999 non-null object maxpressurem_2 998 non-null object maxpressurem_3 997 non-null object minpressurem_1 999 non-null object minpressurem_2 998 non-null object minpressurem_3 997 non-null object precipm_1 999 non-null object precipm_2 998 non-null object precipm_3 997 non-null object dtypes: object(39) memory usage: 312.5+ KB Notice that the data type of every column is of type "object". We need to convert all of these feature columns to floats for the type of numerical analysis that we hope to perform. To do this I will use the apply() DataFrame method to apply the Pandas to_numeric method to all values of the DataFrame. The error='coerce' parameter will fill any textual values to NaNs. 
It is common to find textual values in data from the wild which usually originate from the data collector where data is missing or invalid. df = df.apply(pd.to_numeric, errors='coerce') df.info() <class 'pandas.core.frame.DataFrame'> DatetimeIndex: 1000 entries, 2015-01-01 to 2017-09-27 Data columns (total 39 columns): meantempm 1000 non-null int64 maxtempm 1000 non-null int64 mintempm 1000 non-null int64 meantempm_1 999 non-null float64 meantempm_2 998 non-null float64 meantempm_3 997 non-null float64 meandewptm_1 999 non-null float64 meandewptm_2 998 non-null float64 meandewptm_3 997 non-null float64 meanpressurem_1 999 non-null float64 meanpressurem_2 998 non-null float64 meanpressurem_3 997 non-null float64 maxhumidity_1 999 non-null float64 maxhumidity_2 998 non-null float64 maxhumidity_3 997 non-null float64 minhumidity_1 999 non-null float64 minhumidity_2 998 non-null float64 minhumidity_3 997 non-null float64 maxtempm_1 999 non-null float64 maxtempm_2 998 non-null float64 maxtempm_3 997 non-null float64 mintempm_1 999 non-null float64 mintempm_2 998 non-null float64 mintempm_3 997 non-null float64 maxdewptm_1 999 non-null float64 maxdewptm_2 998 non-null float64 maxdewptm_3 997 non-null float64 mindewptm_1 999 non-null float64 mindewptm_2 998 non-null float64 mindewptm_3 997 non-null float64 maxpressurem_1 999 non-null float64 maxpressurem_2 998 non-null float64 maxpressurem_3 997 non-null float64 minpressurem_1 999 non-null float64 minpressurem_2 998 non-null float64 minpressurem_3 997 non-null float64 precipm_1 889 non-null float64 precipm_2 889 non-null float64 precipm_3 888 non-null float64 dtypes: float64(36), int64(3) memory usage: 312.5 KB Now that all of our data has the data type I want I would like to take a look at some summary stats of the features and use the statistical rule of thumb to check for the existence of extreme outliers. The DataFrame method describe() will produce a DataFrame containing the count, mean, standard deviation, min, 25th percentile, 50th percentile (or median), the 75th percentile and, the max value. This can be very useful information to evaluating the distribution of the feature data. I would like to add to this information by calculating another output column, indicating the existence of outliers. The rule of thumb to identifying an extreme outlier is a value that is less than 3 interquartile ranges below the 25th percentile, or 3 interquartile ranges above the 75th percentile. Interquartile range is simply the difference between the 75th percentile and the 25th percentile. # Call describe on df and transpose it due to the large number of columns spread = df.describe().T # precalculate interquartile range for ease of use in next calculation IQR = spread['75%'] - spread['25%'] # create an outliers column which is either 3 IQRs below the first quartile or # 3 IQRs above the third quartile spread['outliers'] = (spread['min']<(spread['25%']-(3*IQR)))|(spread['max'] > (spread['75%']+3*IQR)) # just display the features containing extreme outliers spread.ix[spread.outliers,] Assessing the potential impact of outliers is a difficult part of any analytics project. On the one hand, you need to be concerned about the potential for introducing spurious data artifacts that will significantly impact or bias your models. On the other hand, outliers can be extremely meaningful in predicting outcomes that arise under special circumstances. 
We will discuss each of these outliers containing features and see if we can come to a reasonable conclusion as to how to treat them. The first set of features all appear to be related to max humidity. Looking at the data I can tell that the outlier for this feature category is due to the apparently very low min value. This indeed looks to be a pretty low value and I think I would like to take a closer look at it, preferably in a graphical way. To do this I will use a histogram. %matplotlib inline plt.rcParams['figure.figsize'] = [14, 8] df.maxhumidity_1.hist() plt.title('Distribution of maxhumidity_1') plt.xlabel('maxhumidity_1') plt.show() Looking at the histogram of the values for maxhumidity the data exhibits quite a bit of negative skew. I will want to keep this in mind when selecting prediction models and evaluating the strength of impact of max humidities. Many of the underlying statistical methods assume that the data is normally distributed. For now I think I will leave them alone but it will be good to keep this in mind and have a certain amount of skepticism of it. Next I will look at the minimum pressure feature distribution. df.minpressurem_1.hist() plt.title('Distribution of minpressurem_1') plt.xlabel('minpressurem_1') plt.show() This plot exhibits another interesting feature. From this plot, the data is multimodal, which leads me to believe that there are two very different sets of environmental circumstances apparent in this data. I am hesitant to remove these values since I know that the temperature swings in this area of the country can be quite extreme especially between seasons of the year. I am worried that removing these low values might have some explanatory usefulness but, once again I will be skeptical about it at the same time. The final category of features containing outliers, precipitation, are quite a bit easier to understand. Since the dry days (ie, no precipitation) are much more frequent, it is sensible to see outliers here. To me this is no reason to remove these features. The last data quality issue to address is that of missing values. Due to the way in which I have built out the DataFrame, the missing values are represented by NaNs. You will probably remember that I have intentionally introduced missing values for the first three days of the data collected by deriving features representing the prior three days of measurements. It is not until the third day in that we can start deriving those features, so clearly I will want to exclude those first three days from the data set. Look again at the output from the last time I issued the info method. There is a column of output that listed the non-null values for each feature column. Looking at this information you can see that for the most part the features contain relatively few missing (null / NaN) values, mostly just the ones I introduced. However, the precipitation columns appear to be missing a significant part of their data. Missing data poses a problem because most machine learning methods require complete data sets devoid of any missing data. Aside from the issue that many of the machine learning methods require complete data, if I were to remove all the rows just because the precipitation feature contains missing data then I would be throwing out many other useful feature measurements. 
As I see it I have a couple of options to deal with this issue of missing data: I can either drop the affected rows outright or fill the missing values with a sensible default. Since I would rather preserve as much of the data as I can, where there is minimal risk of introducing erroneous values, I am going to fill the missing precipitation values with the most common value of zero. I feel this is a reasonable decision because the great majority of values in the precipitation measurements are zero. # iterate over the precip columns for precip_col in ['precipm_1', 'precipm_2', 'precipm_3']: # create a boolean array of values representing nans missing_vals = pd.isnull(df[precip_col]) df[precip_col][missing_vals] = 0 Now that I have filled all the missing values that I can, while being cautious not to negatively impact the quality, I would be comfortable simply removing the remaining records containing missing values from the data set. It is quite easy to drop rows from the DataFrame containing NaNs. All I have to do is call the method dropna() and Pandas will do all the work for me. df = df.dropna() In this article I have described the process of collecting, cleaning, and processing a reasonably good-sized data set to be used for upcoming articles on a machine learning project in which we predict future weather temperatures. While this is probably going to be the driest of the articles detailing this machine learning project, I have tried to emphasize the importance of collecting quality data suitable for a valuable machine learning experiment. Thanks for reading and I hope you look forward to the upcoming articles on this project.
https://www.codevelop.art/using-machine-learning-to-predict-the-weather-part-1.html
CC-MAIN-2022-40
en
refinedweb
Encapsulation is nothing new to what we have read. It is the technique of combining data and the functions that operate on that data into a single unit, the class. Why Encapsulation Encapsulation is necessary to keep the details about an object hidden from the users of that object. Details of an object are stored in its data members (member variables). This is the reason we make all the member variables of a class private and most of the member functions public. Member variables are made private so that they cannot be directly accessed from outside the class (to hide the details of any object of that class, like how the data about the object is implemented), and most member functions are made public to allow the users to access the data members through those functions. For example, we operate a washing machine through its power button. We switch on the power button, the machine starts, and when we switch it off, the machine stops. We don't know what mechanism is going on inside it. That is encapsulation. In C++, this hiding is controlled through the access specifiers (public, private, protected). For example, if a data member is declared private and we wish to make it directly accessible from anywhere outside the class, we just need to replace the specifier private by public. Let's see an example of Encapsulation. #include <iostream> using namespace std; class Rectangle { int length; int breadth; public: void setDimension(int l, int b) { length = l; breadth = b; } int getArea() { return length * breadth; } }; int main() { Rectangle rt; rt.setDimension(7, 4); cout << rt.getArea() << endl; return 0; } The member variables length and breadth are encapsulated inside the class Rectangle. Since we declared them private, these variables cannot be accessed directly from outside the class. Therefore, we use the functions 'setDimension' and 'getArea' to access them.
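One practical payoff of this arrangement, shown here as a small illustrative extension of the Rectangle class (not part of the original example), is that the setter can guard the hidden data against invalid values — something that could not be enforced if length and breadth were public:

#include <iostream>
using namespace std;

class Rectangle
{
    int length;
    int breadth;

    public:
    void setDimension(int l, int b)
    {
        // reject invalid dimensions instead of storing them blindly
        if (l > 0 && b > 0)
        {
            length = l;
            breadth = b;
        }
        else
        {
            length = 0;
            breadth = 0;
        }
    }

    int getArea()
    {
        return length * breadth;
    }
};

int main()
{
    Rectangle rt;
    rt.setDimension(-7, 4);       // invalid input is caught by the setter
    cout << rt.getArea() << endl; // prints 0
    // rt.length = -7;            // would not compile: 'length' is private
    return 0;
}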
https://www.codesdope.com/cpp-encapsulation/
CC-MAIN-2022-40
en
refinedweb
Django, Background Processes, and Keeping Users in the Loop When you have out-of-band processing in a web app, how do you let users know that the status of a task has changed? Depending on your front-end client, there are a few different approaches you might take. In my last post, I talked about how a modern web app needs background worker processes. One way or another, you’ll have some things you need to do that are slower than you can do in a request/response cycle, and so you’ll want to handle them out of band. Have the API return a simple 202 ACCEPTED and move on with your life, right? Well, sometimes you want to tell users about the state of those background processes. You might want to say “it’s done!”, or “it’s failed!”, or even just to acknowledge that it’s taking a long time, but still going. And just saying “they can refresh the page” isn’t always enough. (Though, sometimes it is!) I’m going to talk about different ways you can do this. First I’ll talk about how I would do it using Celery. But django-channels provides some cool new options for handling background processes, so I’ll cover that too. CeleryCopy permalink to “Celery” Imagine we have a long-running background process, something that can take up to a minute under normal circumstances, maybe more under exceptional load. We make a Celery task to handle it, and now we want to let the user know what state it’s in. The most simple (or simplistic) approach can be to use the database to store state. You can do this with Celery’s database result backend, or a custom task state model that you periodically update. Imagine something like this: @app.task def my_task(some_arg): # some unique identifier, that you can recover outside the task: task_id = get_task_id_based_on_arg(some_arg) state, _ = TaskState.objects.get_or_create(task_id=task_id) total = len(some_arg) for i, elem in enumerate(some_arg): process_elem(elem) # Every 100 elements, update the percentage processed: if i % 100 == 0: state.percent_done = (i / float(total)) * 100 state.save() Then in your views, you can retrieve the appropriate TaskState and show how much has been processed. Sometimes that’s a good approach, but usually I think that’s pretty clunky. It can thrash the database, leave records lying around if things die halfway through, and still doesn’t give you a smooth experience; your user has to refresh to see updates. As an aside, it might be tempting to do something like this with the Django messages framework. However, adding and retrieving messages requires the request object. Even the pickle serializer can fail to serialize the request object. I would strongly recommend saving yourself the time and trouble, and using anything but messages for this. So what if you want real-time updates? What if that page refresh is bumming you out? A nice option is something like Pusher. They provide a service that you can push to from inside your app (in the request/response cycle, or in a background task) using a nice Python library, and some JS to get your users talking to their realtime websocket-y servers, to get those updates. Their JS library even includes sensible fallbacks for when websockets aren’t available. The one caution is that their prices take a curve that can be a bit steep for some situations; if developer time is cheaper than ongoing service costs, then it might be worth rolling your own websocket solution. Which brings us to our next section. 
Django-ChannelsCopy permalink to “Django-Channels” If you are using the newer Django Channels package for background tasks, this has the added benefit of making it possible for you to make and manage your own websockets connections. For fuller explanation of Channels itself, see the channels docs or Jacob Kaplan-Moss’s excellent blog post on the subject. I’ll give a brief overview here, though. I find it helps to think of Channels as a generalization of Django’s view system. Instead of a urls.py with a urlpatterns attribute, you have a routing.py with a channel_routing attribute. Instead of mapping paths to views, it maps channel types to consumers. Channel types can include well-known ones like websocket events, or ad-hoc ones like custom background tasks. (All your views and URLs can still be there in your project, untouched, too. This isn’t instead of all that, it’s in addition.) Because Channels operate outside the usual request/response cycle, sending a reply on a channel is a little harder. It can’t operate simply through a function’s return. Instead, you have to .send on Channels, or more flexibly, Groups. (A Group just allows you to send to multiple consumers at once, if necessary.) So, for our purposes, your channel_routing should have, at a minimum, these values: channel_routing = [ route("websocket.connect", websocket_connect), route("websocket.receive", websocket_receive), route("websocket.disconnect", websocket_disconnect), route("my-background-task", my_background_task), ] The first three are consumers for handling basic websocket operations. The last one is whatever long-running task you want to run in the background. You can then call the background task in a view: Channel('my-background-task').send(some_arguments) Be sure that there’s some stable way to identify the Group that you need to send to. It might be as simple as passing in the username of the logged-in user who kicked off the task, or it might be based on a process UUID that’s in the view’s path, or something else. Whatever it is, when the user’s browser makes a websocket connection on page load, you’ll want to add that reply channel to the Group: def websocket_connect(message): # Accept connection message.reply_channel.send({"accept": True}) Group(get_group_id_from(message)).add(message.reply_channel) On the front-end, you should have something like this: socket = new WebSocket("ws://" + window.location.host); socket.onmessage = show_some_toast_for(message); // Call onopen directly if socket is already open if (socket.readyState == WebSocket.OPEN) socket.onopen(); And now you can push messages to users yourself: def my_background_task(message): # ... Group(get_group_id_from(message)).send({ "text": some_status_update, }) # ... And the front-end JavaScript will receive it over the websocket. Display it in a toast or other style of your choosing, and you’re good to go! Have you tried out Channels yet? Do you have better ideas for what to do with websockets? Let us know via Twitter or through our handy contact form. (Header image from Tekniska Museet.)
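One consumer the routing table above mentions but the post never shows is websocket_disconnect. A minimal sketch consistent with the channels 1.x API used here (and reusing the post's get_group_id_from placeholder) would simply remove the reply channel from the Group so closed connections stop receiving sends — treat this as an illustration, not code from the original post:

def websocket_disconnect(message):
    # stop pushing task updates to a socket that has gone away
    Group(get_group_id_from(message)).discard(message.reply_channel)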
https://www.oddbird.net/2017/04/17/async-notifications/
CC-MAIN-2022-40
en
refinedweb
Hey all, I'm trying to get an intro cutscene to play, something simple that just flies towards the starting position of the player and then gives control to the player. The intro just plays over and over again. I'm sure it's something simple but I can't seem to figure it out for the life of me. I have this script attached to my cutscene camera: public class CameraFunction : MonoBehaviour { public GameObject player; public void TriggerPlayer() { player.SetActive (true); this.gameObject.SetActive (false); } } and I dragged the player into the player field of the CameraFunction script. The cutscene will just play over and over again but won't give control to the player. Any help would be appreciated, thank you. And where do you call the TriggerPlayer() method? That was the script given in the video provided by my professor (this is for a class). But the video was very unhelpful. Right now I've written this since the provided script didn't follow the rest of the tutorials given: public class CameraFunction : MonoBehaviour { public GameObject player; public GameObject playercam; // Use this for initialization void Start () { player.SetActive (true); playercam.SetActive (true); this.gameObject.SetActive (false); } } So the player and playercam do get triggered on after I hit play, but the cutscene does not play. I think it has something to do with the void Start(), so I'm trying to research alternatives to void Start() that would let the cutscene play before the SetActive calls run. This is my first class doing this in Unity so I'm still learning the basics. Answer by iStronk · Sep 19, 2015 at 03:02 PM I figured it out! By replacing void Start() with void CameraPoint1() and then adding that event to the end of the animation it works just fine. I'm sure there are more efficient methods too.
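For readers landing on this later, here is one way the accepted fix could be consolidated into a single script. This is purely an illustrative sketch — the method and field names follow the thread above, everything else is assumption — where the method is wired up as an Animation Event on the last frame of the cutscene clip:

using UnityEngine;

public class CameraFunction : MonoBehaviour
{
    public GameObject player;      // assign the player object in the Inspector
    public GameObject playercam;   // assign the player's camera in the Inspector

    // Add this method as an Animation Event on the final frame of the cutscene clip
    public void CameraPoint1()
    {
        player.SetActive(true);
        playercam.SetActive(true);
        gameObject.SetActive(false);   // disable the cutscene camera
    }
}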
https://answers.unity.com/questions/1068985/how-to-get-an-intro-cinematic-to-play.html?sort=oldest
CC-MAIN-2022-40
en
refinedweb
The Raspberry Pi has some great add-on hardware, such as Pi Tops that fit directly on top of the Pi module and wired components. A good number of the wired Arduino designed parts now can also be used with Rasp PI’s. Some examples of this includes the HT16K33 and TM1637 seven segment displays. Nothing beats using real hardware to show Pi values and status, but if you’re missing the hardware or you’d like to duplicate a displayed value remotely, then a soft version of the hardware can be very useful. In this blog we’ll look at a three Python soft display examples, a seven-segment display, a LCD Keypad Top and a gauge. Seven Segment Display The tk_tools module is based on the Python tkinter module and it is has some cool components such as LEDs, Charts, Gauges and Seven Segment displays. The module is installed by: pip install tk_tools The tk_tools Seven Segment component can function like an Arduino TM1637 or HT16K33 display component. The tk_tools seven-segment display supports a height, digit_color and a background color. Below is a some example code that shows the Pi’s CPU temperature in the soft seven segment display. import tkinter as tk import tk_tools root = tk.Tk() root.title("CPU Temp") ss = tk_tools.SevenSegmentDigits(root, digits=5, background='black', digit_color='yellow', height=100) ss.grid(row=0, column=1, sticky='news') # Update the Pi CPU Temperature every 1 second def update_gauge(): # Get the Raspberry CPU Temp tFile = open('/sys/class/thermal/thermal_zone0/temp') # Scale the temp from milliC to C thetemp = int(float(tFile.read())/1000) ss.set_value(str(thetemp)) root.after(1000, update_gauge) root.after(500, update_gauge) root.mainloop() LCD Keypad The LCD Keypad I’ve used on a lot of my Pi Projects, (below is a PI FM radio example). Its supports 2 lines of text and it has 5 (or 6) buttons that can be used in your Python app. The standard Python Tkinter library can be used to create a custom LCD keypad display. For my example I tried to replicate the look-and-feel of the Pi Top that I had, but you could enhance or change it to meet your requirements. Below is an example that writes the button pushed to the 2 line label. 
import tkinter as tk def myfunc(action): print ("Requested action: ",action) Line1.config(text = "Requested action: \n" + action) root = tk.Tk() root.title("LCD Keypad Shield") root.configure(background='black') Line1 = tk.Label(root, text="ADC key testing \nRight Key OK ", fg = "white", bg = "blue", font = "Courier 45", borderwidth=4, relief="raised") Line1.grid(row = 0,column = 0, columnspan =15, rowspan = 2) selectB = tk.Button(root, width=10,text= "SELECT",bg='silver' , command = lambda: myfunc("SELECT"),relief="raised") selectB.grid(row = 3,column = 0) leftB = tk.Button(root, width=10,text= "LEFT", bg='silver' , command = lambda: myfunc("LEFT"),relief="raised") leftB.grid(row = 3,column = 1) rootB = tk.Button(root, width=10,text= "UP", bg='silver' , command = lambda: myfunc("UP"),relief="raised") rootB.grid(row = 2,column = 2) rightB = tk.Button(root, width=10,text= "DOWN", bg='silver' , command = lambda: myfunc("DOWN"),relief="raised") rightB.grid(row = 3,column = 3) bottomB = tk.Button(root, width=10,text= "RIGHT", bg='silver', command = lambda: myfunc("RIGHT"),relief="raised") bottomB.grid(row = 4,column = 2) rstB = tk.Button(root, width=10,text= "RST", bg='silver' , command = lambda: myfunc("RESET"),relief="raised") rstB.grid(row = 3,column = 4) root.mainloop() Gauge and Rotary Scale There aren’t any mainstream low cost gauges that are available for the Rasp Pi, but I wanted to show how to setup a soft gauge. The tk_tools gauge component is very similar to a speedometer. The rotary scale is more like a 180° circular meter. Both components support digital values, units and color scales. Below is a gauge example that reads the Pi CPU temperature every second. import tkinter as tk import tk_tools root = tk.Tk() root.title("CPU Temp") my_gauge = tk_tools.Gauge(root, height = 200, width = 400, max_value=70, label='CPU Temp', unit='°C', bg='grey') my_gauge.grid(row=0, column=0, sticky='news') def update_gauge(): # Get the Raspberry CPU Temp tFile = open('/sys/class/thermal/thermal_zone0/temp') # Scale the temp from milliC to C thetemp = int(float(tFile.read())/1000) my_gauge.set_value(thetemp) # update the gauges according to their value root.after(1000, update_gauge) root.after(500, update_gauge) root.mainloop() Final Thoughts There are a lot of soft hardware components that could be created. I found myself getting tripped up thinking : “What would be a good tkinter component and what should be a Web component”. This is especially true when looking at charting examples, or when I was looking a remote connections. 2 thoughts on “Simulate Raspberry Pi Hardware” I used the 7 segment display to create a scoreboard. Nice
https://funprojects.blog/2019/01/02/simulated-raspberry-pi-hardware/
CC-MAIN-2022-40
en
refinedweb
Liferay Screens typically receives entities from a Liferay instance as [String:AnyObject], where String is the entity’s attribute and AnyObject is the attribute’s value. Although you can use these dictionary objects throughout your Screenlet, it’s often easier to create a model class that converts each into an object that represents the corresponding Liferay entity. This is especially convenient for complex entities composed of many attribute-value pairs. Note that Liferay Screens already provides several model classes for you. to see them. At this point, you might be saying, “Ugh! I have complex entities and Screens doesn’t provide a model class for them! I’m just going to give up and watch football.” Fret not! Although we’d never come between you and football, creating and using your own model class is straightforward. Using the advanced version of the sample Add Bookmark Screenlet as an example, this tutorial shows you how to create and use a model class in your Screenlet. First, you’ll create your model class. Creating Your Model Class Your model class must contain all the code necessary to transform each [String:AnyObject] that comes back from the server into a model object that represents the corresponding Liferay entity. This includes a constant for holding each [String:AnyObject], and initializer that sets this constant, and a public property for each attribute value. For example, the sample Add Bookmark Screenlet adds a bookmark to a Liferay instance’s Bookmarks portlet. Since the Mobile SDK service method that adds the bookmark also returns it as [String:AnyObject], the Screenlet can convert it into an object that represents bookmarks. It does so with its Bookmark model class. This class extends NSObject and sets the [String:AnyObject] to the attributes constant via the initializer. This class also defines computed properties that return the attribute values for each bookmark’s name and URL: @objc public class Bookmark : NSObject { public let attributes: [String:AnyObject] public var name: String { return attributes["name"] as! String } override public var description: String { return attributes["description"] as! String } public var url: String { return attributes["url"] as! String } public init(attributes: [String:AnyObject]) { self.attributes = attributes } } Next, you’ll put your model class to work. Using Your Model Class Now that your model class exists, you can use model objects anywhere your Screenlet handles results. Exactly where depends on what Screenlet components your Screenlet uses. For example, Add Bookmark Screenlet’s Connector, Interactor, delegate, and Screenlet class all handle the Screenlet’s results. The steps here therefore show you how to use model objects in each of these components. Note, however, that your Screenlet may lack a Connector or delegate: these components are optional. Variations on these steps are therefore noted where applicable. Create model objects where the [String: AnyObject]results originate. For example, the [String: AnyObject]results in Add Bookmark Screenlet originate in the Connector. Therefore, this is where the Screenlet creates Bookmarkobjects. The following code in the Screenlet’s Connector ( AddBookmarkLiferayConnector) does this. The ifstatement following the service call casts the results to [String: AnyObject], calls the Bookmarkinitializer with those results, and stores the resulting Bookmarkobject to the public resultBookmarkInfovariable. 
Note that this is only the code that makes the service call and creates the Bookmarkobject. Click here to see the complete AddBookmarkLiferayConnectorclass: ... // Public property for the results public var resultBookmarkInfo: Bookmark? ...) // Creates Bookmark objects from the service call's results if let result = result as? [String: AnyObject] { resultBookmarkInfo = Bookmark(attributes: result) lastError = nil } ... } ... } If your Screenlet doesn’t have Connector, then your Interactor’s startmethod makes your server call and handles its results. Otherwise, the process for creating a Bookmarkobject from [String: AnyObject]is the same. Handle your model objects in your Screenlet’s Interactor. The Interactor processes your Screenlet’s results, so it must also handle your model objects. If your Screenlet doesn’t use a Connector, then you already did this in your Interactor’s startmethod as mentioned at the end of the previous step. If your Screenlet uses a Connector, however, then this happens in your Interactor’s completedConnectormethod. For example, the completedConnectormethod in Add Bookmark Screenlet’s Interactor ( AddBookmarkInteractor) retrieves the Bookmarkvia the Connector’s resultBookmarkInfovariable. This method then assigns the Bookmarkto the Interactor’s public resultBookmarkvariable. Note that this is only the code that handles Bookmarkobjects. Click here to see the complete AddBookmarkInteractorclass: ... // Public property for the results public var resultBookmark: Bookmark? ... // The completedConnector method gets the results from the Connector override public func completedConnector(c: ServerConnector) { if let addCon = (c as? AddBookmarkLiferayConnector), bookmark = addCon.resultBookmarkInfo { self.resultBookmark = bookmark } } If your Screenlet uses a delegate, your delegate protocol must account for your model objects. Skip this step if you don’t have a delegate. For example, Add Bookmark Screenlet’s delegate ( AddBookmarkScreenletDelegate) must communicate Bookmarkobjects. The delegate’s first method does this via its second argument: @objc public protocol AddBookmarkScreenletDelegate: BaseScreenletDelegate { optional func screenlet(screenlet: AddBookmarkScreenlet, onBookmarkAdded bookmark: Bookmark) optional func screenlet(screenlet: AddBookmarkScreenlet, onAddBookmarkError error: NSError) } Get the model object from the Interactor in your Screenlet class’s interactor.onSuccessclosure. You can then use the model object however you wish. For example, the interactor.onSuccessclosure in Add Bookmark Screenlet’s Screenlet class ( AddBookmarkScreenlet) retrieves the Bookmarkfrom the Interactor’s resultBookmarkproperty. It then handles the Bookmarkvia the delegate. Note that in this example, the closure is in the Screenlet class’s Interactor method that adds a bookmark ( createAddBookmarkInteractor). Be sure to get your model object wherever the interactor.onSuccessclosure is in your Screenlet class. Click here to see the complete AddBookmarkScreenlet: ... private func createAddBookmarkInteractor() -> Interactor { let interactor = AddBookmarkInteractor(screenlet: self, folderId: folderId, title: viewModel.title!, url: viewModel.URL!) 
// Called when the Interactor finishes successfully interactor.onSuccess = { if let bookmark = interactor.resultBookmark { self.addBookmarkDelegate?.screenlet?(self, onBookmarkAdded: bookmark) } } // Called when the Interactor finishes with an error interactor.onFailure = { error in self.addBookmarkDelegate?.screenlet?(self, onAddBookmarkError: error) } return interactor } ... Awesome! Now you know how to create and use a model class in your Screenlet. Related Topics Creating iOS List Screenlets Architecture of Liferay Screens for iOS
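As a final illustration (not part of the sample project), a view controller that hosts the Screenlet might adopt the delegate protocol shown earlier roughly like this; the LiferayScreens import name and the use of the inherited delegate property are assumptions on my part:
import UIKit
import LiferayScreens

class BookmarksViewController: UIViewController, AddBookmarkScreenletDelegate {

    @IBOutlet weak var screenlet: AddBookmarkScreenlet?

    override func viewDidLoad() {
        super.viewDidLoad()
        // Assumes the standard BaseScreenlet delegate property
        screenlet?.delegate = self
    }

    func screenlet(screenlet: AddBookmarkScreenlet, onBookmarkAdded bookmark: Bookmark) {
        // The model object gives typed access to the entity's attributes
        print("Added bookmark \(bookmark.name) -> \(bookmark.url)")
    }

    func screenlet(screenlet: AddBookmarkScreenlet, onAddBookmarkError error: NSError) {
        print("Could not add bookmark: \(error.localizedDescription)")
    }
}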
https://help.liferay.com/hc/ja/articles/360017900472-Creating-and-Using-Your-Screenlet-s-Model-Class-
CC-MAIN-2022-40
en
refinedweb
pyfix is a test framework for Python. ##Introduction pyfix is a framework used to write automated software tests. It is similar to tools like unittest in purpose but unlike most of the unit testing frameworks being around today it is not based on the xUnit design. pyfix can be used for different types of tests (including unit tests, integration tests, system tests, functional tests and even acceptance tests) although the primary targets are more technical tests (such as unit or integration tests). How to install it?How to install it? pyfix is available via the Cheeseshop so you can use easy_install or pip: $ pip install pyfix How to use it?How to use it? pyfix focusses on writing test functions. Each test a function that lives in module. Here is some trival example (the use of pyassert is not mandatory although it follows the same idea of having easy to read tests). from pyfix import test, run_tests from pyassert import assert_that @test def ensure_that_two_plus_two_equals_four (): assert_that(2 + 2).equals(4) @test def ensure_that_two_plus_three_equals_five (): assert_that(2 + 3).equals(5) if __name__ == "__main__": run_tests() If you execute this file you should see the following output: pyfix version 0.1.3. Running 2 tests. -------------------------------------------------------------------------------- Ensure that two plus three equals five: passed [0 ms] Ensure that two plus two equals four: passed [0 ms] -------------------------------------------------------------------------------- TEST RESULTS SUMMARY 2 tests executed 0 tests failed ALL TESTS PASSED Fixtures: Injecting valuesFixtures: Injecting values One of the main strengths of pyfix is the ability to inject parameters to tests. See this example: from pyfix import test, run_tests, given from pyassert import assert_that class Accumulator(object): def __init__ (self): self.sum = 0 def add (self, number=1): self.sum += number @test @given(accumulator=Accumulator) def ensure_that_adding_two_yields_two (accumulator): accumulator.add(2) assert_that(accumulator.sum).equals(2) if __name__ == "__main__": run_tests() pyfix will instantiate an Accumulator for you and inject it using the accumulator parameter. Note that there is nothing special about the Accumulator; it's a plain Python class. If you want to do some complex initialization and/ or clean up stuff, pyfix provides the Fixture interface which defines hooks for these lifecycle phases. from pyfix import test, run_tests, given, Fixture from pyassert import assert_that class Accumulator(object): def __init__ (self): self.sum = 0 def add (self, number=1): self.sum += number class InitializedAccumulator (Fixture): def provide (self): result = Accumulator() result.add(2) return [result] @test @given(accumulator=InitializedAccumulator) def ensure_that_adding_two_to_two_yields_four (accumulator): accumulator.add(2) assert_that(accumulator.sum).equals(4) if __name__ == "__main__": run_tests() Parameterized Tests: Providing more than one ValueParameterized Tests: Providing more than one Value As you might have noticed in the last example, the provide method from the Fixture returned a list and not just a single value. Every fixture can return more than one value. pyfix will use all values provided by all fixtures and calculate all valid permutations of parameter values and then invoke a single test method for each permutation. Using this feature it is easy to write parameterized tests. 
The simplest variant of a parameterized test is a test accepting one parameter that we provide a set of values for. pyfix provides the enumerate utility function to let you write such a test in an easy way: from pyfix import test, run_tests, given, enumerate from pyassert import assert_that KNOWN_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19] def is_prime(number): return number in KNOWN_PRIMES @test @given(number=enumerate(2, 3, 5, 7, 11)) def is_prime_should_return_true_when_prime_is_given(number): assert_that(is_prime(number)).is_true() if __name__ == "__main__": run_tests() Please notice that this example is intended to demonstrate the test and not the implementation of is_prime which indeed is brain dead. If you run this module you should see an output like the following: pyfix version 0.1.3. Running 1 tests. -------------------------------------------------------------------------------- Is prime should return true when prime is given: number=2: passed [0 ms] number=3: passed [0 ms] number=5: passed [0 ms] number=7: passed [0 ms] number=11: passed [0 ms] -------------------------------------------------------------------------------- TEST RESULTS SUMMARY 5 tests executed in 0 ms 0 tests failed ALL TESTS PASSED Interceptors - Executing Code before and/ or after a test functionInterceptors - Executing Code before and/ or after a test function If you want to execute any code before or after a test function you can register an interceptor to do so: __author__ = "Alexander Metzner" from pyassert import assert_that from pyfix import test, before, after, run_tests def before_interceptor(): print "Starting test..." def after_interceptor(): print "Test stopped." @test @before(before_interceptor) @after(after_interceptor) def ensure_that_before_interceptor_is_executed(): assert_that(_BEFORE_EXECUTED).is_true() if __name__ == "__main__": run_tests() You can register as many before/ after interceptors as you want using multiple decorators or passing more than one value to a decorator. Release NotesRelease Notes Version 0.2.3 released 2012-10-01Version 0.2.3 released 2012-10-01 - Added temporary directory fixture Version 0.2.0 released 2012-09-26Version 0.2.0 released 2012-09-26 - Implemented beforeand afterdecorators - Test results contain a traceback in case of an exception Version 0.1.3 released 2012-09-18Version 0.1.3 released 2012-09-18 - Implemented enumerating fixtures like enumerate Version 0.1.2 released 2012-09-17Version 0.1.2 released 2012-09-17 - Renamed mainto run_tests Version 0.1.1 released 2012-09-14Version 0.1.1 released 2012-09-14 - Inital release LicenseLicense pyfix is published under the terms of the Apache License, Version 2. Additional LinksAdditional Links - pyfix on Cheesshop - pyassert - used for all unit tests in pyfix as well as in all examples - pybuilder - used to "build" pyfix
https://libraries.io/pypi/pyfix
CC-MAIN-2022-40
en
refinedweb
PyTeleBirr is mostly Telebirr With Python Examples | Documentation | Channel InstallationInstallation pip3 install -U pytelebirr or pip3 install -U git+ UsageUsage from pytelebirr import PyTeleBirr phone_no = "<YOUR_PHONE_NUMBER_STARTS_FROM_9>" # Example 91234567 passwd = "<YOUR_PASSWORD>" # To get Device id use # You have to use this function one time. from pytelebirr.utils import get_device_id device_id = get_device_id( phone=phone_no, passwd=passwd, d_id="<Your Mobile Device ID>" # to get this for android users use device id app for iphone users ¯\_(ツ)_/¯ ) # after calling this function verification code will be sent via sms check 127 # Code has been sent via sms # Enter The Code You Received : enter your code here # and you will get this message on terminal/cmd Your Device_id : ... # Initialize PyTelebirr telebirr = PyTeleBirr( device_id=device_id, phone_no=phone_no, passwd=passwd, ) # get your balance balance = telebirr.get_balance() # this returns dict balance['balance'] # 999999.00 # generate beautiful qr code # now you can custom your qr code size and background color and payment amount img_path = telebirr.generate_qrcode( amount=5, # 5 in birr size=200, # optional bg_color="ffffff" # color don't use # ) # this return image path # refresh token tokens will expire in 86400s after login telebirr.refresh_token() # this will refresh token # on payment received method you can pass callable telebirr.on_payment( on_payment=lambda m: print(m) ) # when payment received on_payment function will be called # to check if transaction exists # returns bool or dict telebirr.check_tx( "ABCDE" ) # if tx id exists will return dict elase false # this method can check all telebirr transaction so be careful # check if the tx id payment was sent to me telebirr.is_my_tx( "ABCDE" ) # returns bool if tx id was sent to me returns True else False # scan qr code # scan the receiver qr code and pass the content telebirr.scan_qr( "1234567890" ) # returns dict data of user including phone number ;) # send payment to user via qr code telebirr.send_payment( amount=5, phone="1234567890", content="123456789" # content of qr code ) # returns dict # get your token telebirr.get_token() # returns str your token FeaturesFeatures - Python solution. - Send payment via qr code and phone number - Checking balance - Generating beautiful qr code - you can custom your qr code - Checking transactions - Waiting for payment and call function
https://libraries.io/pypi/pytelebirr
CC-MAIN-2022-40
en
refinedweb
I am building a class library and using "System" as its default namespace. Suppose I am creating a generic data structure, say PriorityQueue, and putting it under the System.Collections.Generic namespace. Now when I reference that library from another project, I can't see PriorityQueue under the "System.Collections.Generic" namespace anymore. Even though the library is referenced, I cannot access any of the classes in it. Can anyone shed some light on this, please? I know that if I change the namespace everything will be OK, but I want to create a seamless integration with other projects, like the .NET Framework itself, so that one can reference the library and forget about its namespaces.
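For reference, a minimal sketch of the setup being described might look like the following (the PriorityQueue body is purely illustrative). Namespaces span assemblies, so a public type placed in System.Collections.Generic inside a class library should be visible to a referencing project; things worth checking are that the type and its members are declared public and that both projects target compatible framework versions:

// ClassLibrary1 - PriorityQueue.cs
namespace System.Collections.Generic
{
    // Must be public; internal types are invisible outside the assembly
    // even when the assembly is referenced.
    public class PriorityQueue<T>
    {
        private readonly List<T> _items = new List<T>();

        public void Enqueue(T item)
        {
            _items.Add(item);
        }

        public int Count
        {
            get { return _items.Count; }
        }
    }
}

// Consuming project - after adding a reference to ClassLibrary1
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        var queue = new PriorityQueue<int>(); // resolves to the type from ClassLibrary1
        queue.Enqueue(42);
    }
}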
http://ansaurus.com/question/3032884-class-library-reference-problem
CC-MAIN-2019-30
en
refinedweb
Hi there fellas. This is an answer posted on stackoverflow by e-satis. The original link to the answer is given at the end. No credit goes to me. All of the credit goes to the original author. This answer is posted just because most of us are unaware of how decorators work in Python and this answer solves that problem beautifully.

The answer works through these sections in order (a short stand-alone example of the core pattern follows below):
- Python's functions are objects
- Function references
- OK, still here? Now the fun part: you've seen that functions are objects and therefore they can be assigned to a variable and can be defined in another function. Well, that means that a function can return another function 🙂
- Handcrafted decorators!
- Decorators demystified: eventually answering the question
- Passing arguments to the decorated function
- Decorating methods
- Passing arguments to the decorator (the decorator "labeled by the variable my_decorator") 🙂
- Let's practice: a decorator to decorate a decorator

Best practices while using decorators:
- They are new as of Python 2.4, so be sure that's what your code is running on.
- Decorators slow down the function call. Keep that in mind.
- You cannot un-decorate a function. There are hacks to create decorators that can be removed, but nobody uses them. So once a function is decorated, it's done. For all the code.
- Decorators wrap functions, which can make them hard to debug. Python 2.5 solves this last issue by providing the functools module, including functools.wraps, which copies the name, module and docstring of any wrapped function to its wrapper. Fun fact: functools.wraps is a decorator 🙂

How can decorators be useful? The answer closes with a practical example that wraps a get_random_futurama_quote() function.

Source: Stackoverflow
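Since the code samples from the original answer are not reproduced above, here is a minimal stand-alone illustration of the core pattern it describes, a handcrafted decorator plus functools.wraps (my own sketch, not e-satis's code):

from functools import wraps

def my_decorator(func):
    # wraps copies func's name, module and docstring onto the wrapper,
    # which is the debugging point made in the best-practices list above
    @wraps(func)
    def wrapper(*args, **kwargs):
        print("Before the call")
        result = func(*args, **kwargs)
        print("After the call")
        return result
    return wrapper

@my_decorator
def say_hello(name):
    """Greet someone."""
    print("Hello, %s!" % name)

say_hello("Bridget")       # prints the before/after lines around the greeting
print(say_hello.__name__)  # "say_hello" thanks to functools.wraps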
The trick was this one: –start– import plac def fun(name, surname): “””have a fun with name and surname decorated””” print name, surname main = plac.annotations( name=”first name”, surname=”family name” ) (fun) if __name__ == “__main__”: plac.call(main) — end — This way I can preserve my “fun” function in bare shape, which shall run a bit faster, when called from celery task. And at the same time I can produce command line version using plac decorator. Thanks for your post, it helped me a lto to get through passing decorator parameters. Awesome! The logging decorator is exactly what I’ve been looking for! Awesome article. Very very much helpful. And some good explaining you did right there.
https://pythontips.com/2013/10/10/all-about-decorators-in-python/
CC-MAIN-2019-30
en
refinedweb
This is the last in a series of posts about how my company built a replacement workflow platform for EPiServer. Why we chose to do this is explained here: I've covered why we had to build our own workflow. I've gone into how to disable the base Workflow tab and replace it with your own GuiPlugin named Workflow. I've explained from the top down how this all works from a user/admin perspective, I've shown what database fields need to be updated to reject pages, and I've given you a workaround to disable that pesky Save and Publish button. The only other critical component to this entire process is the ability to notify approvers whenever a page is marked "Ready to Publish". To do this, we create a new class and hook into the DataFactory.Instance.CheckingInPage event like so:

First, I create a new class: I named mine CustomPageActionControl.cs. I then designate it as a [PagePlugIn] and set up the Initialize method... this is where we hook CheckingInPage, in this instance by pointing it at an additional method called NotifyApprovers.

[PagePlugIn]
public class CustomPageActionControl
{
    public static void Initialize(int bitflags)
    {
        DataFactory.Instance.CheckingInPage += NotifyApprovers;
    }
}

(As a side note, this is really handy for doing all sorts of other things... most of our sites also hook DataFactory.Instance.SavedPage, PublishedPage, DeletingPage, MovingPage, etc. There are all sorts of things you can hook into; a sketch of wiring up several of these events follows at the end of this post.)

Anyway, our NotifyApprovers method performs thusly. The below should be cleaned up and made more DRY (i.e. take advantage of LINQ), but you get the idea:

private static void NotifyApprovers(object sender, PageEventArgs e)
{
    // PageVersionUrl, ApproverUserNames and EmailList are assumed to be built
    // earlier in the method (that part of the listing is omitted here);
    // EmailBody is the HTML table mailed to the approvers.
    String EmailBody = "<table>" +
        "<tr><td><b>Title:</b></td><td>" + e.Page.PageName + "</td></tr>" +
        "<tr><td><b>Submitter:</b></td><td>" + UserProfileTools.GetFullName(e.Page.ChangedBy) + "</td></tr>" +
        "<tr><td><b>Version URL:</b></td><td>" + PageVersionUrl + "</td></tr>" +
        "<tr><td><b>Published URL:</b></td><td>" + ConfigurationManager.AppSettings["BaseUrl"] + e.Page.LinkURL + "</td></tr>" +
        "<tr><td><b>Necessary Action:</b></td><td>This page is ready for your approval. Please log into EPiServer and visit the <i>Ready to Publish URL</i> specified above. For your reference, the current published page version is also available by visiting the <i>Published Version URL</i>." + "</td></tr>" +
        "<tr><td><b>Content Approvers:</b></td><td>" + ApproverUserNames + "</td></tr>" +
        "</table>";

    if (EmailList.Count() > 0)
        EmailTools.SendTemplatedEmail("EPiServer Page Approval Request –", EmailBody, EmailList, "[email protected]", "EPiServer");
}

And that's it! When an editor hits "Ready to Publish", all of the approvers are automatically notified that they need to go in and check out the content. That sums up our custom workflow application. If you found it at all useful in any respect (or if you feel like pointing out my errors in logic and/or code) I'd love to hear from you in the comments below.
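As promised above, here is a minimal sketch of registering several of the other page events mentioned in the aside; the handler bodies are placeholders of my own, not code from this series:

[PagePlugIn]
public class OtherPageEventsPlugIn
{
    public static void Initialize(int bitflags)
    {
        // Same registration pattern as the CheckingInPage hook above
        DataFactory.Instance.SavedPage += OnSavedPage;
        DataFactory.Instance.PublishedPage += OnPublishedPage;
        DataFactory.Instance.DeletingPage += OnDeletingPage;
        DataFactory.Instance.MovingPage += OnMovingPage;
    }

    private static void OnSavedPage(object sender, PageEventArgs e) { /* e.g. audit logging */ }
    private static void OnPublishedPage(object sender, PageEventArgs e) { /* e.g. cache invalidation */ }
    private static void OnDeletingPage(object sender, PageEventArgs e) { /* e.g. archive the page first */ }
    private static void OnMovingPage(object sender, PageEventArgs e) { /* e.g. notify content owners */ }
}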
https://world.episerver.com/blogs/Hans/Dates/2012/5/EPiServer-Workflow-Replacement--Step-6---Hooking-the-Ready-to-Publish-Event/
CC-MAIN-2019-30
en
refinedweb
I'm successfully setting up ESP32s to connect to my internal WiFi network, and turning to the Pi I'm attempting to do the same before progressing to socket comms. After installing the network module and attempting:

import network
station = network.WLAN(network.STA_IF)

Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'WLAN'
>>>

Having listed the help on 'network', it confirms there is no such attribute on the object. I'm surprised, as this seems to be standard practice elsewhere. How do I set up a connection to WiFi under Python 3 or Python 2.7 if WLAN isn't included?
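For context, the network.WLAN API in those ESP32 examples comes from MicroPython; it is not part of the standard library in CPython on Raspberry Pi OS, where WiFi is normally handled by wpa_supplicant rather than from Python. A hedged sketch of the usual workaround, driving the system tools from Python and just checking connectivity (assumes stock Raspberry Pi OS with wpa_cli available, and editing the config requires root):

import socket
import subprocess

def connect_wifi(ssid, psk, interface="wlan0"):
    """Append a network block to wpa_supplicant.conf and ask the interface
    to pick it up. Needs root; assumes the stock Raspberry Pi OS setup."""
    block = '\nnetwork={{\n    ssid="{0}"\n    psk="{1}"\n}}\n'.format(ssid, psk)
    with open("/etc/wpa_supplicant/wpa_supplicant.conf", "a") as conf:
        conf.write(block)
    subprocess.check_call(["wpa_cli", "-i", interface, "reconfigure"])

def have_network(host="8.8.8.8", port=53, timeout=3):
    """Cheap connectivity check: try to open a TCP socket to a public DNS server."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except socket.error:
        return False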
https://www.raspberrypi.org/forums/viewtopic.php?f=32&t=241846&p=1475558
CC-MAIN-2019-30
en
refinedweb
After listening to the latest Magic Read-along episode "You should watch this" (which you should go listen to now) I got caught up thinking about Brian's idea of an Endomorphism version of Kleisli composition for use with Redux, it's actually a very similar model to what I'm using in my event framework for event listeners so I figured I'd try to formalize the pattern and recognize some of the concepts involved. They talk about the idea of a Redux-reducer, which is usually of type s -> Action -> s, it takes a state and an action and returns a new state. He then re-arranged the arguments to Action -> s -> s. He then recognized this as Action -> Endo s (an Endo-morphism is just any function from one type to itself: a -> a). He would take his list of reducers and partially apply them with the Action, yielding a list of type Endo s where s is the state object the reducer operates over. At this point we can use the Monoid instance Endo has defined, so we foldmap with Endo to combine the list of reducers into a sort of pipeline where each function feeds into the next; the Endo instance of Monoid is just function composition over functions which return the same type as their input. This cleans up the interface of the reducers a fair amount, but what about an alternate kind of Endo which uses Kleisli composition instead of normal function composition? Kleisli composition often referenced as (>=>); takes two functions which return monads and composes them together using the underlying bind/flatmap of the Monad. The type of Kleisli composition is: (>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c. If we could define a nice Endo-style monoid over this type then we could compose reducers like we did above, but also allow the functions to perform monadic effects (which is a bad idea in Redux, but there are other times this would be useful, imagine running a user through a pipeline of transformations which interact with a database or do some error handling). We can easily define this instance like so: import Control.Monad newtype KEndo m a = KEndo { getKEndo :: (a -> m a) } instance Monad m => Monoid (KEndo m a) where mempty = KEndo return (KEndo a) `mappend` (KEndo b) = KEndo (a >=> b) This is great, now if we have a list of functions of some type [User -> Writer Error User] or something we can use foldmap to combine them into a single function! It works like this: actions :: [User -> Writer Error User] actions = [...] pipeline :: User -> Writer Error User pipeline = getKEndo . foldMap KEndo $ actions The whole Kleisli Endo thing is a cool idea; but this thing has actually been done before! It's actually the same as the StateT state monad transformer from mtl; let's see how we can make the comparison. A generic Endo is of type s -> s, this is isomorphic to s -> ((), s), aka State s (). The trick is that the Kleisli Endo ( s -> m s or by isomorphism s -> m ((), s)) can actually be generalized over the () to s -> m (a, s) which incidentally matches runStateT :: StateT s m a -> s -> m (a, s) from mtl! So basically KEndo is isomorphic to StateT, but we'd still like a monoid instance for it, Gabriel shows a monoid over the IO monad in "Applied category theory and abstract algebra", the Monoid he shows actually generalizes to any monad as this instance: instance (Monad m, Monoid a) => Monoid (m a) where mempty = return mempty ma `mappend` mb = do a <- ma b <- mb return (a `mappend` b) So that means we can use this instance for StateT (which is a monad). 
Since () is a trivial monoid (where every mappend just returns ()) the simple case is State s () which was our KEndo of s -> ((), s) but now we have the Monoid instance, which behaves the same as the KEndo instance, so we don't need KEndo anymore. If we want to allow arbitrary effects we use the Transformer version: StateT s m () where m is a monad containing any additional effects we want. In addition to being able to add additional effects we also gain the ability to aggregate information as a monoid! If you decided you wanted your reducers to also aggregate some form of information, then they'd be: Monoid a => Action -> s -> (a, s), which is Action -> State s a, and if a is a monoid, then the monoid instance of State acts like Endo, but also aggregates the 'a's along the way! Lastly we recognize that in the case of the Redux Reducers, if we have a whole list of reducers: Action -> State s () then we can rephrase it as the ReaderT Monad: ReaderT Action (State s) (), which maintains all of the nice monoids we've set up so far, and becomes even more composable!?
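To make that last point concrete, here is a small sketch of my own (not from the post) with reducers phrased as State actions; the foldMap spelling relies on the Monoid instance defined above, and mapM_ is the instance-free alternative from the standard library:

import Control.Monad.State

data Action = Increment | Rename String

data AppState = AppState { counter :: Int, name :: String }

-- Each reducer is an Action -> State AppState ()
countReducer :: Action -> State AppState ()
countReducer Increment = modify (\s -> s { counter = counter s + 1 })
countReducer _         = return ()

nameReducer :: Action -> State AppState ()
nameReducer (Rename n) = modify (\s -> s { name = n })
nameReducer _          = return ()

reducers :: [Action -> State AppState ()]
reducers = [countReducer, nameReducer]

-- With the (orphan) Monoid (m a) instance from the post this is just:
--   dispatch a = foldMap ($ a) reducers
-- Without it, the standard-library spelling is:
dispatch :: Action -> State AppState ()
dispatch a = mapM_ ($ a) reducers

run :: AppState
run = execState (mapM_ dispatch [Increment, Rename "redux", Increment]) (AppState 0 "")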
https://chrispenner.ca/posts/kleisli-endo
CC-MAIN-2019-30
en
refinedweb
In this article, I want to take you through an example project that I built recently — a totally original type of visualization using the D3 library, which showcases how each of these components add up to make D3 a great library to learn. D3 stands for Data Driven Documents. It’s a JavaScript library that can be used to make all sorts of wonderful data visualizations and charts. If you’ve ever seen any of the fabulous interactive stories from the New York Times, you’ll already have seen D3 in action. You can also see some cool examples of great projects that have been built with D3 here. The learning curve is pretty steep for getting started with the library, since D3 has a few special quirks that you probably won’t have seen before. However, if you can get past the first phase of learning enough D3 to be dangerous, then you’ll soon be able to build some really cool stuff for yourself. There are three main factors that really make D3 stand out from any other libraries out there: Flexibility. D3 lets you take any kind of data, and directly associate it with shapes in the browser window. This data can be absolutely anything, allowing for a huge array of interesting use cases to create completely original visualizations. Elegance. It’s easy to add interactive elements with smooth transitions between updates. The library is written beautifully, and once you get the hang of the syntax, it’s easy to keep your code clean and tidy. Community. There’s a vast ecosystem of fantastic developers using D3 already, who readily share their code online. You can use sites like bl.ocks.org and blockbuilder.org to quickly find pre-written code by others, and copy these snippets directly into your own projects. The Project As an economics major in college, I had always been interested in income inequality. I took a few classes on the subject, and it struck me as something that wasn’t fully understood to the degree that it should be. I started exploring income inequality using Google’s Public Data Explorer … When you adjust for inflation, household income has stayed pretty much constant for the bottom 40% of society, although per-worker productivity has been skyrocketing. It’s only really been the top 20% that have reaped more of the benefits (and within that bracket, the difference is even more shocking if you look at the top 5%). Here was a message that I wanted to get across in a convincing way, which provided a perfect opportunity to use some D3.js, so I started sketching up a few ideas. Sketching Because we’re working with D3, I could more or less just start sketching out absolutely anything that I could think of. Making a simple line graph, bar chart, or bubble chart would have been easy enough, but I wanted to make something different. I find that the most common analogy that people tended to use as a counterargument to concerns about inequality is that “if the pie gets bigger, then there’s more to go around”. The intuition is that, if the total share of GDP manages to increase by a large extent, then even if some people are getting a thinner slice of pie, then they’ll still be better off. However, as we can see, it’s totally possible for the pie to get bigger and for people to be getting less of it overall. My first idea for visualizing this data looked something like this: The idea would be that we’d have this pulsating pie chart, with each slice representing a fifth of the US income distribution. 
The area of each pie slice would relate to how much income that segment of the population is taking in, and the total area of the chart would represent its total GDP. However, I soon came across a bit of a problem. It turns out that the human brain is exceptionally poor at distinguishing between the size of different areas. When I mapped this out more concretely, the message wasn’t anywhere near as obvious as it should have been: Here, it actually looks like the poorest Americans are getting richer over time, which confirms what seems to be intuitively true. I thought about this problem some more, and my solution involved keeping the angle of each arc constant, with the radius of each arc changing dynamically. Here’s how this ended up looking in practice: I want to point out that this image still tends to understate the effect here. The effect would have been more obvious if we used a simple bar chart: However, I was committed to making a unique visualization, and I wanted to hammer home this message that the pie can get bigger, whilst a share of it can get smaller. Now that I had my idea, it was time to build it with D3. Borrowing Code So, now that I know what I’m going to build, it’s time to get into the real meat of this project, and start writing some code. You might think that I’d start by writing my first few lines of code from scratch, but you’d be wrong. This is D3, and since we’re working with D3, we can always find some pre-written code from the community to start us off. We’re creating something completely new, but it has a lot in common with a regular pie chart, so I took a quick look on bl.ocks.org, and I decided to go with this classic implementation by Mike Bostock, one of the creators of D3. This file has probably been copied thousands of times already, and the guy who wrote it is a real wizard with JavaScript, so we can be sure that we’re starting with a nice block of code already. This file is written in D3 V3, which is now two versions out of date, since version 5 was finally released last month. A big change in D3 V4 was that the library switched to using a flat namespace, so that scale functions like d3.scale.ordinal() are written like d3.scaleOrdinal() instead. In version 5, the biggest change was that data loading functions are now structured as Promises, which makes it easier to handle multiple datasets at once. To avoid confusion, I’ve already gone through the trouble of creating an updated V5 version of this code, which I’ve saved on blockbuilder.org. I’ve also converted the syntax to fit with ES6 conventions, such as switching ES5 anonymous functions to arrow functions. Here’s what we’re starting off with already: I then copied these files into my working directory, and made sure that I could replicate everything on my own machine. If you want to follow along with this tutorial yourself, then you can clone this project from our GitHub repo. You can start with the code in the file starter.html. Please note that you will need a server (such as this one) to run this code, as under the hood it relies on the Fetch API to retrieve the data. Let me give you a quick rundown of how this code is working. The post Interactive Data Visualization with Modern JavaScript and D3 appeared first on SitePoint. Link:
https://jsobject.info/2018/06/12/interactive-data-visualization-with-modern-javascript-and-d3/
CC-MAIN-2019-30
en
refinedweb
Today we are pleased to announce that we’ve extended the range of Raygun to target Xamarin projects for Android and iOS developers. This is an extension of the existing Raygun4Net provider and is very simple to use. Once you integrate the Raygun4Net provider into your application and deploy it to the store, your Raygundashboard will collect lots of information to help you debug any exceptions that occur. This information includes the stack trace, OS version, phone model, device orientation and much more. How to use it Sign into your account at Raygun.com. If you don’t have an account yet, sign up now for a free 14 day trial. Create a new application, give it a name and change your notification settings if desired. After clicking the Create Application button, you’ll come to a page prompting you to select a framework. Select either Xamarin.iOS or Xamarin.Android to see the instructions of how to setup the Raygun4Net provider in your application. On this page, you’ll also see your application API key – you’ll be needing this soon. Following the instructions, you’ll see the next thing to do is install the Raygun4Net provider into your application. This can be done by searching for “Raygun4Net” in either NuGet or the Xamarin Component Store. For NuGet, right click the project in Visual Studio and select “Manage NuGet Packages”. For the Xamarin Component, double click the Components folder in Xamarin Studio, or right click the Components folder in Visual Studio and click “Get More Components”. Once installed, you simply need to use the static RaygunClient.Attach method to get the provider to send all unhandled exceptions to your Raygun dashboard. When calling this method, pass in your application API key mentioned above. For Android applications, the best place to call this is in the main/entry Activity of the application. For iOS apps, you can make this call in the Main entry point of the application. The RaygunClient is in the Mindscape.Raygun4Net namespace. RaygunClient.Attach("YOUR_APP_API_KEY"); And that’s all you need to do to get started. After setting this up, we recommend raising an exception in your application to check that things are wired up correctly. If all is well, the instruction page mentioned above will be replaced with the error dashboard displaying the exception. Since you are probably building both an Android and iOS application, repeat these steps to integrate Raygun4Net into your other project. Support for no internet When customers are using your application on the move, there are bound to be times when their device is not connected to the internet. If an exception occurs in your application when there isn’t an internet connection, rather than giving up, the Raygun4Net provider will temporarily store the exception message on the device and then report it to Raygun whenever it gets another chance. This way, you won’t miss out on any of those important error messages. If you haven’t got a Raygun account yet, sign up for the free trial now and get ready to have the most bug free mobile applications on the market. If you have any questions, feature requests or feedback, we’d love to hear from you. Happy error blasting.
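Beyond unhandled exceptions, the same client can also report exceptions you catch yourself. A minimal sketch (the surrounding methods are placeholders of mine, and it is worth double-checking method names against the current Raygun4Net documentation):

using System;
using Mindscape.Raygun4Net;

public void SaveUserPreferences()
{
    try
    {
        WritePreferencesToDisk();   // placeholder for your own code
    }
    catch (Exception ex)
    {
        // Report the handled exception manually, then recover gracefully
        new RaygunClient("YOUR_APP_API_KEY").Send(ex);
        ShowFriendlyErrorMessage(); // placeholder
    }
}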
https://raygun.com/blog/2013/08/new-raygun-provider-for-xamarin-developers/
CC-MAIN-2019-30
en
refinedweb
table of contents - testing 241-3 - stretch-backports 241-3~bpo9+1 - unstable 241-5 - experimental 242-1 NAME¶sd_bus_set_description, sd_bus_get_description, sd_bus_set_anonymous, sd_bus_set_trusted, sd_bus_set_allow_interactive_authorization, sd_bus_get_allow_interactive_authorization - Set or query properties of a bus object SYNOPSIS¶ #include <systemd/sd-bus.h> int sd_bus_set_description(sd_bus *bus, const char *description); int sd_bus_get_description(sd_bus *bus, const char **description); int sd_bus_set_anonymous(sd_bus *bus, int b); int sd_bus_set_trusted(sd_bus *bus, int b); int sd_bus_set_allow_interactive_authorization(sd_bus *bus, int b); int sd_bus_get_allow_interactive_authorization(sd_bus *bus); DESCRIPTION¶sd has been has been started. See the Authentication Mechanisms[1] section of the D-Bus specification for details. sd_bus_set_trusted() sets the "trusted" state on the bus object. If true, all connections on the bus are trusted and access to all privileged and unprivileged methods is granted. This function must be called before the bus has been started.. RETURN VALUE¶On success, these functions return 0 or a positive integer. On failure, they return a negative errno-style error code. ERRORS¶Returned errors may indicate the following problems: -EINVAL -ENOPKG -EPERM -ECHILD -ENOMEM NOTES¶These APIs are implemented as a shared library, which can be compiled and linked to with the libsystemd pkg-config(1) file. SEE ALSO¶systemd(1), sd-bus(3), sd_bus_default_user(3), sd_bus_default_system(3), sd_bus_open_user(3), sd_bus_open_system(3) NOTES¶ - 1. - Authentication Mechanisms - 2. - D-Bus - 3. - polkit
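As a rough usage sketch (this example is not part of the reference page), acquiring the user bus, enabling interactive authorization and reading back the description could look like this, with error handling trimmed:

#include <stdio.h>
#include <string.h>
#include <systemd/sd-bus.h>

int main(void) {
        sd_bus *bus = NULL;
        const char *desc = NULL;
        int r;

        r = sd_bus_default_user(&bus);          /* connect to the user bus */
        if (r < 0) {
                fprintf(stderr, "Failed to connect: %s\n", strerror(-r));
                return 1;
        }

        /* Mark method calls made through this connection as allowing
         * interactive (polkit) authorization. */
        sd_bus_set_allow_interactive_authorization(bus, 1);

        if (sd_bus_get_description(bus, &desc) >= 0 && desc)
                printf("bus description: %s\n", desc);

        sd_bus_unref(bus);
        return 0;
}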
https://manpages.debian.org/stretch-backports/libsystemd-dev/sd_bus_set_trusted.3.en.html
CC-MAIN-2019-30
en
refinedweb
Let’s take a moment and think about what this programming to an interface and not an implementation can do for us. For instance, imagine that you want to test the following scenarios. 1. The service is unavailable and thus throws a System.Net.WebException (I’m guessing here, I have no idea what exception is actually thrown). You want to make sure that your application handles this in a graceful manner. 2. Your design buddy wants to make sure that all the icons he created for the difference weather types (cloudy, sunny etc) looks ok so she asks you to make sure that you find dates that satisfy those types. This is a pretty common predicament, that you need your application to be in a certain state for you or your colleagues to verify something. Sure, this can be accomplished by changing the code to do whatever it needs to do at the moment but it’s not a very solid approach. Since our service now only cares about the concept of a weather repository we can write implementations that are tailored to satisfy a specific state. For scenario one we want the code to throw an WebException. And no, launching the site and then quickly pulling the network cable to trigger the timeout is not a professional approach =) 1: public class WeatherRepositoryThrowsWebException : IWeatherRepository 2: { 3: public XElement Load() 4: { 5: throw new System.Net.WebException(); 6: } 7: } Scenario two would probably be quite a few classes but they would all look something like this 1: public class WeatherRepositorySunny : IWeatherRepository 2: { 3: public XElement Load() 4: { 5: return new XElement("root", 6: new XElement("time", 7: new XElement("temperature", new XAttribute("value", 20)), 8: new XElement("symbol", new XAttribute("number", 2)) 9: ) 10: ); 11: } 12: } So setting up state turns into choosing which class to use and not about changing existing code. This is now handled in our factory but there are much better tools to use for this, namely IoC-containers. For those of you who attended Fredrik Karlssons talk at EPiPS2010 you already know that EPi (probably) are including StructureMap as their IoC-container of choice in vNext. All the available containers can basically do the same thing (although they have another syntax) so nothing shown here is specific to StructureMap. The purpose of the container is to configure all our dependencies in one place so that we can keep that information away from our code. Let’s take a look at how this can typically be configured. The basic setup we do is that we tell StructureMap which concrete class we want to use for a certain interface (again, the concept of interface, not the type) which in it’s simplest form looks like this 1: For<IWeatherRepository>().Use<YrNoWeatherRepository>(); Pretty self explanatory wouldn’t you say? A common way to handle the setup for structuremap is creating a bootstrapper that’s executed as part of the application start event in global.asax (or in EPi’s case a event that’s fired after it has done all the necessary configuration). 1: public static class StructureMapBootstrapper 2: { 3: public static void Bootstrap() 4: { 5: ObjectFactory.Configure( 6: x => x.AddRegistry(new WeatherRegistry()) 7: ); 8: } 9: } 1: public class WeatherRegistry : Registry 2: { 3: public WeatherRegistry() 4: { 5: For<IWeatherRepository>().Use<YrNoWeatherRepository>(); 6: } 7: } So, now what we have that setup we simply asks the StructureMap container for the service and let it resolve all the dependencies. 
1: ObjectFactory.GetInstance<WeatherService>() What StructureMap does here is that it looks at the greediest (that is, has most parameters) constructor for WeatherService. In our case it’s the constructor that takes a IWeatherRepository. So it looks through it’s configuration and fetches whatever we told it to use when it stumbles upon that interface. In our case it’s the concrete class YrNoWeatherRepository. In the example above our Service only had one dependency but it’s pretty common for our classes to have more dependencies than that. Imagine that the service in addition to the repository also had a IWeatherMapper and that the repository in it’s turn had a dependency to some IDataConfiguration. This is were the usage of a IoC-container and it’s auto-wiring really shines. Let’s say that we change the constructor of the YrNoWeatherRepository to 1: public YrNoWeatherRepository(IDataConfiguration config) 2: { 3: // code code code 4: } Another nice usage of an IoC-container would be our function that formats input from the user according to certain rules. We left the classes like this from our last refactoring 1: public void FormatAndSave(PageData pageData, string toFormatAndSave) 2: { 3: var formatters = new List<IMainBodyFormater>() 4: { 5: new MainBodyFormaterNewLine(), 6: new MainBodyFormaterAt(), 7: new MainBodyFormaterAuthenticated(), 8: new MainBodyFormaterNotAuthenticated() 9: }; 10: 11: var formatedText = new MainBodyFormater().Format(toFormatAndSave, formatters); 12: new EPiServerRepository().Save(pageData, formatedText); 13: } So we’re building a list of formatters (classes that implements IMainBodyFormater) and send it to the MainBodyFormatter. Let’s rewrite this to use constructor injecting into the MainBodyFormatter. 1: public MainBodyFormater(List<IMainBodyFormater> formatters) 2: { 3: this.formatters = formatters; 4: } Now we configure our formatters and StructureMap is smart enough to realize that since we have a dependency to an array of IMainBodyFormater it takes all classes that implements that interface that we’ve added and passes them in. 1: For<IMainBodyFormater>().Use<MainBodyFormaterNewLine>(); 2: For<IMainBodyFormater>().Use<MainBodyFormaterAt>(); 3: For<IMainBodyFormater>().Use<MainBodyFormaterAuthenticated>(); 4: For<IMainBodyFormater>().Use<MainBodyFormaterNotAuthenticated>(); 1: public void FormatAndSave(PageData pageData, string toFormatAndSave) 2: { 3: var formatedText = ObjectFactory.GetInstance<MainBodyFormater>().Format(toFormatAndSave); 4: new EPiServerRepository().Save(pageData, formatedText); 5: } We can even take this a step further and simply ask StructureMap to scan our assembly and add all classes that implements the interface 1: Scan(x => 2: { 3: x.TheCallingAssembly(); 4: x.AddAllTypesOf<IMainBodyFormater>(); 5: }); So now adding a new formatter becomes as simple as implementing an interface and placing it in an assembly that’s scanned (you can configure that to be a specific assembly, all assemblies or pretty much anything you want) and it will automatically get’s passed into the MainBodyFormatter. So you don’t have to change or worry at all about the infrastructure of your code to add features. You’ll want to try and limit your calls to ObjectFactory.GetInstance throughout your code and instead leverage the power of auto wiring. 
For instance, it’s quite possible to do this 1: public WeatherService() 2: : this(ObjectFactory.GetInstance<IWeatherRepository>()) 3: { 4: 5: } 6: 7: public WeatherService(IWeatherRepository weatherRepository) 8: { 9: this.weatherRepository = weatherRepository; 10: } The problem with this is that now your weather service is tightly coupled to StructureMap. So you’ve basically just traded one hard dependency for another. If you’re using MVC what you want to do is to use the ControllerFactory and control the creation of the controllers and handle all your dependency resolving there. When it comes to WebForms it get’s a bit trickier since you can’t control how your Page objects are initiated. There are workarounds using setter injections, I’ve blogged about that here. Naturally there are many more things an IoC-container can do for you than what I’ve shown here so I really encourage you to head over to their website and read more. Nice post again Stefan. Thanks! / Stefan Forsberg Thank for very much for this excellent series of posts. / Emil Lundin
https://world.episerver.com/blogs/Stefan-Forsberg/Dates/2010/6/Design-principles-and-testing--part-4/
CC-MAIN-2019-30
en
refinedweb
#include <netconfig.h> struct netconfig *getnetconfig(void *handlep); void *setnetconfig(void); int endnetconfig(void *handlep); struct netconfig *getnetconfigent(const char *netid); void freenetconfigent(struct netconfig *netconfigp); void nc_perror(const char *msg); char *nc_sperror(void); The library routines described on this page are part of the Network Selection component. They provide the application access to the system network configuration database, /etc/netconfig. In addition to the routines for accessing the netconfig database, Network Selection includes the environment variable NETPATH (see environ(5)) and the NETPATH access routines described in getnetpath(3NSL). getnetconfig() returns a pointer to the current entry in the netconfig database, formatted as a struct netconfig. Successive calls will return successive netconfig entries in the netconfig database. getnetconfig() can be used to search the entire netconfig file. getnetconfig() returns NULL at the end of the file. handlep is the handle obtained through setnetconfig(). A call to setnetconfig() has the effect of ``binding'' to or ``rewinding'' the netconfig database. setnetconfig() must be called before the first call to getnetconfig() and may be called at any other time. setnetconfig () need not be called before a call to getnetconfigent(). setnetconfig() returns a unique handle to be used by getnetconfig(). endnetconfig() should be called when processing is complete to release resources for reuse. handlep is the handle obtained through setnetconfig(). Programmers should be aware, however, that the last call to endnetconfig() frees all memory allocated by getnetconfig() for the struct netconfig data structure. endnetconfig() may not be called before setnetconfig(). getnetconfigent() returns a pointer to the struct netconfig structure corresponding to netid. It returns NULL if netid is invalid (that is, does not name an entry in the netconfig database). freenetconfigent() frees the netconfig structure pointed to by netconfigp (previously returned by getnetconfigent()). nc_perror() prints a message to the standard error indicating why any of the above routines failed. The message is prepended with the string msg and a colon. A NEWLINE is appended at the end of the message. nc_sperror() is similar to nc_perror() but instead of sending the message to the standard error, will return a pointer to a string that contains the error message. nc_perror() and nc_sperror() can also be used with the NETPATH access routines defined in getnetpath(3NSL). setnetconfig() returns a unique handle to be used by getnetconfig(). In the case of an error, setnetconfig() returns NULL and nc_perror() or nc_sperror() can be used to print the reason for failure. getnetconfig() returns a pointer to the current entry in the netconfig() database, formatted as a struct netconfig. getnetconfig() returns NULL at the end of the file, or upon failure. endnetconfig() returns 0 on success and −1 on failure (for example, if setnetconfig() was not called previously). On success, getnetconfigent() returns a pointer to the struct netconfig structure corresponding to netid; otherwise it returns NULL. nc_sperror() returns a pointer to a buffer which contains the error message string. This buffer is overwritten on each call. In multithreaded applications, this buffer is implemented as thread-specific data. See attributes(5) for descriptions of the following attributes: getnetpath(3NSL), netconfig(4), attributes(5), environ(5)
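For illustration (this example is not part of the reference page), a typical loop over the network configuration database looks roughly like the following; only the nc_netid member of struct netconfig is assumed here:

#include <stdio.h>
#include <netconfig.h>

int main(void) {
    void *handle;
    struct netconfig *nconf;

    if ((handle = setnetconfig()) == NULL) {
        nc_perror("setnetconfig");
        return 1;
    }

    /* Walk every entry in /etc/netconfig */
    while ((nconf = getnetconfig(handle)) != NULL)
        printf("transport: %s\n", nconf->nc_netid);

    endnetconfig(handle);
    return 0;
}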
https://docs.oracle.com/cd/E36784_01/html/E36875/getnetconfigent-3nsl.html
CC-MAIN-2019-30
en
refinedweb
SocialPing is a project that enable posting directly from EPiServer to ALL your social networks like Twitter, Facebook and LinkedIn in one go. I have experienced that more and more customers is asking for ways to integrate their social media networks to their CMS solutions. After some research I found a lot of blog entries on how to build your own Twitter Workflow, how to build a gadget for reading updates from Facebook and Twitter but I couldn't find a way to publish to ALL my social networks. This was my motivation when I started working on a project that allows you to publish messages to all your social networks directly from EPiServer. The list currently contains 36 different networks – including Twitter, Facebook and LinkedIn. SocialPing (which is included as a reference in your EPiServer project) uses services from Ping.fm to update your status on your social media networks. Ping.fm provides several ways to update your status. This project uses the email service to do the update. You can download the source code for the project here: SocialPing page on EPiCode. 1. Create your account on Ping.fm After signing up, you have to assign the different social networks you want to hook your Ping.fm account to. The list contains 36 different social networks to choose from (Twitter, Facebook, LinkedIn, MySpace etc). 2. Grab your private Ping.fm email address After assigning different networks you have to grab your private Ping.fm email address. This email address is found under “Services/Tools” on the dashboard. You are going to use this email address in step 5 in this tutorial. 3. Add a new property in admin mode Now we have to define what to publish to the social networks when publish pages. We create a new property “PublishSocialMediaString” (string) to page type “News item”. Add this property to all page types you want to publish to your networks. I recommend to create a new tab for this property – just to separate this from the page content. 4. Copy folder “SocialPing” to the root of your EPiServer project The folder contains two files: “settings.config” and “log.txt”. The project is available for download here. 5. Modify “settings.config” Modify mail server settings and property name you created in step 3. Config.settings should look like this: <?xml version="1.0" encoding="utf-8" ?> <settings> <socialping server="smtp.google.com" port="25" username="myusername" password="mypassword" toaddress="[email protected]" fromaddress="[email protected]" propertytopublish="PublishSocialMediaString" logexceptions="true" /> </settings> 6. Change Web.config (optional) Change delivery method for SMTP to network and not IIS: <mailSettings> <smtp deliveryMethod="Network" /> </mailSettings> 7. Add reference to “SocialPing.dll” in your EPiServer project. 8. Modify Global file in your EPiServer project Now we have to attach a new PageEventHandler in Global.asax.cs. In this example we call method PostMessage when a page is published: using System.Web.Mail; using SocialPing; protected void Application_Start(Object sender, EventArgs e){ DataFactory.Instance.PublishedPage += new PageEventHandler(Instance_PublishedPage); } static void Instance_PublishedPage(object sender, PageEventArgs e){ StatusMessage statusMsg = new StatusMessage(); statusMsg.PostMessage(e.Page); } (The project is now using EPiServers initialization system to do this process auto-magically. No need to change global.asax) Finished! We’re now able to publish directly from EPiServer. 
Just to demonstrate we create a new page (News item), set a value for “PublishSocialMediaString” and click “Save and Publish”. And that’s it – your status is updated! A short URL to your page is created and appended on the status message: Twitter: Ping.fm is a free service which makes you update all your social networks in one go. Ping.fm offers a variety of ways to integrate and post to social networks. This article describes how to use ping.fm to update status messages using ping.fm’s mail service. SocialPing is a class library which includes methods for sending status messages to ping.fm. Ping.fm forwards the message to your social networks. - The project relies on Ping.fm and assumes that their services is up and running. - Your status is updated every time you publish a page. So if you publish a page ten times, you will have ten status updates in your twitter account. This should be handled by the plug-in. Now you have to leave the text field blank if you don't want to publish. You should be able to check (with a status field) if the current page is already posted to Ping.fm. - No character limitations. It doesn’t check number of characters before sending to Ping. The status messages should be limited to 140 characters (Twitter) including URL. A character counter should be visible in edit mode. - No status message is displayed. The editor don’t know if the message was sent and updated in social networks. - Current version uses txt-file for logging. Log4net should be used for logging. - Next version should include methods for retrieving your last status messages by using API calls to Ping.fm. - It should be possible to choose which network you want to post to. This version is posting to all networks registered with your Ping.fm account. - It should be possible to turn on and off posting with config file. - The plan is to create a epimodule for easier installation and setup. Read more about the project and download the source code on epicode. Runtime: EPiServer CMS 6, .NET 3.5 If you want to change and compile the source code: Visual Studio 2008 SP1. Your feedback is appreciated for further development of the project. Feel free to contact me: stian.grepstad /at/ makingwaves.no Cool stuff / Anders Hattestad Good stuff Stian I would how ever improve one thing. Instead of manually registering the published page event + send the StatusMessage in global.axax => Use EPiServers initialization system inside StatusPing.dll to do this process auto-magically. E.g [ModuleDependency(typeof(EPiServer.Web.InitializationModule))] public class StatusPingModule : IInitializableModule { public void Initialize(InitializationEngine context) { DataFactory.Instance.PublishedPage += Instance_PublishedPage; } static void Instance_PublishedPage(object sender, PageEventArgs e) { StatusMessage statusMsg = new StatusMessage(); statusMsg.PostMessage(e.Page); } public void Preload(string[] parameters) { } public void Uninitialize(InitializationEngine context) { DataFactory.Instance.PublishedPage -= Instance_PublishedPage; } } Great post Stian. Really looking forward to using this in a project. Thanks for the feedback Jarle! This is a great improvement to the project - it will make the setup even easier :) I have now changed (and released) the source code so it now uses EPiServers initialization system inside StatusPing.dll. No modifications to global.asax is necessary to make the plugin work now. Good stuff :)
https://world.episerver.com/blogs/Stian-Grepstad/Dates/2010/9/How-to-post-to-Twitter-and-Facebook-from-edit-mode-in-EPiServer--SocialPing/
CC-MAIN-2019-30
en
refinedweb
6 years, 3 months ago. Arduino + Mbed i2c?

Hello. I am currently working on a project that requires me to send data from the mbed to the Arduino using I2C... Does anyone have an idea of how I can do this? This is what I have tried so far.

mbed master code

#include "mbed.h"

Serial rn42(p9, p10);
DigitalOut myled(LED1);
I2C i2c(p28, p27);
int x;

int main() {
    rn42.baud(115200);
    while (1) {
        if (rn42.readable()) {
            x = rn42.getc();
            printf("%d\n", x);
            myled = !myled;
            i2c.start();
            i2c.write(0xC0);
            i2c.write(x);
            i2c.stop();
        }
    }
}

arduino slave code

#include <Wire.h>

byte y;

void setup() {
    Serial.begin(115200);
    Wire.begin(0xC0);
}

void loop() {
    Wire.onReceive(receiveEvent);
}

void receiveEvent(int howMany) {
    y = Wire.read();
    Serial.println(y);
}

Thank you very much.

2 Answers

1 year, 3 months ago. Be aware that mbed I2C addresses are shifted 1 bit to the left. So if you use the I2C address 0x0F (00001111) on the Arduino, it will be 0x1E (00011110) on mbed.

3 years, 7 months ago. I'm in the same boat. Arduino (Master) sending messages to mbed (slave or another master). Has this already been mastered?
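To illustrate the address point from the first answer: Arduino's Wire library takes the 7-bit address, while mbed's I2C::write() takes the 8-bit (left-shifted) form, and the onReceive handler is normally registered once in setup(). A hedged sketch of a matching pair (0x1E on the mbed side, 0x0F on the Arduino side), not a drop-in fix for the exact code above:

// mbed master - 8-bit address (0x0F << 1 == 0x1E)
#include "mbed.h"

I2C i2c(p28, p27);

int main() {
    char data = 42;
    while (1) {
        i2c.write(0x1E, &data, 1);   // address, buffer, length
        wait(1.0);
    }
}

// Arduino slave - 7-bit address, handler registered once in setup()
#include <Wire.h>

void receiveEvent(int howMany) {
    while (Wire.available()) {
        Serial.println(Wire.read());
    }
}

void setup() {
    Serial.begin(115200);
    Wire.begin(0x0F);                // join the bus as a slave at 0x0F
    Wire.onReceive(receiveEvent);
}

void loop() {}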
https://os.mbed.com/questions/755/Arduino-Mbed-i2c/
CC-MAIN-2019-30
en
refinedweb
OLAF (Online Automated Forensics) is an automated digital forensics tool which uses computer vision and machine learning to automatically classify images and documents a user downloads while browsing the internet. OLAF monitors browsers and well known user folders for file-system activity and uses a two-stage detection process that first runs image object-detection algorithms to determine if a downloaded image is text-rich or is likely a photo containing human faces. Photographic images are sent to Azure Cognitive Services' Computer Vision API for analyzing and classifying the content including whether or not the photo may contain sexual or adult content. For images that are not photos, OLAF also runs OCR on the image to extract any text and sends this to Azure Cognitive Services' Text Analytics API to extract information regard things like the entities mentioned. OLAF captures the precise date and time an image artifact was created on a PC together with the artifact itself and attributes describing the content of the image which can then be used to provide forensic evidence for an investigation into violations of computer use policies. For PCs in places like libraries or schools or internet cafes, OLAF can help enforce an organization's computer use policies on intellectual property and piracy, adult or sexual images, hate speech or incitements to violence and terrorism, and other disallowed content downloaded by users. For businesses, OLAF can detect the infiltration of documents which may contain the intellectual property of competitors and put the organization in legal jeopardy. In this article, I will describe some of the steps and challenges in building a real-world image classification application in .NET. I will talk about using two popular open-source machine learning libraries - Tesseract OCR and Accord.NET as well as using a cloud-based machine learning service - the Computer Vision API from Azure Cognitive Services - to analyze and classify images. I won't talk much here about image classification algorithms or setting up the Azure Cognitive Services accounts, rather I will focus more on the process of putting together different image classification and machine learning software components and the design decisions and tradeoffs that arise in this process. Setting up an Azure Cognitive Services resource is pretty simple and can be done in five minutes from the Azure Portal: Like many Azure services, the Computer Vision API has a free tier of usage so anyone with a valid Azure subscription including the free Azure for Students subscription can try it out or run an application like OLAF. Of course technological issues aside, OLAF seems to be just a sophisticated way of spying on a user. Whether on a public PC or not, no one likes the idea of a program watching over their shoulder at what they are looking at online. But as far as technology is concerned, the most reliable position is that if something can possibly be done then someone is going to do it so worrying about the misuse of the things you create is largely pointless. It's way better to learn about how something like user-activity monitoring can be done and publishing it as open-source means other people can study it and verify that at least the information being collected about the user's activity is being used in a legitimate way. 
You can count at least ten commercial user monitoring applications available for purchase today and the number of commercial companies making proprietary user monitoring software is only growing. Open-source user monitoring software like OLAF can provide a somewhat more ethical and secure way of user monitoring where such monitoring does have a legitimate purpose and value. OLAF is a .NET application written in C# targeting .NET Framework 4.5+. Although envisoned as a desktop Windows application, nothing in the core libraries is desktop or Windows specific and OLAF can conceivably be run on other desktop platforms like Linux or on mobile phones using Xamarin. OLAF runs in the background monitoring the user's browser and the files he or she is downloading to specific folders. When a file download of an image is detected, OLAF runs different image classification algorithms on the artifact. First, it determines if the image likely contains text using the Tesseract OCR library. Then it determines if the image is likely to contain a human face using the Viola-Jones object detection framework. If the image is likely a photo with human faces, OLAF sends the image to the Azure Computer Vision API for more advanced classification. In this way, we can avoid calls to the Azure API until it is absolutely necessary. The video above shows a short demonstration of the basic features of OLAF. First, I navigate to and download an image of a sports car to my downloads folder. The image classification part of OLAF runs locally and determines that the image does not contain any human faces and the processing pipeline ends. 05:51:48<03> [DBG] Read 33935 bytes from "C:\\Users\\Allister\\Downloads\\hqdefault.jpg". 05:51:48<03> [DBG] Wrote 33935 bytes to "c:\\Projects\\OLAF\\data\\artifacts\\20190628_UserDownloads_636973760617841436\\3_hqdefault.jpg". 05:51:48<08> [DBG] "FileImages" consuming message 3. 05:51:48<08> [INF] Extracting image from artifact 3 started... 05:51:48<08> [DBG] Extracted image from file "3_hqdefault.jpg" with dimensions: 480x360 pixel format: Format24bppRgb hres: 96 yres: 96. 05:51:48<08> [INF] Extracting image from artifact 3 "completed" in 17.4 ms 05:51:48<08> [INF] "FileImages" added artifact id 4 of type "OLAF.ImageArtifact" from artifact 3. 05:51:48<09> [DBG] "TesseractOCR" consuming message 4. 05:51:48<08> [DBG] Pipeline ending for artifact 3. 05:51:48<09> [DBG] Pix has width: 480 height: 360 depth: 32 xres: 0 yres: 300. 05:51:48<09> [INF] Tesseract OCR (fast) started... 05:51:48<09> [INF] Artifact id 4 is likely a photo or non-text image. 05:51:49<09> [INF] Tesseract OCR (fast) "completed" in 171.4 ms 05:51:49<10> [DBG] "ViolaJonesFaceDetector" consuming message 4. 05:51:49<10> [INF] Viola-Jones face detection on image artifact 4. started... 05:51:49<10> [INF] Found 0 candidate face object(s) in artifact 4. 05:51:49<10> [INF] Viola-Jones face detection on image artifact 4. "completed" in 96.9 ms 05:51:49<11> [DBG] "MSComputerVision" consuming message 4. 05:51:49<11> [DBG] Not calling MS Computer Vision API for image artifact 4 without face object candidates. When I download an image of celebrity Rashida Jones, the image classification algorithm detects that the downloaded image has a human face and then sends the image to the Azure Computer Vision API which analyzes the image and adds tags with all the objects it detected in the image including a score on whether it thinks the image is adult content. 05:52:13<03> [DBG] Waiting a bit for file to complete write... 
05:52:13<03> [DBG] Read 62726 bytes from "C:\\Users\\Allister\\Downloads\\rs_1024x759-171120092206-1024. Rashida-Jones-Must-Do-Monday.jl.112017.jpg". 05:52:13<03> [DBG] Wrote 62726 bytes to "c:\\Projects\\OLAF\\data\\artifacts\\20190628_UserDownloads_636973760617841436\\ 5_rs_1024x759-171120092206-1024.Rashida-Jones-Must-Do-Monday.jl.112017.jpg". 05:52:13<08> [DBG] "FileImages" consuming message 5. 05:52:13<08> [INF] Extracting image from artifact 5 started... 05:52:13<08> [DBG] Extracted image from file "5_rs_1024x759-171120092206-1024.Rashida-Jones-Must-Do-Monday.jl.112017.jpg" with dimensions: 1024x759 pixel format: Format24bppRgb hres: 72 yres: 72. 05:52:13<08> [INF] Extracting image from artifact 5 "completed" in 34.4 ms 05:52:14<08> [INF] "FileImages" added artifact id 6 of type "OLAF.ImageArtifact" from artifact 5. 05:52:14<08> [DBG] Pipeline ending for artifact 5. 05:52:14<09> [DBG] "TesseractOCR" consuming message 6. 05:52:14<09> [DBG] Pix has width: 1024 height: 759 depth: 32 xres: 0 yres: 300. 05:52:14<09> [INF] Tesseract OCR (fast) started... 05:52:14<09> [INF] Artifact id 6 is likely a photo or non-text image. 05:52:14<09> [INF] Tesseract OCR (fast) "completed" in 99.8 ms 05:52:14<10> [DBG] "ViolaJonesFaceDetector" consuming message 6. 05:52:14<10> [INF] Viola-Jones face detection on image artifact 6. started... 05:52:14<10> [INF] Found 1 candidate face object(s) in artifact 6. 05:52:14<10> [INF] Viola-Jones face detection on image artifact 6. "completed" in 173.6 ms 05:52:14<11> [DBG] "MSComputerVision" consuming message 6. 05:52:14<11> [INF] Artifact 6 is likely a photo with faces detected; analyzing using MS Computer Vision API. 05:52:14<11> [INF] Analyze image using MS Computer Vision API. started... 05:52:16<11> [INF] Analyze image using MS Computer Vision API. "completed" in 2058.9 ms 05:52:16<11> [INF] Image categories: ["people_portrait/0.7265625"] 05:52:16<11> [INF] Image properties: Adult: False/0.00276160705834627 Racy: False/0.00515600480139256 Description:["indoor", "person", "sitting", "holding", "woman", "box", "looking", "front", "man", "laptop", "table", "smiling", "shirt", "white", "large", "computer", "yellow", "young", "food", "refrigerator", "cat", "standing", "sign", "kitchen", "room", "bed"] 05:52:16<12> [DBG] "AzureStorageBlobUpload" consuming message 6. 05:52:16<12> [INF] Artifact id 6 not tagged for preservation. 05:52:16<12> [DBG] Pipeline ending for artifact 6. Azure Computer Vision is able to detect both 'racy' images and images which contain nudity. 23:35:11<03> [DBG] Waiting a bit for file to complete write... 23:35:11<03> [DBG] Read 7247 bytes from "C:\\Users\\Allister\\Downloads\\download (1).jpg". 23:35:11<03> [DBG] Wrote 7247 bytes to "c:\\Projects\\OLAF\\data\\artifacts\\20190628_UserDownloads_636973760617841436\\ 1_download (1).jpg". 23:35:11<08> [DBG] "FileImages" consuming message 1. 23:35:11<08> [INF] Extracting image from artifact 1 started... 23:35:11<08> [DBG] Extracted image from file "1_download (1).jpg" with dimensions: 194x259 pixel format: Format24bppRgb hres: 96 yres: 96. 23:35:11<08> [INF] Extracting image from artifact 1 "completed" in 14.6 ms 23:35:11<09> [DBG] "TesseractOCR" consuming message 2. 23:35:11<08> [INF] "FileImages" added artifact id 2 of type "OLAF.ImageArtifact" from artifact 1. 23:35:11<08> [DBG] Pipeline ending for artifact 1. 23:35:11<09> [DBG] Pix has width: 194 height: 259 depth: 32 xres: 0 yres: 300. 23:35:11<09> [INF] Tesseract OCR (fast) started... 
23:35:11<09> [INF] Artifact id 2 is likely a photo or non-text image. 23:35:11<09> [INF] Tesseract OCR (fast) "completed" in 176.5 ms 23:35:11<10> [DBG] "ViolaJonesFaceDetector" consuming message 2. 23:35:11<10> [INF] Viola-Jones face detection on image artifact 2. started... 23:35:11<10> [INF] Found 1 candidate face object(s) in artifact 2. 23:35:11<10> [INF] Viola-Jones face detection on image artifact 2. "completed" in 86.0 ms 23:35:11<11> [DBG] "MSComputerVision" consuming message 2. 23:35:11<11> [INF] Artifact 2 is likely a photo with faces detected; analyzing using MS Computer Vision API. 23:35:11<11> [INF] Analyze image using MS Computer Vision API. started... 23:35:14<11> [INF] Analyze image using MS Computer Vision API. "completed" in 2456.2 ms 23:35:14<11> [INF] Image categories: ["people_/0.99609375"] 23:35:14<11> [INF] Image properties: Adult: False/0.0119723649695516 Racy: True/0.961882710456848 Description:["clothing", "person", "woman", "posing", "young", "smiling", "underwear", "carrying", "holding", "dress", "standing", "white", "water", "board", "suitcase", "wedding", "bed"] 23:35:14<12> [DBG] "AzureStorageBlobUpload" consuming message 2. The core internal design centers around an asynchronous messaging queue that is used by different parts of the application to communicate in an efficient way without blocking a particular thread. Since OLAF is designed to run continuously in the background and has to perform some potentially computationally intensive operations, performance is a key consideration of the overall design. The app is structured as a pipeline where artifacts generated by user activity are processed through OCR, image classification and other services. Each component of the pipeline is a C# class which inherits from the base OLAFApi class. public abstract class OLAFApi<TApi, TMessage> where TMessage : Message Each component declares the kind of queue message it is interested in processing or sending. For instance, the DirectoryChangesMonitor component is declared as: public class DirectoryChangesMonitor : FileSystemMonitor<FileSystemActivity, FileSystemChangeMessage, FileArtifact> This component listens to FileSystemChangeMessage queue messages and after preserving the file artifact places a FileArtifact message on the queue indicating that a file artifact is ready for processing by the other pipeline components. FileSystemChangeMessage FileArtifact Pipeline components are built on a set of base classes declared in the OLAF.Base project. There are 3 major categories of components: Activity Detectors, Monitors, and Services. Activity detectors interface with the operating system to detect user activity like file downloads. public abstract class ActivityDetector<TMessage>: OLAFApi<ActivityDetector<TMessage>, TMessage>, IActivityDetector where TMessage : Message Activity Detectors only place messages on the message queue and do not listen to any messages from the queue. For instance, the FileSystemActivity detector is declared as: FileSystemActivity public class FileSystemActivity : ActivityDetector<FileSystemChangeMessage>, IDisposable File-system actvity is detected via the standard .NET FileSystemWatcher class. When a file is created, a message is enqueued: private void FileSystemActivity_Created(object sender, FileSystemEventArgs e) { EnqueueMessage(new FileSystemChangeMessage(e.FullPath, e.ChangeType)); } which allows another component to process the actual file that was created. 
Activity detectors are designed to be lightweight and execute quickly since they represent OLAF's point of contact with the operating system and may be executed inside operating system hooks that need to execute quickly and without errors. Activity detectors simply place messages on the queue and return to idle allowing them to process a potentially large amount of activity notifications coming from the operating system. Actual processing of the messages and creation of the image artifacts is handled by the Monitor components. Monitors listen for activity detector messages on the queue and create the initial artifacts that will be processed through the pipeline based on the information supplied by the activity detectors. public abstract class Monitor<TDetector, TDetectorMessage, TMonitorMessage> : OLAFApi<Monitor<TDetector, TDetectorMessage, TMonitorMessage>, TMonitorMessage>, IMonitor, IQueueProducer where TDetector : ActivityDetector<TDetectorMessage> where TDetectorMessage : Message where TMonitorMessage : Message For instance, the DirectoryChangesMonitor class is declared as: DirectoryChangesMonitor public class DirectoryChangesMonitor : FileSystemMonitor<FileSystemActivity, FileSystemChangeMessage, FileArtifact> This monitor listens for FileSystemChange messages indicating a file was downloaded and first copies the file to an internal OLAF data folder so that the file is preserved even if the user deletes it afterwards. It then places a FileArtifact message on the queue indicating than an artifact is available for processing by the queue image classification and other services. FileSystemChange Services are the components that actually perform analysis on the image and other artifacts generated by user activity. public abstract class Service<TClientMessage, TServiceMessage> : OLAFApi<Service<TClientMessage, TServiceMessage>, TServiceMessage>, IService where TClientMessage : Message where TServiceMessage : Message For instance, the TesseractOCR service is declared as: TesseractOCR public class TesseractOCR : Service<ImageArtifact, Artifact> This service consumes an ImageArtifact but as it can produce different artifacts depending on the results of the OCR operation, it only uses a generic Artifact in its signature indicating the output queue message type. The BlobDetector service consumes image artifacts and also produces image artifacts. ImageArtifact Artifact BlobDetector public class BlobDetector : Service<ImageArtifact, ImageArtifact> Each service processes artifacts received in the queue and outputs an artifact enriched with any information that the service was able to add via text extraction or other machine learning applications. This artifact can then be further processed by other services in the queue. OLAF uses the popular Tesseract OCR library for detecting and extracting any text in an image the user downloads. We use the tesseract.net and also leptonica.net .NET libraries which wrap both Tesseract and the Leptonica image processing library which Tesseract uses. Although Azure's Computer Vision API also has OCR capabilities, using Tesseract locally saves the cost of calling the Azure service for each image artifact being processed. The TesseractOCR service processes image artifact as follows. First, we create a Leptonica image from the image artifact posted to the queue. 
protected override ApiResult ProcessClientQueueMessage(ImageArtifact message) { BitmapData bData = message.Image.LockBits( new Rectangle(0, 0, message.Image.Width, message.Image.Height), ImageLockMode.ReadOnly, message.Image.PixelFormat); int w = bData.Width, h = bData.Height, bpp = Image.GetPixelFormatSize(bData.PixelFormat) / 8; unsafe { TesseractImage.SetImage(new UIntPtr(bData.Scan0.ToPointer()), w, h, bpp, bData.Stride); } Pix = TesseractImage.GetInputImage(); Debug("Pix has width: {0} height: {1} depth: {2} xres: {3} yres: {4}.", Pix.Width, Pix.Height, Pix.Depth, Pix.XRes, Pix.YRes); Then we run the recognizer on the image: List<string> text; using (var op = Begin("Tesseract OCR (fast)")) { TesseractImage.Recognize(); ResultIterator resultIterator = TesseractImage.GetIterator(); text = new List<string>(); PageIteratorLevel pageIteratorLevel = PageIteratorLevel.RIL_PARA; do { string r = resultIterator.GetUTF8Text(pageIteratorLevel); if (r.IsEmpty()) continue; text.Add(r.Trim()); } while (resultIterator.Next(pageIteratorLevel)); If Tesseract recognizes less than seven text sections, then the service decides it is likely a photo or non-text image. Services automatically pass artifacts they receive to other services that are listening on the queue so in this case we don't have to do anything further. If there are more than seven text sections, then the service creates an additional TextArtifact and posts it to the queue. TextArtifact if (text.Count > 0) { string alltext = text.Aggregate((s1, s2) => s1 + " " + s2).Trim(); if (text.Count < 7) { Info("Artifact id {0} is likely a photo or non-text image.", message.Id); } else { message.OCRText = text; Info("OCR Text: {0}", alltext); } } else { Info("No text recognized in artifact id {0}.", message.Id); } op.Complete(); } message.Image.UnlockBits(bData); if (text.Count >= 7) { TextArtifact artifact = new TextArtifact(message.Name + ".txt", text); EnqueueMessage(artifact); Info("{0} added artifact id {1} of type {2} from artifact {3}.", Name, artifact.Id, artifact.GetType(), message.Id); } return ApiResult.Success; } The ViolaJonesFaceDetector service attempts to guess if an image artifact contains a human face. This service uses the Accord.NET machine learning framework for .NET. and the HaarObjectDetector class which implements the Viola-Jones object-detection algorithm for human faces. If the detector detects human faces in the image artifact, it adds this information to the artifact. ViolaJonesFaceDetector protected override ApiResult ProcessClientQueueMessage(ImageArtifact artifact) { if (artifact.HasOCRText) { Info("Not using face detector on text-rich image artifact {0}.", artifact.Id); return ApiResult.Success; } Bitmap image = artifact.Image; using (var op = Begin("Viola-Jones face detection on image artifact {0}.", artifact.Id)) { Rectangle[] objects = Detector.ProcessFrame(image); if (objects.Length > 0) { if (!artifact.DetectedObjects.ContainsKey(ImageObjectKinds.FaceCandidate)) { artifact.DetectedObjects.Add(ImageObjectKinds.FaceCandidate, objects.ToList()); } else { artifact.DetectedObjects [ImageObjectKinds.FaceCandidate].AddRange(objects); } } Info("Found {0} candidate face object(s) in artifact {1}.", objects.Length, artifact.Id); op.Complete(); } return ApiResult.Success; } The information on detected human faces is now available in the artifact to other services listening to the queue. The MSComputerVision service use the Azure Cognitive Services' Computer Vision API SDK which is available on NuGet. 
Like most other Azure SDKs, setting up and using this Azure API from .NET is very easy. Once we have a valid Computer Vision API key, we create an API client in our service's Init() method: MSComputerVision Init() public override ApiResult Init() { try { ApiKeyServiceClientCredentials c = new ApiKeyServiceClientCredentials(ApiAccountKey); Client = new ComputerVisionClient(c); Client.Endpoint = ApiEndpointUrl; return SetInitializedStatusAndReturnSucces(); } catch(Exception e) { Error(e, "Error creating Microsoft Computer Vision API client."); Status = ApiStatus.RemoteApiClientError; return ApiResult.Failure; } } If the artifact being processed does not have any detected face objects, then we do not call the Computer Vision API. This reduces the cost of calling the Azure API to only those artifacts we think are photos of human beings. protected override ApiResult ProcessClientQueueMessage(ImageArtifact artifact) { if (!artifact.HasDetectedObjects(ImageObjectKinds.FaceCandidate) || artifact.HasOCRText) { Debug("Not calling MS Computer Vision API for image artifact {0} without face object candidates.", artifact.Id); } else if (artifact.FileArtifact == null) { Debug("Not calling MS Computer Vision API for non-file image artifact {0}.", artifact.Id); } else { To call the API, we open a file stream and pass this to the API client which calls the API and returns a Task<ImageAnalysis> instance with the result of the asynchronous operation: Task<ImageAnalysis> Info("Artifact {0} is likely a photo with faces detected; analyzing using MS Computer Vision API.", artifact.Id); ImageAnalysis analysis = null; using (var op = Begin("Analyze image using MS Computer Vision API.")) { try { using (Stream stream = artifact.FileArtifact.HasData ? (Stream) new MemoryStream(artifact.FileArtifact.Data) : new FileStream(artifact.FileArtifact.Path, FileMode.Open)) { Task<ImageAnalysis> t1 = Client.AnalyzeImageInStreamAsync(stream, VisualFeatures, null, null, cancellationToken); analysis = t1.Result; op.Complete(); } } catch (Exception e) { Error(e, "An error occurred during image analysis using the Microsoft Computer Vision API."); return ApiResult.Failure; } } The image analysis instance contains the categories the Computer Vision API classified the image into: if (analysis.Categories != null) { Info("Image categories: {0}", analysis.Categories.Select (c => c.Name + "/" + c.Score.ToString())); foreach (Category c in analysis.Categories) { artifact.Categories.Add(new ArtifactCategory(c.Name, null, c.Score)); } } As well as scores indicating if the image was classified as racy or having adult content. Info("Image properties: Adult: {0}/{1} Racy: {2}/{3} Description:{4}", analysis.Adult.IsAdultContent, analysis.Adult.AdultScore, analysis.Adult.IsRacyContent, analysis.Adult.RacyScore, analysis.Description.Tags); artifact.IsAdultContent = analysis.Adult.IsAdultContent; artifact.AdultContentScore = analysis.Adult.AdultScore; artifact.IsRacy = analysis.Adult.IsRacyContent; artifact.RacyContentScore = analysis.Adult.RacyScore; analysis.Description = analysis.Description; } return ApiResult.Success; } All the information provided by the image classifiers of the Computer Vision API is added to the artifact and is now available to other services further down the queue for further processing. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
https://www.codeproject.com/Articles/5160679/Using-Image-Classification-and-Azure-Cognitive-Ser
CC-MAIN-2019-30
en
refinedweb
If the Properties view and Palette view are not opened, you can open the views by right-clicking the BPEL editor and selecting the Show in Properties or Show Palette in Palette view options. Then you should have the view like this: In the Palette view, you can drag a BPEL element to the BPEL editor and drop it in the place you want. In the Properties view, you can view the information on every element in the BPEL process. The contents of the Properties view is automatically updated as elements are selected in the BPEL editor. The table below describes the tabs shown in the Properties view: In order to see how a simple BPEL process works in action, you should do some steps as below: Modify two variables of the process: Click on the details tab of the input variable, and then click the Browse... button. Then choose string primitive from the list of Matches. Select xsd as a namespace in the popup menu. Add an Assign element between the receiveInput element and replyOutput element. Select the Assign element in the BPEL editor in order to get the properties information of it in the Properties view. Set its name in the Description tab as assignHelloMesg. In the Details section of Properties view, you should click New to add a copy sub-element to the element. Assign "Variable to Variable"(input:string to output). At this time, an "initializer" popup dialog appears. Click Yes in the dialog. Then you should click New once more and select Expression to Variable (assign concat($input,' World') ) to result:string.
https://help.eclipse.org/2019-06/topic/org.eclipse.bpel.help/html/GettingStarted/CreatingAndEditing/editingProcessFile.html
CC-MAIN-2019-30
en
refinedweb
The Beginning Peter ・3 min read For my first post I decided I should give some background as to how I got to where I am today! The Start I started at a small engineering firm about 3 years ago, with only some minor coding knowledge. While coding was supposed to be a portion of my position, I became more interested in moving our data visualization dashboard forward. The initial stack was pretty simple, data fetching from .csv, some minor database interaction through a small php backend. The frontend was initially built using DC.js and jQuery, with no webpack or build/toolchain. We quickly began to stress the limits of what DC could do perfomantly. Thus began my first major project: writing a custom charting library. The First Project First I needed to learn how to create a library that could be referenced under a common namespace. So I read the d3.js source code! Went in and copied the umd header section and started going. Then it was callbacks, how to get functions to update something in a constructed function (hindsight should have made it a class, but hindsight is 20/20, we're also not at the end). The initial code was pretty ugly and not DRY even in the slightest. No base to extend from. The only upside is we got away from the limitations of DC.js, and cleared up quite a bit of rendering jank. The downside: still importing jQuery(30kb), d3.js (70kb) and at it's most lean my library @ 70kb. And that was only the dependencies. The rest of our code was over 200kb. This led to the next step: removing jQuery. The removal of dependencies and the start of webpack jQuery was relatively easy to remove. d3 does many of the same things jQ does. One dependency gone! Now I had seen webpack mentioned before, but up to this point we only had minimal use for it, and as it complicated our build chain, it took a bit for us to adopt it. Once we got it figured out, it was life changing. Still not using npm at this point, but it was a start. lithtml We then moved to using lithtml. I had though of moving to a ui framework for a bit, at at the time this was the least daunting option. From here I started my second project, a quasi-framework using lithtml as the render agent. After struggling with implementation I decided, this has already been done, why not use an established library. So I started messing around with React. But not wanting to get bound up with its ties to Facebook, I opted for preact! preact + TypeScript Once we started to use preact, I opted to use ts along side, for proper prop checking. This also came with properly using npm, and a fairly in depth webpack config. At this point we had fully transitioned to preact, but the charting library had gone pretty much untouched, with some preact wrappers for proper integration. Then came my most recent and now open source project preact-charts! preact-charts and OSS contributing This started out as creating a small library for us to use internally, but I open sources it as all the current chart libs were react based, with no pure preact equivalents. It's still a WIP, but its stable, and currently supports both preact 8 and the upcoming preact X. Then I got into supporting the libraries I used. Started with some bug reports, that slowly turned into pull requests. I now enjoy helping out with the preact typescript definitions where I can! Hopefully in the future I will continue to engage more with the dev community, and help out on many more projects along the way. Want to give a shout out to the preact team! 
All are super friendly and willing to help make your pull requests as awesome as possible and help out on slack. Without their support, I would not have the confidence to help out where I can! Thanks for stopping by ❤️

Congratulations on your career in software development. You’ve made such a wonderful choice. But you already knew that (based off of the infectious excitement of your article). I enjoyed reading your article. I also started in the industry with only “minor coding knowledge,” and now over a decade later I’ve got over 100 articles on software development and career advice in my wordpress/dev.to queue. I never thought I’d have so much to share. Isn’t life wonderful? :) Thanks for being a part of our community and for contributing to open source. Side note: I love that your library is in TypeScript. Isn’t TS the best?

TS is wonderful. Finally switched our node backend over to it a few weeks ago. I don't think I would ever not use it again if given the choice.

I agree. I’ll be writing about some advanced techniques in a future article.
https://dev.to/pmkroeker/the-beginning-3a7h
CC-MAIN-2019-30
en
refinedweb
Playing mind-reader here, best guess is that what Guido had in mind is that thing = """...x`expr`y...""" would be equivalent to temp = expr thing = """...x%sy...""" % str(temp) where expr is any expression legal on the RHS of a single-target assignment statement. If so, nesting is implied. However, backticks already have a meaning outside of strings, so precise rules aren't obvious (least not to me!). > ... > I hadn't gotten around to figuring out [Perl's] format and write. ... > But my text is not so easily formatted. Nothing more I can say about that without a concrete example, short of writing a Perl format/write tutorial <wink>. But don't sell 'em short on the basis of a simplest-possible example! E.g., a powerful "advanced" technique is to "eval" dynamically-created formats ... > [back to python] > ... > But can you establish an arbitrary NameSpace while evaluating > the backticked expressions? I _am_ certain that Guido had just simple uses in mind, like print """`n` items @ $`price` ea. = $`n * price`""" Very handy for what it's good for, and clumsy outside that. Still, Python lets you force arbitrary namespaces on any expression, and presumably the same mechanisms would work for this too; e.g., print eval('"""`n` items @ $`price` ea. = $`n * price`"""', other_globals, other_locals) > > but then macros have very tricky problems to solve, and I'm betting > > you don't need all that hair. > Yes, but most of the problems have to do with providing the right > name space for evaluations, in the macro expansion function, in > the resulting macro expansion, and later. Python should be able get > that problem solved from the outset, given the lessons from the past. I thought you had a text-formatting problem here, else I wouldn't have suggested ABC-like backticks as a possible solution! If you're looking to mimic prefectly-general macros that fiddle, e.g., Python _code_, string-based anything is a rotten approach. Lisp's macros, and even the std C preprocessor, are token-based, and know the lexical rules of their languages inside-out. Building a Python code-fiddler out of ABC-style backticks would be kinda like writing the Lisp reader in Fortran, except not as pleasant <0.5 grin>. > But to really support macros, Python would also need to provide an > accessible representation of parsed expressions, just as Lisp has always > provided, so that macro arguments, or subexpressions of them, could be > any Python code. For efficiency, macro expansion should happen at > compile time too. Macros are great, but it doesn't look like the > current design of Python could be stretched to accommodate them. Well, in a misspent youth, I implemented entire languages out of macros. The older I get, the less I like them; yet another sublanguage with yet another set of excruciatingly tricky rules, and _usually_ all in support of things that "should be" done instead in the base language. E.g., C++ is the clearest recent example that comes to mind of how much a language can be improved by eliminating perceived needs for macros. But to each his own. Every now & again, Guido has threatened to make pieces of Python's parser accessible from Python, and if that's carried far enough you probably _could_ get a pleasant facility for parsing & transforming code. Not sure why you'd want to, though <wink>. if-this-keeps-up-i'm-gonna-ask-for-a-DO-loop-ly y'rs - tim Tim Peters [email protected] not speaking for Kendall Square Research Corp
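As a rough sketch only (the helper name and regex here are made up, and this is not anything Guido actually proposed), the backtick-in-string idea discussed above can be approximated in ordinary Python by pulling out each `expr` span and evaluating it against an explicit namespace, much like the eval(..., other_globals, other_locals) example:

import re

def backtick_format(template, globalns=None, localns=None):
    """Replace each `expr` span in template with str(eval(expr))."""
    def substitute(match):
        expr = match.group(1)
        return str(eval(expr, globalns, localns))
    return re.sub(r"`([^`]*)`", substitute, template)

n, price = 3, 2.5
print(backtick_format("`n` items @ $`price` ea. = $`n * price`",
                      globals(), locals()))
# -> 3 items @ $2.5 ea. = $7.5

Passing different globals/locals dictionaries forces the evaluation into an arbitrary namespace, which is the point being made in the message above; it says nothing about the macro question, which is a separate problem.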
https://legacy.python.org/search/hypermail/python-1994q2/0332.html
CC-MAIN-2021-43
en
refinedweb
NAME
getcwd, getwd, get_current_dir_name - get current working directory

SYNOPSIS
#include <unistd.h>
char *getcwd(char *buf, size_t size);
char *getwd(char *buf);
char *get_current_dir_name(void);

RETURN VALUE
On success, these functions return a pointer to a string containing the pathname of the current working directory. In the case of getcwd() and getwd() this is the same value as buf. On failure, these functions return NULL, and errno is set to indicate the error. The contents of the array pointed to by buf are undefined on error.

ERRORS
ERANGE The size argument is less than the length of the absolute pathname of the working directory, including the terminating null byte. You need to allocate a bigger array and try again.

ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).

CONFORMING TO
Under Linux, these functions make use of the getcwd() system call (available since Linux 2.1.92).

SEE ALSO
pwd(1), chdir(2), fchdir(2), open(2), unlink(2), free(3), malloc(3)

COLOPHON
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
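The ERRORS entry above describes the usual pattern of growing the buffer and retrying whenever getcwd() fails with ERANGE. As an illustration only (assuming a glibc-style libc on Linux; in real Python code you would simply call os.getcwd()), the same retry loop can be driven through ctypes:

import ctypes, errno, os

libc = ctypes.CDLL(None, use_errno=True)          # assumes the C library is loadable this way
libc.getcwd.restype = ctypes.c_char_p
libc.getcwd.argtypes = [ctypes.c_char_p, ctypes.c_size_t]

def getcwd_via_libc(initial_size=16):
    size = initial_size
    while True:
        buf = ctypes.create_string_buffer(size)
        if libc.getcwd(buf, size) is not None:    # non-NULL return: buf now holds the path
            return os.fsdecode(buf.value)
        err = ctypes.get_errno()
        if err != errno.ERANGE:                   # a real error, not just a too-small buffer
            raise OSError(err, os.strerror(err))
        size *= 2                                 # ERANGE: allocate a bigger array and try again

print(getcwd_via_libc())                          # same result as os.getcwd()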
https://manpages.debian.org/unstable/manpages-dev/getcwd.3.en.html
CC-MAIN-2021-43
en
refinedweb
Segmentation fault when .prof file not writeable

To reproduce:

$ echo 'main = return ()' > Test.hs
$ touch Test.prof
$ chmod -w Test.prof
$ ghc -prof Test.hs
[1 of 1] Compiling Main ( Test.hs, Test.o )
Linking Test ...
$ ./Test +RTS -hr{} -hc
Can't open profiling report file Test.prof
Segmentation fault (core dumped)

The warning is ok (maybe it should be an error?), but it shouldn't segfault. Running ./Test +RTS -hr works fine.

From ~ghc/master/users-guide/profiling.html#rts-options-for-heap-profiling:

-hc: Breaks down the graph by the cost-centre stack which produced the data.
-hr⟨cc⟩: Restrict the profile to closures with retainer sets containing cost-centre stacks with one of the specified cost centres at the top.

Bug exists since dbef766c (2002):

- you can now restrict a heap profile to certain retainer sets, but still display by cost centre (or type, or closure or whatever).

Because it didn't update this code+comment introduced in db61851c (2001):

// The following line was added by Sung; retainer/LDV profiling may need
// two output files, i.e., <program>.prof/hp.
if (RtsFlags.ProfFlags.doHeapProfile == HEAP_BY_RETAINER)
    RtsFlags.ProfFlags.doHeapProfile = 0;

Another relevant commit a4e17de6:

Author: simonmar <unknown>
Date: Wed Nov 28 15:42:26 2001 +0000
[project @ 2001-11-28 15:42:26 by simonmar]
Don't need the .prof file when LDV-profiling.
https://gitlab.haskell.org/ghc/ghc/-/issues/11489
CC-MAIN-2021-43
en
refinedweb
Originally posted on dev. Let’s start this post with a simple function in Javascript. function App(){ console.log('Hello World'); // logs 'Hello World' } App(); In the above code snippet, the function call on line no 5 calls the App function which outputs ‘Hello World’ in the console. Let’s React! React is simply Javascript. A component defined in React is just a Javascript function. Consider the React component below. function App() { return ( <h1> Hello World </h1> ); } This component renders <h1> with a text ‘Hello World’ in the Webpage. To reiterate, A component defined in React is just a Javascript function Just compare our plain JS code with this react code: // JS function App(){ return 'Hello World'; } // React function App() { return ( <h1> Hello World </h1> ); } Now you would have these questions: - This is just a function declaration. Where it is being called? - If this is a Javascript function then, how HTML is being returned from the function? Is this even valid? - Also, Why is it called rendering? Let’s answer all these questions. 1. Where it is being called? The function App() would actually be rendered by ReactDOM from react-dom package. import ReactDOM from "react-dom"; import App from "./App"; const rootElement = document.getElementById("root"); ReactDOM.render(<App />, rootElement); The Function App is called here with angle brackets like <App /> the returned HTML is rendered by ReactDOM into the rootElement. Read More about ReactDOM.render() on the react docs This rootElement can be any valid HTML DOM. Usually, we prefer to go with an empty <div> with the id root. <div id="root"></div> You should be careful, this should be an empty element because when the rendering occurs, this div’s children would be replaced with the tag h1 with text ‘Hello World’ inserted automatically by React (ReactDOM) <div id="root"> <h1 class="App">Hello World</h1> </div> 2. How HTML is being returned from the function? Is this even valid? To start off, the HTML like thing that is returned from our App function is called JSX. function App() { return ( <h1> Hello World </h1> ); } Technically this JSX is just a transpiled Javascript function call (yes it sounds scary). This HTML like thing is converted to Javascript by a transpiler called babel and, Finally the App would be sent to JS engine like below code that is just pure javascript. function App() { return ( React.createElement("h1", null, "Hello World") ); } And this is the reason to import React in the module even though we don’t explicitly use it. import React from 'react'; function App() { return ( <h1>Hello World</h1> ); } React.createElement is top level API provided by react package to create the corresponding element in Virtual DOM. createElement returns React elements, which are plain JS objects that describe the intended structure of the UI. // This JSX syntax: return <h1>Hello World</h1> // is converted to this call: return React.createElement("h1", null, "Hello World") // and that becomes this element object: {type: 'h1', props: null, children: ["Hello World"]} “Babel compiles JSX down to React.createElement() calls.” – React Docs You can play around with Babel and its transpiled code on the live babel repl. To get to know about JSX, head-over to JSX on react docs. Also, it is now worth pointing out that with React worked with Babel to introduce new JSX transform which enables users to write JSX without even importing React. Starting with React 17, babel now automatically imports ‘react’ when needed. 
After the new JSX transform, our App component would compile from this

// No import 'React' from 'react' needed
function App() {
  return (
    <h1>Hello World</h1>
  );
}

to this

import { jsx as _jsx } from "react/jsx-runtime";
function App() {
  return _jsx("h1", { children: "Hello World" });
}

The React core team is making this set of changes gradually to remove the need for forwardRef in the future. And to the most important question,

3. Why is it called Rendering?

In short, Rendering in Web refers to the appearance of something. On a broader picture, the terminology rendering on the web can mean a lot of things like painting, server-rendering, etc. For our understanding, let's keep it short: Render refers to the appearance of an element, or a set of elements (a component), on a webpage. From the React docs it is clear that React is "A JavaScript library for building user interfaces". React helps us build user interfaces, not only on the Web. It helps us render something on screens that can be presented to the user. A revisit to the example of the ReactDOM API:

ReactDOM.render(<App />, rootElement);

The ReactDOM renders our <App /> into the <div> that we specified. High level overview of the rendering process: React would create a Virtual DOM in memory which is very similar to the real DOM and renders our <h1> tag in the Virtual DOM; this virtual DOM would be synced with the real DOM, and during this process the <h1> tag is added to the real DOM. This process is called Reconciliation. The costliest operation on the Web is DOM painting and manipulation. If you are wondering this is too much boilerplate, why can't we just simply write HTML files and include Javascript and CSS to make it more presentable? Yes! You are right, we can easily build a website with plain HTML, JS, and CSS and still make it cool. No one can deny it. Where our React shines is, it will drastically simplify how we render and re-render our elements by providing a set of Declarative APIs. Declarative: You just tell what you need and don't need on DOM and React would take care of that for you! With the APIs provided by React, we can create Web Applications which are highly ⚡️ interactive, 🔥 performant and 🚀 responsive. If you want some examples, all these following websites are built with 🚀 React. Also, keep in mind that, For the end-users, it is all just HTML, CSS, and JavaScript! Originally published in the blog React is Just Javascript
https://learningactors.com/react-is-just-javascript/
CC-MAIN-2021-43
en
refinedweb
[PATCH] clocksource: register persistent clock for arm arch_timer From: Lei Wen Date: Wed Apr 02 2014 - 07:05:31 EST ] Since arm's arch_timer's counter would keep accumulated even in the low power mode, including suspend state, it is very suitable to be the persistent clock instead of RTC. While read_persistent_clock calling place shall be rare, like only suspend/resume place? So we shall don't care for its performance very much, so use direclty divided by frequency should be accepted for this reason. Actually archtimer's counter read performance already be very good, since it is directly access from core's bus, not from soc, so this is another reason why we choose use divide here. Final reason for why we don't use multi+shift way is for we may not call read_persistent_clock for long time, like system long time not enter into suspend, so that the accumulated cycle difference value may larger than we used for calculate the multi+shift, thus precise would be highly affected in such corner case. Signed-off-by: Lei Wen <leiwen@xxxxxxxxxxx> --- I am not sure whether it is good to add something like generic_persistent_clock_read in the new added kernel/time/sched_clock.c? Since from arch timer's perspective, all it need to do is to pick the suspend period from the place where sched_clock being stopped/restarted. Any idea for make the persistent clock reading as one generic function, like current sched_clock do? drivers/clocksource/arm_arch_timer.c | 31 +++++++++++++++++++++++++++++++ 1 file changed, 31 insertions(+) diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c index 57e823c..5eaa34a 100644 --- a/drivers/clocksource/arm_arch_timer.c +++ b/drivers/clocksource/arm_arch_timer.c @@ -23,6 +23,7 @@ #include <linux/sched_clock.h> #include <asm/arch_timer.h> +#include <asm/mach/time.h> #include <asm/virt.h> #include <clocksource/arm_arch_timer.h> @@ -52,6 +53,8 @@ struct arch_timer { #define to_arch_timer(e) container_of(e, struct arch_timer, evt) static u32 arch_timer_rate; +static u32 persistent_multi; +static u32 persistent_div; enum ppi_nr { PHYS_SECURE_PPI, @@ -68,6 +71,31 @@ static struct clock_event_device __percpu *arch_timer_evt; static bool arch_timer_use_virtual = true; static bool arch_timer_mem_use_virtual; +static void get_persistent_clock_multi_div(void) +{ + u32 i, tmp; + + tmp = arch_timer_rate; + persistent_multi = NSEC_PER_SEC; + for (i = 0; i < 9; i ++) { + if (tmp % 10) + break; + tmp /= 10; + persistent_multi /= 10; + } + + persistent_div = tmp; +} + +static void arch_timer_persistent_clock_read(struct timespec *ts) +{ + u64 ns; + + ns = arch_timer_read_counter() * persistent_multi; + do_div(ns, persistent_div); + *ts = ns_to_timespec(ns); +} + /* * Architected system timer support. */ @@ -631,6 +659,9 @@ static void __init arch_timer_common_init(void) arch_timer_banner(arch_timers_present); arch_counter_register(arch_timers_present); arch_timer_arch_init(); + + get_persistent_clock_multi_div(); + register_persistent_clock(NULL, arch_timer_persistent_clock_read); } static void __init arch_timer_init(struct device_node *np) -- 1.8.3.2 -- To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majordomo@xxxxxxxxxxxxxxx More majordomo info at Please read the FAQ at ]
http://lkml.iu.edu/hypermail/linux/kernel/1404.0/01060.html
CC-MAIN-2021-43
en
refinedweb
Hi all,

It would appear that Axes.hist() does not handle large input values the way I was expecting it to. For example:

-----------------------------------------------------------------
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111)
# Plot as expected: single bar in the center:
#result = ax.hist([1.0e+14], 5)
# Plot remains completely empty:
result = ax.hist([1.0e+16], 5)
print "result:", result
plt.show()
-----------------------------------------------------------------

My hypothesis is that the large value in y is causing the bin interval size in x to become infinitesimally small, but is it conceptually wrong of me to expect a histogram for such large values to still work? If so, what would be a workaround? I don't control the data I am trying to plot, and sometimes there's yes, only a single value, and yes, it's that large... (All this is done with matplotlib 1.1.0 on Debian stable (v6.0.x) for Python 2.6.6. uname: Linux miranda 2.6.32-5-686 #1 SMP Mon Oct 3 04:15:24 UTC 2011 i686 GNU/Linux). Any help/advice will be much appreciated.

-- Leo Breebaart <[email protected]...>
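A possible workaround (my own suggestion, not from the original thread): with a single value around 1e16, the automatically chosen bin edges of value ± 0.5 collapse to the same floating-point number, so nothing gets drawn. Supplying an explicit, wide enough range (or pre-binning with numpy) avoids the degenerate edges. The sketch below is written against a current matplotlib, and the 0.9e16..1.1e16 range is an arbitrary choice for illustration:

import numpy as np
import matplotlib.pyplot as plt

data = [1.0e+16]

fig, ax = plt.subplots()
# Explicit range keeps the bin edges distinct even at this magnitude.
ax.hist(data, bins=5, range=(0.9e16, 1.1e16))

# Alternatively, pre-compute the histogram and draw it as a bar chart:
counts, edges = np.histogram(data, bins=5, range=(0.9e16, 1.1e16))
# ax.bar(edges[:-1], counts, width=np.diff(edges), align='edge')

plt.show()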
https://discourse.matplotlib.org/t/large-values-in-histograms-not-showing/16323
CC-MAIN-2021-43
en
refinedweb
Hey Coders! If you are a React developer then you might have already heard about the latest version of React - React 18 Alpha. The team is still working on the update and there is still a lot to come, so in this article let's see what's happening in this version and break it down into simple terms. The first thing that comes to mind whenever there is a version update is whether the latest set of changes will break anything in your current setup, or whether you will have to learn completely new, unrelated concepts. The answer is no: we will be able to adopt React 18 without rewrites and try the new features at our own pace.

React 18 – what can we expect?

1. out-of-the-box improvements (including automatic batching),
2. new streaming server renderer with built-in support for React.lazy,
3. other concurrent features such as startTransition, useDeferredValue,
4. new root API.

This release is more focused on User Experience and internal architecture changes, including adaptation to concurrent features. However, the most important new addition in React 18 seems to be concurrent rendering and the related concurrent mode.

1. Automatic batching

React 18 adds out-of-the-box performance improvements by doing more batching by default, removing the need to manually batch updates in application or library code. But, what is batching? Batching is when React groups multiple state updates into a single re-render for better performance. In simple words, batching (grouping) means multiple state updates are combined into a single render. Whenever you are using setState to change a variable inside any function, instead of making a render at each setState, React instead collects all setStates and then executes them together. This is known as batching.

function App() {
  const [count, setCount] = useState(0);
  const [flag, setFlag] = useState(false);

  function handleClick() {
    setCount(c => c + 1); // Does not re-render yet
    setFlag(f => !f); // Does not re-render yet
    // React will only re-render once at the end (that's batching!)
  }

  return (
    <div>
      <button onClick={handleClick}>Next</button>
      <h1 style={{ color: flag ? "blue" : "black" }}>{count}</h1>
    </div>
  );
}

This is great for performance because it avoids unnecessary re-renders. However, React didn't use to be consistent about when it performed batching. This was because React used to only batch updates during browser events (like a click), but here we’re updating the state after the event has already been handled (in a fetch callback):

function App() {
  const [count, setCount] = useState(0);
  const [flag, setFlag] = useState(false);

  function handleClick() {
    fetchSomething().then(() => {
      // React 17 and earlier does NOT batch these because
      // they run *after* the event in a callback, not *during* it
      setCount(c => c + 1); // Causes a re-render
      setFlag(f => !f); // Causes a re-render
    });
  }

  return (
    <div>
      <button onClick={handleClick}>Next</button>
      <h1 style={{ color: flag ? "blue" : "black" }}>{count}</h1>
    </div>
  );
}

What if I don’t want to batch?

Usually, batching is safe, but some code may depend on reading something from the DOM immediately after a state change. For those use cases, you can use ReactDOM.flushSync() to opt out of batching:

import { flushSync } from 'react-dom'; // Note: react-dom, not react

function handleClick() {
  flushSync(() => {
    setCounter(c => c + 1);
  });
  // React has updated the DOM by now
  flushSync(() => {
    setFlag(f => !f);
  });
  // React has updated the DOM by now
}

2. Server-Side Rendering

Server-side rendering is a way of rendering the JS data to HTML on the server to save computation on the frontend. This results in a faster initial page load in most cases. React performs Server Side Rendering in 4 sequential steps:

- On the server, data is fetched for each component.
- On the server, the entire app is rendered to HTML and sent to the client.
- On the client, the JavaScript code for the entire app is fetched.
- On the client, the JavaScript connects React to the server-generated HTML, which is known as Hydration.

In the trivial version (till React 17), SSR had to load the entire page before it could start hydrating the page. This changes in React 18: now we can break React components into smaller chunks using <Suspense>.

Streaming HTML

<Suspense fallback={<Spinner />}>
  {children}
</Suspense>

By wrapping the component in <Suspense>, we tell React that it doesn’t need to wait for the comments to start streaming the HTML for the rest of the page. React will send the placeholder (a spinner) instead. When the data for the comments is ready on the server, React will send additional HTML into the same stream, as well as a minimal inline script tag to put that HTML in the "right place".

Selective Hydration

Before React 18, hydration couldn't start if the complete JavaScript code for the app hadn't loaded in. For larger apps, this process can take a while. <Suspense> lets you hydrate the app before the child components have loaded in. By wrapping components in <Suspense>, you can tell React that they shouldn’t block the rest of the page from streaming—and even hydration. This means that you no longer have to wait for all the code to load in order to start hydrating. React can hydrate parts as they’re being loaded.

3. startTransition

One important use case for startTransition could be when a user starts typing in a search box. The input value has to be immediately updated while the search results could wait a few milliseconds (as expected by the user). This API provides a way to differentiate between quick updates and delayed updates. The delayed update (i.e. the transition of one UI view to another) is termed a Transition Update.

For urgent updates like typing, hover, clicking, we call props/functions usually like this:

setText(input)

For non-urgent or heavy UI updates, we can wrap it in a startTransition API as:

startTransition(() => {
  setText(input);
});

4. The New Root API

We usually create a Root level DOM like this and append the React App. This has now been deprecated and is now called the "Legacy Root API".

import React from 'react';
import ReactDOM from 'react-dom';

const container = document.getElementById('root')
ReactDOM.render(<App />, container);

Instead, a new Root API is introduced in React 18, which looks like this:

import React from 'react';
import ReactDOM from 'react-dom';
import App from './App'

const container = document.getElementById('root')
const root = ReactDOM.createRoot(container)
root.render(<App />)

Wrapping-Up

So to summarize, the features that React 18 brings are:

- Concurrency control with the Transition API,
- Automatic Batching of function calls and events to improve in-app performance, and
- Much faster page loads for SSR with Suspense.

React 18 docs
React 18 discussions

Thank you so much for reading this article! I hope this was useful to you in some way. Happy Coding💜

Discussion (3)

If correct, please change: from 1.out-of-the-box improvements (including automatic bathing), to 1.out-of-the-box improvements (including automatic batching),

You’re right. Thank you for catching that💜

simple yet effective explaining, Thank you very much for putting this :) ...
https://practicaldev-herokuapp-com.global.ssl.fastly.net/codewithtee/are-you-ready-for-react-18-4ip1
CC-MAIN-2021-43
en
refinedweb
In this article, we will learn how to develop and run a Python-Django app in less than 5 minutes.

Prerequisite: Python3.

Install the virtual environment. You may proceed without a virtual environment too, but in the long run, the virtual environment is going to be very helpful.

$ pip install virtualenv

Create a virtual environment and activate it, then install Django and start a project:

$ virtualenv -p /usr/bin/python3 helloworld_VE
$ source helloworld_VE/bin/activate
$ pip install django
$ django-admin startproject myproject
$ cd myproject

The startproject command creates manage.py and a directory with the same name as your project. Now create the app:

$ python manage.py startapp helloworld

Inside the helloworld directory, create a new file urls.py. Add the below lines to this file and save it.

from django.conf.urls import url
from . import views

urlpatterns = [
    url(r'^$', views.index, name='index'),
]

Open the views.py file and save the below code in it.

from django.shortcuts import render
from django.http import HttpResponse

def index(request):
    return HttpResponse("Hello World. First Django Project. PythonCircle.Com")

Include the helloworld app's URLs in the project-level urls.py (a sketch of that file is shown at the end of this post). Then run this command to make your project available for everyone on the network.

$ python manage.py runserver 0.0.0.0:8000

So this was a basic tutorial to set up a Django app in less than 5 minutes. You can refer to GitHub code and video. Code on Github: Github URL : Video:
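For the "include the helloworld app's URLs" step above, a minimal project-level urls.py might look like the following. This is an illustrative sketch using the same django.conf.urls-era imports as the rest of the article; the admin line is optional and the exact contents of your generated file may differ:

# myproject/urls.py (project level)
from django.conf.urls import url, include
from django.contrib import admin

urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^', include('helloworld.urls')),  # hand everything else to the helloworld app
]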
https://pythoncircle.com/post/26/hello-word-in-django-how-to-start-with-django/
CC-MAIN-2021-43
en
refinedweb
The QModemCallBarring class implements the call barring settings for AT-based modems. More... #include <QModemCallBarring> Inherits QCallBarring. The QModemCallBarring class implements the call barring settings for AT-based modems. This class uses the AT+CLCK command from 3GPP TS 27.007. QModemCallBarring implements the QCallBarring telephony interface. Client applications should use QCallBarring instead of this class to access the modem's call barring settings. See also QCallBarring. Construct a new modem call barring handler for service. Destroy this modem call barring handler. Convert type into its two-letter 3GPP TS 27.007 string form. This function is virtual to allow for the possibility of modems that support more call barring types than those specified in 3GPP TS 27.007. Returns an empty string if type is not supported. See also QCallBarring::BarringType.
https://doc.qt.io/archives/qtextended4.4/qmodemcallbarring.html
CC-MAIN-2021-43
en
refinedweb
[Terry Reedy] I, on the other hand, having never used either, find the difference in printed ids in def f(): pass f, id(f) (<function f at 0x00868158>, 8814936) at least mildly disturbing. Do you only need to do such matching for complex objects that get the <type name at 0x########> representation? [Michael Hudson] This hardly seems worth discussing :) Then it's a topic for me <wink>! It's a pointer. Pointers are printed in hex. It's Just The Way It Is. I don't know why. Actually, the "0x00868158" above is produced by C's %p format operator. So, in fact, ANSI C is probably why it is The Way It Is. repr starts with %p, but %p is ill-defined, so Python goes on to ensure the result starts with "0x". C doesn't even say that %p produces hex digits, but all C systems we know of do(*), so Python doesn't try to force that part. As to "why hex?", it's for low-level debugging. For example, stack, register and memory dumps for binary machines almost always come in some power-of-2 base, usually hex, and searching for a stored address is much easier if it's shown in the same base. OTOH, id(Q) promises to return an integer that won't be the same as the id() of any other object over Q's lifetime. CPython returns Q's memory address, but CPython never moves objects in memory, so CPython can get away with returning the address. Jython does something very different for id(), because it must -- the Java VM may move an object in memory. Python doesn't promise to return a postive integer for id(), although it may have been nicer if it did. It's dangerous to change that now, because some code does depend on the "32 bit-ness as a signed integer" accident of CPython's id() implementation on 32-bit machines. For example, code using struct.pack(), or code using one of ZODB's specialized int-key BTree types with id's as keys. Speaking of which, current ZODB has a positive_id() function, used to format id()'s in strings where a sign bit would get in the way. (*) The %p in some C's for early x86 systems, using "segment + offset" mode, stuck a colon "in the middle" of the pointer output, to visually separate the segment from the offset. The two parts were still shown in hex, though.
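As a rough illustration of the positive_id() idea mentioned above (a sketch, not necessarily ZODB's actual source), a negative CPython id() can be folded back into the corresponding unsigned value for the platform's pointer width before formatting:

import struct

def positive_id(obj):
    """Return id(obj) as a non-negative integer (illustrative sketch)."""
    result = id(obj)
    if result < 0:
        # Map the negative signed value onto the equivalent unsigned value
        # for this platform's pointer size.
        result += 1 << (8 * struct.calcsize("P"))
        assert result > 0
    return result

print(hex(positive_id(object())))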
https://mail.python.org/archives/list/[email protected]/message/PPSZ2CRGNXGWDXGINF6OBKRGQABO367P/
CC-MAIN-2021-43
en
refinedweb
finally - Used with exceptions, a block of code that will be executed no matter if there is an exception or not.

The finally keyword is used in conjunction with the try and except keywords. Regardless of the exception, the code in the finally block will always run.

Example

Python

def divide(n, d):
    try:
        result = n / d
    except:
        print("Oops, dividing by 0!")
        result = float('inf')
    finally:
        print(f'Result = {result}')

print('6 / 2:')
divide(6, 2)
print('10 / 0:')
divide(10, 0)

Output

6 / 2:
Result = 3.0
10 / 0:
Oops, dividing by 0!
Result = inf

Notice how the code in the finally block is always run.
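A second small sketch (added here for illustration, with made-up names) shows that the finally block also runs when the try block returns early, and even when the exception is not handled inside the function at all:

Python

def read_value(mapping, key):
    try:
        return mapping[key]        # may return early, may raise KeyError
    finally:
        print("finally always runs")

print(read_value({"a": 1}, "a"))
try:
    read_value({}, "missing")      # KeyError propagates, finally still runs first
except KeyError:
    print("caught KeyError")

Output

finally always runs
1
finally always runs
caught KeyError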
https://reference.codeproject.com/python3/keywords/python-finally-keyword
CC-MAIN-2021-43
en
refinedweb
Autoimpute is a Python package for analysis and implementation of Imputation Methods! View our website to explore Autoimpute in more detail. We have presented Autoimpute at a couple of PyData conferences! New: the MiceImputer — thanks to @gjdv for the help on this issue! Authors: Joseph Kearney – @kearnz, Shahid Barkat - @shabarka, Arnab Bose (Advisor) - @bosearnab. See the Authors page to get in touch!

Autoimpute is now registered with PyPI! Download with pip install autoimpute. If pip cached an older version, try pip install --no-cache-dir --upgrade autoimpute. For a development install: git clone -b dev --single-branch, then cd autoimpute and python setup.py install.

Most machine learning algorithms expect clean and complete datasets, but real-world data is messy and missing. Unfortunately, handling missing data is quite complex, so programming languages generally punt this responsibility to the end user. By default, R drops all records with missing data - a method that is easy to implement but often problematic in practice. For richer imputation strategies, R has multiple packages to deal with missing data (MICE, Amelia, TSImpute, etc.). Python users are not as fortunate. Python's scikit-learn throws a runtime error when an end user deploys models on datasets with missing records, and few third-party packages exist to handle imputation end-to-end. Therefore, this package aids the Python user by providing more clarity to the imputation process, making imputation methods more accessible, and measuring the impact imputation methods have in supervised regression and classification. In doing so, this package brings missing data imputation methods to the Python world and makes them work nicely in Python machine learning projects (and specifically ones that utilize scikit-learn). Lastly, this package provides its own implementation of supervised machine learning methods that extend both scikit-learn and statsmodels to multiply imputed datasets. The project's documentation mentions pandas DataFrames and dask DataFrames.

Autoimpute is designed to be user friendly and flexible. When performing imputation, Autoimpute fits directly into scikit-learn machine learning projects. Imputers inherit from sklearn's BaseEstimator and TransformerMixin and implement fit and transform methods, making them valid Transformers in an sklearn pipeline.
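Because the imputers are genuine sklearn Transformers, they can be dropped straight into a Pipeline. The following is a minimal sketch (not taken from the README); X_train, y_train, and X_test are assumed to be pandas DataFrames/Series with missing values, as in the examples further down.

from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from autoimpute.imputations import SingleImputer

# SingleImputer implements fit/transform, so it can sit in front of any estimator.
pipe = Pipeline([
    ("imputer", SingleImputer()),        # default strategy chosen per column
    ("classifier", LogisticRegression()),
])

pipe.fit(X_train, y_train)               # X_train may contain missing values
predictions = pipe.predict(X_test)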
Right now, there are three Imputer classes we'll work with: from autoimpute.imputations import SingleImputer, MultipleImputer, MiceImputer si = SingleImputer() # pass through data once mi = MultipleImputer() # pass through data multiple times mice = MiceImputer() # pass through data multiple times and iteratively optimize imputations in each column SingleImputer, MultipleImputer, MiceImputer MiceImputer, MultipleImputer, SingleImputer MiceImputerdoes the most work, while the SingleImputerdoes the least MiceImputer, but you can swap in the MultipleImputeror SingleImputeras well Imputations can be as simple as: # simple example using default instance of MiceImputer imp = MiceImputer() # fit transform returns a generator by default, calculating each imputation method lazily imp.fit_transform(data) Or quite complex, such as: # create a complex instance of the MiceImputer # Here, we specify strategies by column and predictors for each column # We also specify what additional arguments any `pmm` strategies should take imp = MiceImputer( n=10, strategy={"salary": "pmm", "gender": "bayesian binary logistic", "age": "norm"}, predictors={"salary": "all", "gender": ["salary", "education", "weight"]}, imp_kwgs={"pmm": {"fill_value": "random"}}, visit="left-to-right", return_list=True ) # Because we set return_list=True, imputations are done all at once, not evaluated lazily. # This will return M*N, where M is the number of imputations and N is the size of original dataframe. imp.fit_transform(data) Autoimpute also extends supervised machine learning methods from scikit-learn and statsmodels to apply them to multiply imputed datasets (using the MiceImputer under the hood). Right now, Autoimpute supports linear regression and binary logistic regression. Additional supervised methods are currently under development. As with Imputers, Autoimpute's analysis methods can be simple or complex: from autoimpute.analysis import MiLinearRegression # By default, use statsmodels OLS and MiceImputer() simple_lm = MiLinearRegression() # fit the model on each multiply imputed dataset and pool parameters simple_lm.fit(X_train, y_train) # get summary of fit, which includes pooled parameters under Rubin's rules # also provides diagnostics related to analysis after multiple imputation simple_lm.summary() # make predictions on a new dataset using pooled parameters predictions = simple_lm.predict(X_test) # Control both the regression used and the MiceImputer itself mice_imputer_arguments = dict( n=3, strategy={"salary": "pmm", "gender": "bayesian binary logistic", "age": "norm"}, predictors={"salary": "all", "gender": ["salary", "education", "weight"]}, imp_kwgs={"pmm": {"fill_value": "random"}}, visit="left-to-right" ) complex_lm = MiLinearRegression( model_lib="sklearn", # use sklearn linear regression mi_kwgs=mice_imputer_arguments # control the multiple imputer ) # fit the model on each multiply imputed dataset complex_lm.fit(X_train, y_train) # get summary of fit, which includes pooled parameters under Rubin's rules # also provides diagnostics related to analysis after multiple imputation complex_lm.summary() # make predictions on new dataset using pooled parameters predictions = complex_lm.predict(X_test) Note that we can also pass a pre-specified MiceImputer (or MultipleIputer) to either analysis model instead of using mi_kwgs. The option is ours, and it's a matter of preference. If we pass a pre-specified MiceImputer, anything in mi_kwgs is ignored, although the mi_kwgs argument is still validated. 
from autoimpute.imputations import MiceImputer
from autoimpute.analysis import MiLinearRegression

# create a multiple imputer first
custom_imputer = MiceImputer(n=3, strategy="pmm", return_list=True)

# pass the imputer to a linear regression model
complex_lm = MiLinearRegression(mi=custom_imputer, model_lib="statsmodels")

# proceed the same as the previous examples
complex_lm.fit(X_train, y_train).predict(X_test)
complex_lm.summary()

For a deeper understanding of how the package works and its available features, see our tutorials website.

Dependencies:
numpy >= 1.15.4
scipy >= 1.2.1
pandas >= 0.20.3
statsmodels >= 0.9.0
scikit-learn >= 0.20.2
xgboost >= 0.83
pymc3 >= 3.5
seaborn >= 0.9.0
missingno >= 0.4.1

A note for Windows users: you may encounter a 'can't pickle fortran objects' error when sampling using multiple chains. If that happens, set cores=1 in pm.sample. This should be a last resort, as it means posterior sampling will use 1 core only. Not using multiprocessing will slow down bayesian imputation methods significantly.

Distributed under the MIT license. See LICENSE for more information. Guidelines for contributing to our project: see CONTRIBUTING for more information. The code of conduct is adapted from Contributor Covenant, version 1.0.0. See Code of Conduct for more information.
https://www.programcreek.com/python/?project_name=kearnz%2Fautoimpute
CC-MAIN-2021-43
en
refinedweb
Windows Store apps have a variety of tricks up their sleeve for engaging the user even when they are not running. Tiles are a perfect example because they provide at-a-glance information to the user from the start screen. Tiles can provide images, text, or a combination of both and support queuing multiple notifications. Tiles are defined by various pre-built XML templates. The catalog of tiles is available online in the tile template catalog and can be enumerated via TileTemplateType as defined in the Windows.UI.Notifications namespace. The sample apps for Windows 8 from Microsoft include a helper that allows you to create tiles but it requires instantiating specific classes. I wanted to create a more straightforward and fluent method for defining and setting tiles. It turns out you can get the template for any tile type by calling GetTemplateContent on the TileUpdateManager class with the TileTemplateType you are interested in. Here’s a fast way to get the XmlDocument: this.xml = TileUpdateManager.GetTemplateContent(templateType); With the template I can easily inspect how many lines of text and images it is capable of supporting: this.TextLines = this.xml.GetElementsByTagName("text").Count; this.Images = this.xml.GetElementsByTagName("image").Count; Now I can add text and raise an exception if the amount of text the tile can hold is exceeded: public BaseTile AddText(string text, uint id = 0) { if (string.IsNullOrWhiteSpace(text)) { throw new ArgumentException("text"); } if (id == 0) { id = this.textIndex++; } if (id >= this.TextLines) { throw new ArgumentOutOfRangeException("id"); } var elements = this.xml.GetElementsByTagName("text"); var node = elements.Item(id); if (node != null) { node.AppendChild(this.xml.CreateTextNode(text)); } return this; } Notice that the method returns the class instance itself. This sets the class up to support a fluent interface with multiple commands chained together. The same logic is used to add images. When you allow your app to provide both a wide and traditional square tile, you can update both formats with a single call. Therefore, you should be able to merge the definitions of multiple tiles: public BaseTile WithTile(BaseTile otherTile) { var otherBinding = this.xml.ImportNode( otherTile.xml.GetElementsByTagName("visual")[0].LastChild, true); this.xml.GetElementsByTagName("visual")[0].AppendChild(otherBinding); return this; } Now we can grab the template for a tile, specify text and images, and combine square and rectangular tiles together. The next step is to actually set the tile for the app: public void Set() { TileUpdateManager.CreateTileUpdaterForApplication().Update( new TileNotification(this.xml)); } I then added an extension method that takes in the type of the tile and returns an instance of my helper class that contains all of the methods described in this post. Putting it all together, the code to set a wide and square tile for an example app I’m building that enumerates all tiles on the system to display the XML looks like this: // set some default tiles TileTemplateType.TileWideText03.GetTile() .AddText("Tile Explorer") .WithTile(TileTemplateType.TileSquareText03.GetTile() .AddText("Tile Explorer") .AddText("A WinRT Example") .AddText("by Jeremy Likness")) .Set(); This certainly makes it easier to update tiles. A complete helper will also include similar methods for badges and notifications of course … that’s all part of the work I’m doing. 
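The post refers to a GetTile extension method without showing it. Here is a sketch of what it might look like — this is a reconstruction, not the author's code, and it assumes BaseTile exposes a constructor that loads the template XML the same way the earlier snippets do.

// Hypothetical reconstruction of the extension method described in the post.
public static class TileTemplateTypeExtensions
{
    public static BaseTile GetTile(this TileTemplateType templateType)
    {
        return new BaseTile(templateType);
    }
}

// Assumed constructor: pulls the template XML and counts its text/image slots,
// mirroring the GetTemplateContent snippet shown earlier.
public partial class BaseTile
{
    private readonly XmlDocument xml;

    public BaseTile(TileTemplateType templateType)
    {
        this.xml = TileUpdateManager.GetTemplateContent(templateType);
        this.TextLines = this.xml.GetElementsByTagName("text").Count;
        this.Images = this.xml.GetElementsByTagName("image").Count;
    }

    public int TextLines { get; private set; }
    public int Images { get; private set; }
}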
Full source and examples will be available in my upcoming book and I’ll share more details when I have them.
https://csharperimage.jeremylikness.com/2013/05/a-fluent-approach-to-windows-store-tiles.html
CC-MAIN-2021-43
en
refinedweb
ETK + Enhance, quick and dirty

ETK is a GUI toolkit similar to GTK and is a part of the Enlightenment Foundation Libraries, or EFL. The syntax is very similar to GTK with only minor differences. In fact, you could even use Glade to design a GUI with ETK. For this you need a small but nifty tool called 'enhance', also part of E. Glade, by the way, is a tool which removes the need to manipulate each and every widget manually. It does all this in an efficient manner, leaving the developer to concentrate only on callbacks. I am very new to ETK+Enhance and don't know which is better, EWL or ETK. If someone could shed light on this, I would be grateful. Myself, being a novice, I found the lack of documentation for enhance a bit unnerving, but I still managed to get hold of the concepts. So here is a short tutorial about using 'enhance' to parse your .glade files and use them in your ETK source code. Comments are welcome. I assume some knowledge of ETK or at least GTK… Let's dig in.

Now create a glade file with the name project1.glade. Create a simple window with a single button in it. Then, in the property window of the button, in the Signal tab, add the following information: Signal -> clicked, Handler -> the callback name, in our case "button_callback".

The following code is the famous hello world program using ETK and glade for creating the GUI:

#include "Etk.h"
#include "Enhance.h"

Enhance *en;

void button_callback(Etk_Object *obj, void *data)
{
   printf("Whee THis works\n");
}

int main(int argc, char *argv[])
{
   enhance_init();
   etk_init(&argc, &argv);

   en = enhance_new();
   enhance_file_load(en, "window1", "project1.glade");

   etk_main();

   enhance_free(en);
   enhance_shutdown();
   etk_shutdown();

   return 0;
}

OK, now a step-by-step description of what's up.

#include "Etk.h"
#include "Enhance.h"
Enhance *en;

We are just including the header files that are required for our program... duh! In the third line, we are creating a pointer of the Enhance datatype; 'en' will be used while loading our .glade file.

void button_callback(Etk_Object *obj, void *data)
{
   printf("Whee THis works\n");
}

The above function is the callback function. When we click the "button" of our program, the line within the printf is output to the console. If you don't understand what I am talking about, visit the GTK website and download the tutorial to get an idea of what callbacks and widgets are.

enhance_init();
etk_init(&argc, &argv);

We are initialising enhance and etk.

en = enhance_new();

Remember the pointer 'en' of datatype Enhance? en now becomes a new instance of enhance.

enhance_file_load(en, "window1", "project1.glade");

The above line will load the widget "window1" from the "project1.glade" file and "en" will point to it. You should load each widget; for example, you may have more than one window in your .glade file. The remaining lines are self-explanatory.. :D

Save the file as hello.c in the directory containing project1.glade, and compile the program:

gcc -g hello.c -o hello `enhance-config --libs --cflags` `pkg-config --libs --cflags etk`

If all went well, you will be left with an executable called "hello". Run it:

./hello

You will get a small window with the button with E17 theming, containing the label you specified while creating the GUI with glade. Click on the button and you will see the text "Whee THis works" output on the console every time it is clicked.

Reference: E Developers Portal, Glade and GTK+

If you find something wrong, which is most likely, please comment about it. I am learning ETK/enhance myself.
Filed under: FOSS and Linux, Tips | 7 Comments It depends on preference. ETK is similar to GTK. So, if you like GTK you’ll probably like ETK. If, like the EWL developers, you don’t like the GTK API that much you might want to look at EWL. With EWL we started from a blank slate and are attempting to make a simple and consistent API. EWL makes extensive use of inheritance and callback methods to reduce code duplication inside the core code. You can get some info on EWL at wiki.edevelop.org/index.php/EWL there are a set of introductory tutorials to get people started. (Note, as disclaimer, I’m an EWL developer.) cool gcc -g hello.c -o hello `enhance-config –libs –cflags` `etk-config –libs –cflags` etk-config doesn’t exist, now the right command with etk-config is: gcc -g hello.c -o hello `enhance-config –libs –cflags` `pkg-config –libs –cflags etk` @arcano: thanks, updated the post hello, since i’m studying C/C++/Enlightenment programming and also build systems too(cmake in that case), i’ve made the smallest possible (i presume) CMakeLists.txt able to compile that example: # CMakeLists.txt project(helloEtk) include_directories(/usr/include/etk) add_executable(helloEtk helloEtk.c) target_link_libraries(helloEtk exml etk ecore_file edje ecore_x ecore_fb curl evas ecore enhance) but i have a question: if i modify the source to call all that free/shutdown stud inside the callback my console don’t returs to me; why the process don’t quit? i’m using SlackE17, any idea if it’s a bug? @Sombriks: Yo, I haven’t been really following changes to EFL for a while, maybe the post is too outdated…However you could hop onto #edevelop at freenode if you haven’t already…its a pretty friendly channel…:D If you are having problems loading dynamic callbacks, use the -rdynamic flag for gcc
http://sudharsh.wordpress.com/2006/11/10/etk-enhance-quick-and-dirty/
crawl-002
en
refinedweb
- How to get the updates - Photoshop 2015.0.1 Update (08/03/2015) Photoshop 2015.0.1 Update (08/03/2015) 08/03/2015 – Today we released Photoshop CC 2015 update version 2015.0.1 (Mac and Windows), resolving the following issues: - Crash when Be3D printer profile is present in presets (win only) - Crash on launch “VulcanControl dylib” (mac only) - Unable to read key state in JavaScript - BlackMagicDesign ATEM Switcher plugin crashes in Photoshop CC 2015 when documents have more than one layer - Crashes when extension uses script UI - Crash in specific cases when Open window is invoked (mac only) - Crash on zoom (win only) - Fixed drawing for borders of white artboards drawn on a white background - Fixed drawing of borders while dragging and aligning artboards - Fixed artboard matte color extending inside artboard 1 pixel - Fixed typo in Artboards to PDF dialog - Duplicate Layer command puts new layers at top of artboard stack, not above source layer like it should - Properly disable the artboard canvas color context menu when it does not work with the current GPU and color mode settings - Crash on launch due to “librarylookupfile” (win only) - English text copied from Acrobat to Photoshop is in Chinese - Application hang while opening certain JPEG2000 files - Canvas/document area draws partially or completely black in a Retina/standard display config after disconnect/reconnect 2nd display - Crash when closing an image (win only) - Artifacts using Healing Brush tool on transparent layer with “Current & Below” enabled with soft brush - Healing brush failing on individual channel - Some customers prefer the texture and color rendition of the old healing brush algorithm compared to the new real-time algorithm - Welcome… menu item was unintentionally removed if user selects “Do Not Show…” checkbox, closes and re-launches - Filter Gallery gives error on Mac OS X 10.10.3 and 10.11 in Chinese Languages - Create Layer (from effect) reverses layer z-order - Fixed issue where customers in the UK, Canada and Mexico may not be able to purchase images form Adobe Stock if they accessed Stock through Photoshop’s Search on Adobe Stock menu - Move too set to Auto Select not releasing layers selected during drag - Layers panel incorrectly scrolling when adding or deleting layers - Crash when adding a spot color channel after adding an asset from the Libraries panel - Crash PlugPlug crash in [HtmlEngineMonitor closeWindow] - Crash copy and pasting a shape (esp the Line tool) - Direct Selection & Path Selection Tool +/- shortcuts not working correctly - Crash on Scroll while using Pen Tool (win only) - Performance problems zooming and panning while rulers are showing - Selection disappearing at different zoom levels (win only) - Selection redraw issue when dragging (win only) - Converting a video layer, generated by image sequence import, to Smart Object crashes - Fixed issue where Welcome dialog would be empty in some cases How to get the updates - Look for the update in the Creative Cloud application and click “Update”. (Sign out and back in to the Creative Cloud desktop application if you don’t see the update) - If you don’t have the Creative Cloud application running, start Photoshop 2015 and choose Help > Updates. - The Adobe Application Manager will launch. Select Adobe Photoshop CC 2015: Awesome, thanks Jeff and team – I know this has been a lot of work to get all these updates out quickly!! ScriptUI is still messed up Dialog Are scaled when the should not be scaled. 
For my Photoshop Prefernces ScriptUI does not trigget events that should be triggered keydown is still not triggered this blog stated that was fixed bug I reported was fixed??? Scale UI setting is set to 100% yet Script UI scales my displat 200% and some dialof do not fit on screen. Photostoshop display requirement is a display 1024×768 or bigger when Scaled My surface Pro 3 Display 2160×1440 scaled 200% makes that display int ao 1080×720 display that does not meet Photoshop Requirement Still Photoshop SCale UI Auto setting will scale my Display 200% setting it to 100% partle solver that problem Photoshop doen not scale its UI most of the time but always scale ScriptUI Dialogs. Script dialogs are not located where they should be. ScriptUI does not trigget events the should be triggered keydown is still not triggered this blog stated that that was fixed. After a reboot the script bug I reportes is now fixed in my machine ???? I take it back The bug is not fixed somehow I opeb CC 2014 where its not a bug. Still fails in CC 2015. function NumericEditKeyboardHandler (event) { try { var keyIsOK = KeyIsNumeric (event) || KeyIsDelete (event) || KeyIsLRArrow (event) || KeyIsTabEnterEscape (event); if (! keyIsOK) { // Bad input: tell ScriptUI not to accept the keydown event event.preventDefault(); /* Notify user of invalid input: make sure NOT to put up an alert dialog or do anything which requires user interaction, because that interferes with preventing the ‘default’ action for the keydown event */ app.beep(); } } catch (e) { ; // alert (“Ack! bug in NumericEditKeyboardHandler: ” + e); } } // key identifier functions function KeyIsNumeric ( event ) { return ( event.keyName >= ‘0’ ) && ( event.keyName <= '9' ) && ! KeyHasModifier ( event ); } function KeyHasModifier ( event ) { return event.shiftKey || event.ctrlKey || event.altKey || event.metaKey; } function KeyIsDelete (event) { // Shift-delete is ok return (event.keyName == 'Backspace') && ! (event.ctrlKey); } function KeyIsLRArrow (event) { return ((event.keyName == 'Left') || (event.keyName == 'Right')) && ! (event.altKey || event.metaKey); } function KeyIsTabEnterEscape (event) { return event.keyName == 'Tab' || event.keyName == 'Enter' || event.keyName == 'Escape'; } function createDialog( ) { var dlg = new Window( 'dialog', 'Example Dialog' ); dlg.maskSt = dlg.add( 'edittext', undefined, '' ); dlg.maskSt.preferredSize.width = 40; dlg.maskSt.addEventListener ('keydown', NumericEditKeyboardHandler ); dlg.btnPnl = dlg.add( 'panel', undefined, 'Process' ); dlg.btnPnl.orientation = "row"; dlg.btnPnl.alignment = "right"; dlg.btnPnl.okBtn = dlg.btnPnl.add( 'button', undefined, 'Ok', { name:'ok' }); dlg.btnPnl.cancelBtn = dlg.btnPnl.add( 'button', undefined, 'Cancel', { name:'cancel' }); return dlg; }; function initializeDialog( w ) { w.maskSt.addEventListener ('keydown', NumericEditKeyboardHandler ); w.maskSt.onChanging = function() { // range check if needed if( Number(this.text) 100 ){ alert(‘Out of range’); // handle however you like this.text = ”; } } w.btnPnl.okBtn.onClick = function ( ) { this.parent.parent.close( 1 ); }; w.btnPnl.cancelBtn.onClick = function ( ) { this.parent.parent.close( 2 ); }; }; runDialog = function( w ) { return w.show( ); }; var win = createDialog(); initializeDialog( win ); runDialog( win ); Thanks a lot … The About Photoshop doesn’t make it clear that the update installed. It still shows as CC 2015.0.0 when I was expecting to see CC 2015.0.1. I do know all the bugs I reported have been FIXED! thanks. 
B Another place to confirm the version is Help>System Info… It clearly shows in the Creative Cloud App that I had to update Photoshop 2 times yesterday. Which tells me there was a bug in the 1st update. Hi B. Moore – No, that’s not it… It doesn’t have anything to do with bugs. There have actually been 3 separate and different updates to Photoshop CC 2015 in the last two days – a large update for the main program, and two much smaller updates for the Export Assets and Adobe Preview CC components within Photoshop… You can see them given here. Not exactly sure why Adobe is now unbundling the Photoshop CC updates into various pieces (small components of the program), but it would be good to know… Jeff, can please help elucidate? Thanks! So…the channel ids for CC 2015 product updates may be found where? Thanks. latest Ps CC 2015.0.1 mac update has brought my workflow to a standstill …. what is going on? extremely slow Hi Kate, Try restoring your preferences: It is disappointing to me that this update does not address a fundamental bug when using Photoshop on Yosemite with a Cintiq. The bug I am referring to is a problem with pressure sensitivity that creates an ugly “shoelace” like trail at the end of a pressure sensitive brush stroke. You can see a sample here: This problem does not exist in CS5 (with Yosemite) , it also does not exist in CS6 unless the White Window Workaround plugin is installed (however not installing it creates other problems.) In all subsequent versions including CC CC2014 and CC2015 this problem is inherent in the software and unavoidable. I do not understand why such a fundamental flaw is not be addressed or acknowledged. This is clearly not a Wacom driver issue as it exists with many drivers and does not exist on PS CS5 or 6 (without white window) . This bug can be controlled in a Windows environment with the Lazy Nezumi plugin which allows better control of smoothing, however no such plugin exists for Mac. The only way to create a smooth consistently tapered stroke is to turn off smoothing completely which results in segmented curves— so that too is no solution. i have brought this issue up repeatedly. i no other artists are aware of it. Why won’t Adobe acknowledge and address it? I am trying to use the crop tool and its so slow. I have a mac book pro and i have also a Imac the imac is working so well! What can i do? Best regards Marie Try restoring your preferences: If you do restore your prefs, please save a copy of your existing prefs in case they turn out to be the culprit. Having the bad prefs can sometimes help us track down the issue. How do i update to the new version of photoshop on my mac. tried the update thru photoshop and no luck, signed out of creative cloud and no luck. Adobe Photoshop Version: 2015.0.0 Are you on a Creative Cloud Team account or an enterprise account? Administrators for your team or company can disable updates. If you’re an individual, try solution 2 here: Available updates not listed Otherwise, try installing the update directly from here: It also fixed a problem I had with the Alt-Drag an Effects Layer method to duplicate that effect onto another layer! My drop shadows would mysteriously change their settings, effectively “fading out” to 0 within two alt-drags. Very annoying and time-consuming having to reset 14 layers on a graphic I update weekly. They now stay consistent. Thanks! 
I am so sick of hearing so many bad things happening and complaints and problems that I refuse to upgrade to Yosemite or to CC15 — even though I’m a cc-member! Let me know when everything works correctly! Been having problems where I Airplay to my Apple TV with a Photoshop PSD and the bottom portion of the canvas window is gray. Do you have the 2015.0.1 update installed? Beware of this upgrade. It deleted my plug-ins (and it looks like I have to purchase completely new ones for compatibility) and there is a really annoying welcome screen that I have yet to get rid of. It didn’t delete your plug-ins. You do not need to purchase new ones. See this: I have tried those options but no update option is available. I still have the Photoshop 2015.0.0. My Artlandia Plugin won’t work in this version, should in the update. Any tips anyone? Thanks for reading See the troubleshooting here: After the update how to we deactivate the real-time healing brush. I want to use the original healing brush. Every time I try to download a file from my computer to PS CC 2015 it says its unable to comply because my OpenGL is disabled, and there is no way to enable it. Update your driver: Update doesn’t show in CC app… “Updates” under help menu is greyed out… See this document: Update doesn’t show in CC app… “Updates” under help menu is greyed out… Also tried the direct update here but got an error telling me to contact the administrator (even though I ran the patch installer with admin privileges) :/ I would suspect you’re a CC Teams or enterprise customer who’s admin has disabled updates. See this doc: I have installed the update to 2015.0.1. I have restarted using Shift+Ctrl+Alt to reset preferences. I have deselected “Use Graphics Processor.” I have shut down and restarted my Windows 7 computer. Even though I’ve performed all these suggestions and workarounds Photoshop STILL either hangs or quits when I use “save for web.” What am I missing? Please help. Have you tried running from a new Admin user account? Thanks for your help. I created new admin account and PS worked while in that account, but not when I went back to my normal account (which I need to be in to access our network.) I’ll see if support contacts me, or I may reach out to them. So, from what I can tell, the advice for getting a piece of software running on one’s machine is to specifically ignore the inherent security features of that machine/OS by using an elevated user account when a standard one should suffice for running any (except integral OS functionality providing) software, once installed. Adobe, do you hear yourselves? Seriously, do you? May I ask what advice you have for the fact that the latest version runs like a sack of shit and despite huge amounts of RAM seems to be treating one’s hard-drive like road underneath a jackhammer? Go purchase time on a supercomputer? Go buy new hard drives? Adobe seems to be determined to take no responsibility at all for the disrupted workflow (and associated loss of earnings), the litany of errors introduced with EVERY new CC build or even with providing responses for support requests in a timely manner. It’s weird, because aren’t we the customers – the ones whom are paying for these products that we then have to spend UNPAID time repairing, by following 2nd rate instructions on a blog?!! Why is the onus on the customer? Apparently proof-reading is also too much to handle – as there are even several spelling and grammatical errors in the article. 
All the new paint-work may hide the rust in Adobe’s ship, but it won’t stop it from sinking. The question is how long do they expect us to keep paying them to carry our cargo before it becomes safer to find a different provider to handle it? Hi Jeff, Thanks for swearing on my blog and hurling insults at me. Running from a fresh admin user account is a an easy troubleshooting step to quickly determine if the problem is damaged permissions/user accounts. Most cases of damaged user accounts/permissions are caused by User Migration Assistant and similar paths when upgrading the OS. I don’t see any support cases nor any products registered under you account, so I’m unable to offer further assistance. Stay classy. Thanks mate finally i am able to run Photoshop. I was regretting the fact i updated my syste, to WINDOWS 10 and was on the verge of formatting when i came across this blog Can we make it clear that you can’t get this or any new updates if you didn’t upgrade your Application Manager to Creative Cloud Desktop app even if you apply the direct update file it will fail what if i don’t want to install the new cc updater and i just want to apply direct manual updates myself I don’t want I don’t want lots of proccess running unstoppable in mac for no reason just to apply updates You can set the CC app not to run under its preferences. That way it only runs when you need to update. Well… I’m having troubles with this version. It does not recognize my graphic card even when the adobe website says it has been tested on it (Nvidia Geforce GT 650M), I’ve updated the drivers and nothing. And the Scrubby zoom has stopped working completely. I now regret updating… =( For the scrubby zoom issue, have you installed this compatibility plug-in? Does it recognize your graphics card, or does it just report that 3D is not compatible? There is a problem of new “CC” version in “CS6” when we click on a shape through “Direct Selection Tool” without selecting the layer it automatically select the shape and focus the border line type but in “CC” it dose not work. First we select the layer then it will work kindly solve this issue. Hi, I replied to your report here: If you want the shape to be selected, make sure you set the option for “Select:” to “All Layers” in the Direct Select options bar. This is the version I have installed: Adobe Photoshop Version: 2015.0.0 20150529.r.88 2015/05/29:23:59:59 CL 1024429 x64 I assume it is the latest one since Creative Cloud panel doesn’t show any newer updates. This version of photoshop still crashes on zoom (windows). I see you are a Creative Cloud Team member – if you’re not seeing updates – your team leader may have updates disabled. It looks like you may also be the admin, so go into your admin account and make sure that the update is enabled from the admin tool. i have updated to 2015.0.1 but am still getting crash to BSOD. it seems that the crash happens after i have run the system along with other software for some time. it does not happen after the reboot from BSOD. is Adobe going to compensate for the recovery of corrupted pc, hdd and files because of these BSOD crashes? running on windows 8.1 with updated gfx/video driver. A system hang or freeze requiring a computer restart usually means a low level failure such as a driver (video card driver, etc), failing/damage hard disk, damaged OS installation, or failing hardware (hard disk, video card, etc). Did you use Windows Update to update the driver? 
Just doing Windows Update won’t give you the latest and greatest drivers. You must go directly to your card manufacturers website to check for driver updates. Determine what video card you have and go *directly to the manufacturers website (nVidia or ATI/AMD)* and download the latest driver: How about making these updates and versions available outside of creative cloud. I need to revert back to cc 2015 as the latest version is not compatible. Cant get support. All I can find is infornataion on how to find previous verions in creative cloud but they are not there. Paying for something I now cant use and lost a months worth of business Are you following the instructions here?: If you still need help, work with a support agent here:
http://blogs.adobe.com/crawlspace/2015/08/photoshop-cc-2015-0-1-update-now-available.html?replytocom=42594
CC-MAIN-2019-51
en
refinedweb
The Super Fast Purger is a plugin to VAC that is capable of distributing requests to groups of Varnish Cache servers. The initial rationale behind the Super Fast Purger is to provide high-performance purging, specific to Varnish Cache, and across data centers. For more information please take a look at the Varnish Administration Console installation guide. No additional steps are required: the Super Fast Purger is an internal part of the VAC. Adding extra headers is a matter of including them on the client side and looking for them in the VCL (check the usage section).

Upon receiving an HTTP request, the Super Fast Purger will look at its X-Cache-Group header, and duplicate the request to all the members of that group. The responses' headers are aggregated into one message and then sent back to the client.

Simple purge

An example of a basic VCL allowing purging on a Varnish server:

sub vcl_recv {
    if (req.method == "PURGE") {
        return (purge);
    }
}

We use a new verb, PURGE, instead of GET to tell Varnish we want to actually purge. It is nice because the rest of the request can be an exact replica of a regular request, ensuring that vcl_hash, for example, will react in the same way.

Issuing purges across data centers is equivalent to remotely requesting content removal over a potentially unreliable medium. Within the Super Fast Purger/VAC, the standard mechanism for checking the integrity of these purge requests is HMAC. It is the preferred security mechanism for the Super Fast Purger when sending purges to Varnish Cache. HMAC provides a credible mechanism for ensuring data integrity and the authentication of the purge message. As an additional note, HMAC is a keyed cryptographic hash construction requiring a secret key to be shared between VAC and the respective Varnish Caches. The cryptographic strength lies in the size of the secret key, as well as the undisclosed makeup of the content being hashed. Brute force would be a common approach to uncover the secret key. However, unlike MD5 or SHA-1, HMAC is substantially less affected by collisions than their underlying hashing algorithms alone.

However, you do not want all your users to be able to purge, so you can protect this operation using an ACL, i.e. IP ranges:

acl local {
    "localhost";
    "192.168.1.0"/24; /* and everyone on the local network */
    !"192.168.1.23";  /* except for the dialing router */
}

sub vcl_recv {
    if (req.method == "PURGE" && client.ip ~ local) {
        return (purge);
    }
}

Note that the X-Cache-Group header is mandatory for the Super Fast Purger. It will be discussed in the Usage section.

Another way to filter who can purge is to use the digest VMOD to check an HMAC sent in the request. For example:

import digest;

sub vcl_recv {
    if (req.method == "PURGE") {
        if (digest.hmac_sha256("super_secret", req.http.host + req.url) != req.http.x-hmac) {
            return (synth(401, "Naughty user!"));
        } else {
            return (purge);
        }
    }
}

Here, we hash a private key ("super_secret"), the host header and the URL of the request. If the header "X-HMAC" doesn't match, the request is invalid (most probably, the issuer didn't know the private key). To create a request like that, we can compute the matching HMAC with openssl:

HMAC=`echo -n "you.com/foo" | openssl dgst -sha256 -hmac "super_secret" | awk '{print $2}'`
curl you.com/foo -H "X-HMAC: $HMAC"

Beware: we need the -n, otherwise the digest would also cover the trailing \n.
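For completeness, a client-side sketch of issuing a purge through the Super Fast Purger, combining the X-Cache-Group header with the HMAC scheme above. This is not from the VAC documentation: the VAC address and the group name are placeholders, and only the header names mentioned in the text are used.

# Sketch only: send a PURGE with the group header and an HMAC computed the
# same way the VCL example checks it.
import hmac
import hashlib
import requests

SECRET = b"super_secret"
host, url = "you.com", "/foo"

signature = hmac.new(SECRET, (host + url).encode(), hashlib.sha256).hexdigest()

resp = requests.request(
    "PURGE",
    "http://vac.example.com" + url,          # placeholder VAC address
    headers={
        "Host": host,
        "X-Cache-Group": "production",       # group of Varnish servers to purge
        "X-HMAC": signature,
    },
)
print(resp.status_code, dict(resp.headers))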
https://docs.varnish-software.com/varnish-administration-console/super-fast-purger/setup/
CC-MAIN-2019-51
en
refinedweb
Difference between revisions of "BeagleBoard Community" Latest revision as of 09:01, 2 May 2019. Note that all of CircuitCo's Beagle specific support wiki pages can be found within elinux.org's Beagleboard:Main_Page namespace. This content is only editable by CircuitCo employees. Contents - 1 Hardware - 2 Availability - 3 I/O Interfaces - 4 BootRom - 5 Code - 6 Compiler - 7 Cortex A8 ARM features - 8 Board recovery - 9 Development environments - 10 Software hints - 11 Graphics accelerator - 12 Beginners guide - 13 FAQ - 14 Links - 15 Other OMAP boards - 16 Subpages Hardware The BeagleBoard M g - Currently six-layer PCB; target: four layer PCB Bottom of rev B: See jadonk's photostream for some more detailed BeagleBoard pictures. Manual See the links below. Schematic Schematic of BeagleBoard Rev. C3 is available as part of the BeagleBoard System Reference Manual. EBVBeagle was a rev C2 board with green PCB boxed with some useful accessories: AC adapter, USB-to-Ethernet adapter, MMC card, USB hub and some cables. - ICETEK-OMAP3530-Mini (Mini Board), a Chinese BeagleBoard clone. - Embest DevKit8000, a compact development board based on TI OMAP3530. - Embest DevKit8500D, a high-performance development board based on TI DM3730. - Embest SBC8530, a compact single board computer based on TI DM3730 and features UART, 4 USB Host, USB OTG, Ethernet, Audio, TF, WiFi/Bluetooth, LCD/VGA, DVI-D and S-Video. - Tianyeit CIP312, a Chinese clone with WLAN, Bluetooth, dual 10/100M Ethernet Contoller-LAN9221I/MCP2512, CAN, touch screen controller, USB hub, USB host, USB OTG based on the DM3730/OMAP3530. 40x40x3.5 mm package - IGEPv2 Platform, a Spanish BeagleBoard clone, with Ethernet, Wi-Fi and Bluetooth - SOM3530, a tiny Chinese System-on-Module BeagleBoard clone with Ethernet. 40x40x4 mm BeagleBoard-based products - Always Innovating Touch Book, see [2] - ViFFF-024 camera board, an extremely sensitive camera for Beagleboard XM, very easy to program and use. I/O Interfaces This section contains notes on some of the BeagleBoard's I/O interfaces. For detailed information about all integrated interfaces and peripherals see the BeagleBoard System Reference Manual. See the peripherals page for external devices like TI's DLP Pico Projector and compatible USB devices. RS-232 The 10-pin RS-232 header is useful for debugging the early boot process, and may be used as a traditional serial console in lieu of HDMI. The pinout on the BeagleBoard is "AT/Everex" or "IDC10". You can buy IDC10 to DB9M adapters in many places as they are commonly used for old PCs, or build one based on this schematic. You may also be able to also need a 9-Pin NullModem cable to connect BeagleBoard to serial port of your PC. Since many systems no longer come with an actual serial port, you may need to purchase a USB-to-serial converter to connect to your BeagleBoard. Be warned that some of them simply do not work. Many of them are based on the Prolific chip, and under Linux require pl2303 module to be loaded. But even when two converters appear to have exactly the same characteristics as listed in /var/log/messages, one simply may not work. Adapters based on the FTDI chipset are generally more reliable. USB There are two USB ports on the BeagleBoard, one with an EHCI (host) controller and another with an OTG (on-the-go, client) controller. EHCI Note that prior to Rev C, the EHCI controller did not work properly due to a hardware defect. The OMAP3 USB ECHI controller on the BeagleBoard only supports high-speed (HS) signaling. 
This simplifies the logic on the device. FS/LS (full speed/low speed) devices, such as keyboards and mice, must be connected via B and lower — The EHCI controller did not work properly due to a hardware defect, and was removed in rev B4. may get [3] for more information. OTG The HS USB OTG (OnTheGo) controller on OMAP3 on the BeagleBoard supports: [4]. JTAG For IC debugging the BeagleBoard sports a 14-pin TI JTAG connector, which is supported by a large number of JTAG emulation products such as OpenOCD. See BeagleBoardJTAG and OMAP3530_ICEPICK for more information. Expansion Boards Many have created expansion boards for the BeagleBoard, typically to add peripherals like LCD controllers (via the LCD header, SRM 5.11) or to break out functions of the OMAP3 like GPIO pins, I2C, SPI, and PWM drivers (via the expansion header, SRM 5.19). External hardware is usually necessary to support these functions because BeagleBoard's 1.8 V pins require level-shifting to interface with other devices. Expansion boards may also power the BeagleBoard itself through the expansion header. The most complete list of expansion boards can be found on the pin mux page, which also documents how different OMAP3 functions may be selected for expansion header pins. The BeagleBoard Expansion Boards category lists more expansion boards.. Update: 2019 the above x-loader link is "not found" Barebox can be used as an alternative bootloader (rather than U-Boot). You will have to generate it two times: - As a x-loader via defconfig: omap3530_beagle_xload_defconfig - As the real boot loader: omap3530_beagle_defconfig e17 as window manager, the AbiWord word processor, the gnumeric spreadsheet application, a NEON accelerated mplayer and the popular NEON accelerated omapfbplay which gives you fullscreen 720p decoding. The directory should contain all the files you need: See the beagle wiki on how to setup your SD card to use all this goodness. KB sized u-boot.bin in the main directory. Note: Due to (patch and binary) size, the: For beagleboard revision C4, above sources will not work. USB EHCI does not get powered, hence devices are not detected... Get a patched version of u-boot from (Update on April 23 - 2010: This repository has been superseded by the U-Boot version found at) Note: If you want to activate I²Board the main OMAP Git repository with additional patches, mainly display & framebuffer related. (Link to Unknown Project) - Tomi's kernel tree, a clone of the main OMAP Git repository: An pld c6000 Linux compiler is available on the TI FTP site. It does NOT support c64x+ core in OMAP3 devices. Not recommended. You can also use introduction' since ANSI C can only describe scalar floating point, where there is only one operation at a time. 2) NEON NEON vectorized single precision operations (two values in a D-register, or four one cycle/instruction throughput (processing two single-precision values at once) for consumer multimedia.>, float32x2_t datatype and vmul_f32() etc) - Use NEON assembly language directly On Cortex-A9, there is a much higher performance floating point unit which can sustain one cycle/instruction throughput, with low result latencies. OMAP4 uses dual-core Cortex-A9+NEON which gives excellent floating-point performance for both FPU and NEON instructions. Board recovery If you played, for example, with the contents of the NAND, it might happen that the BeagleBoard doesn't boot any more (without pressing user button) due to broken NAND content. See BeagleBoard recovery article how to fix this. 
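The NEON discussion above mentions the float32x2_t datatype and the vmul_f32() intrinsic; to make that route concrete, here is a minimal self-contained C example. The compiler flags in the comment are typical for a Cortex-A8 softfp toolchain but may need adjusting for your setup.

/* Minimal NEON intrinsics example for the float32x2_t / vmul_f32() path
 * mentioned above.  Build roughly with:
 *   gcc -O2 -mcpu=cortex-a8 -mfpu=neon -mfloat-abi=softfp neon_mul.c
 * (exact flags depend on your toolchain).
 */
#include <arm_neon.h>
#include <stdio.h>

int main(void)
{
    float a[2] = { 1.5f, 2.0f };
    float b[2] = { 4.0f, 0.5f };
    float r[2];

    float32x2_t va = vld1_f32(a);       /* load two packed singles */
    float32x2_t vb = vld1_f32(b);
    float32x2_t vr = vmul_f32(va, vb);  /* two multiplies in one instruction */
    vst1_f32(r, vr);                    /* store the packed result */

    printf("%f %f\n", r[0], r[1]);
    return 0;
}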
Do not panic and think you somehow 'bricked' the board unless you did apply 12 V. So you likely will have to upgrade the X-Loader. Here's what to do: - Make an SD card with the Angstrom Demo files. See the Beagleboard Wiki Page for more information on making the SD card. - Put the SD card in the BeagleBoard, and boot up to the U-Boot prompt. - Do the first six instructions in the Flashing Commands with U-Boot section. - Reboot the BeagleBoard to see that the new X-Loader is properly loaded. This will update the X-Loader to a newer version that will automatically load uImage. Information on how RSE is used for, for example, Gumstix development is described in this post. See also Using Eclipse with Beagle (for JTAG debugging)..Board, are available here. Current release supports input devices (keyboard/mouse), network and sound. You can watch Android booting on BeagleBoard. the 0xdroid demo video on the BeagleBoard: *Board.. Arch Linux ARM See [5] how to install Arch Linux introduction, too. Software hints This section collects hints, tips & tricks for various software components running on BeagleBoard.Board. Mediaplayer (FFmpeg) There is a thread how to get a mediaplayer with NEON optimization (FFmpeg) to run on BeagleBoard. Includes compiler hints and patches. Java Open source. Oracle Java As of August 2012, there is a binary version of Oracle JDK 7 available for Linux/ARM under a free (but not open source) license. More information: - Download on java.oracle.com - Release notes for JDK 7 Update 6 - Original announcement - Oracle blog with FAQ - Oracle Binary Code License Supported features: - Java SE 7 compliant - Almost all development tools from the Linux/x86 JDK - Client and server JIT compilers - Swing/AWT support (requires X11R6) - Softfloat ABI only Oracle states in the FAQ that they are working on hard float support, as well as a JavaFX 2 port to Linux/ARM. Booting Android (TI_Android_DevKit) from a USB stick Please note - This procedure was tested on BeagleBoard-xM revision B(A3) - An SD card will be still needed to load the kernel. - An SD card will contain boot parameters for the kernel to use a USB stick as the root filesystem Procedure - Download Android Froyo for BeagleBoard-xM from TI - Follow the installation procedure for an SD card card. - Test if Froyo is working with your BeagleBoard-xM with an SD card. - You will notice that Android has a slow performance. That is why we will install root filesystem on the BeagleBoard. - Mount your SD card to your computer. - Now we need to tell the BeagleBoard to use the root filesystem from the /dev/sda1 partition instead of the SD card partition. That is done by overwriting boot.scr on the SD card with this one - Unmount the SD card and insert it into the BeagleBoard and test.. Tutorial: Some videos: - SGX on BeagleBoardBoard home) - Using Google you can search beagleboard.org (including IRC logs) using site:beagleboard.org <search term> Manuals and resources - BeagleBoard System Reference Manual (rev. C4) - BeagleBoard System Reference Manual (rev. C3) - BeagleBoard System Reference Manual (rev. B7) - BeagleBoard System Reference Manual (rev. B6) - BeagleBoard System Reference Manual (rev. B5) - BeagleBoard System Reference Manual (rev. B4) - BeagleBoard System Reference Manual (rev. 
A5) - OMAP3530 processor description and manuals - BeagleBoard at code.google.com - OMAP3530/25 CBB BSDL Model - Micron's multi chip packages (MCPs) for BeagleBoard - BeagleBoard resources page with hardware documentation - Some performance comparison of BeagleBoard Rev. B with some other ARM/PC systems. - OMAP3 pinmux setup - OMAP3 eLinux pinmux page Contact and communication - BeagleBoard discussion list - BeagleBoard open point list and issue tracker - BeagleBoard blog - BeagleBoard chat: #beagle channel on irc.freenode.net (archives)Board - LinuxDevices article about Digi-Key launch - LinuxDevices article about BeagleBoard Rev C, Beagle MID from HY Research, Touch Book and Sponsored Projects Contest - Linuxjournal article on the BeagleBoard Books BeagleBoard based training materials BeagleBoard wiki pages -BoardBoard -Board from Make:Online - Robert's private BeagleBoard wiki (please don't add anything there, do it here. It will help to avoid splittering. Thanks!) - Felipe's blog about D1 MPEG-4 decoding using less than 15% of CPU with help of DSP - Embedded Mediacenter based on BeagleBoard (German) - Floating Point Optimization with VFP-lite and NEON introduction - BeagleBoard setting date via GPS - Complete embedded Linux training labs on the BeageBoard - BeagleBoardPWM Details about PWM on the BeagleBoard - Compatible peripherals and other hardware BeagleBoard photos - BeagleBoard pictures at flickr - BeagleBoard and USRP - Modify SDP3430 QUART cable for BeagleBoard - MythTV on BeagleBoard BeagleBoard videos - BeagleBoard Beginnings - BeagleBoard in the Living Room - BeagleBoard 3D, Angstrom, and Ubuntu - testsprite with BeagleBoard - BeagleBoard LED demo - LCD2USB attached to a BeagleBoard - Video blending in hardware - BeagleBoard Running Angstrom (VGA) on DLP Pico Projector - SGX on BeagleBoard working with Linux 2.6.27 - Not on Beagle OMAP3530: Ubuntu 7.04 on on OMAP3430 SDP - BeagleBoard booting Android - BeagleBoard, SGX, and libfreespace demo BeagleBoard manufacturing - BeagleBoard Solder Paste Screening - BeagleBoard Assembly Inspection - BeagleBoard Functional Test - BeagleBoard Reflow - BeagleBoard Assembly at Circuitco Other OMAP boards - OMAP 4430 Based 40X40 mm, Wi-Fi and mm) OMAP35XX-based system on module in the world! (It is not-Gumstix Overo is smaller at 17 mm*58 mm) - OMAP35x based CM-T3530 from CompuLab Subpages <splist parent= showparent=no sort=asc sortby=title liststyle=ordered showpath=no kidsonly=no debug=0 />
https://elinux.org/index.php?title=BeagleBoard_Community&diff=491381&oldid=10079
CC-MAIN-2019-51
en
refinedweb
XML Schema Attributes View

The Attributes view for XML Schemas presents the properties for the selected component in the schema diagram. By default, it is displayed on the right side of the editor. If the view is not displayed, it can be opened by selecting it from the menu.

The default value of a property is presented in the Attributes view with a blue foreground. Properties that cannot be edited are rendered with a gray foreground. A non-editable category that contains at least one child is rendered in bold. Bold properties are properties with values set explicitly to them. Properties for components that do not belong to the currently edited schema are read-only, but if you double-click them you can choose to open the corresponding schema and edit them. You can edit a property by double-clicking it or by pressing Enter. For most properties you can choose valid values from a list or you can specify another value. If a property has an invalid value or a warning, it will be highlighted in the table with the corresponding foreground color. By default, properties with errors are highlighted with red and properties with warnings are highlighted with yellow. You can customize these colors from the Document checking user preferences.

For imports, includes, and redefines, the properties are not edited directly in the Attributes view. Instead, a dialog box will open that allows you to specify properties for them. The schema namespace mappings are not presented in the Attributes view. You can view or edit them by choosing Edit Schema Namespaces from the contextual menu on the schema root. See more in the Edit Schema Namespaces section.

The Attributes view offers the following actions on its toolbar and contextual menu:

Add - Allows you to add a new member type to a union's member types category.
Remove - Allows you to remove the value of a property.
Move Up - Allows you to move the current member up in a union's member types category.
Move Down - Allows you to move the current member down in a union's member types category.
Copy - Copies the attribute value.
Go to Definition - Shows the definition for the selected type.
Show Facets - Allows you to edit the facets for a simple type.
https://www.oxygenxml.com/doc/versions/21.1/ug-editor/topics/xml-schema-diagram-attributes-view-x-editing2.html
CC-MAIN-2019-51
en
refinedweb
Provided by: libsystemd-dev_239-7ubuntu10_amd64

NAME
sd_machine_get_class, sd_machine_get_ifindices - Determine the class and network interface indices of a locally running virtual machine or container.

SYNOPSIS
#include <systemd/sd-login.h>

int sd_machine_get_class(const char* machine, char **class);

int sd_machine_get_ifindices(const char* machine, int **ifindices);

DESCRIPTION
sd_machine_get_class() may be used to determine the class of a locally running virtual machine or container with the specified name. The returned string is either "vm" or "container" and needs to be freed with the libc free(3) call after use.

sd_machine_get_ifindices() may be used to determine the numeric indices of the network interfaces on the host that point towards the specified locally running virtual machine or container. The returned array needs to be freed with the libc free(3) call after use.

RETURN VALUE
On success, these calls return 0 or a positive integer. On failure, these calls return a negative errno-style error code.

ERRORS
Returned errors may indicate the following problems:

-ENXIO
The specified machine does not exist or is currently not running.

-EINVAL
An input parameter was invalid (out of range, or NULL, where that is not accepted).

-ENOMEM
Memory allocation failed.

NOTES
These APIs are implemented as a shared library, which can be compiled and linked to with the libsystemd pkg-config(1) file.

SEE ALSO
systemd(1), sd-login(3), systemd-machined.service(8), sd_pid_get_machine_name(3)
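A small usage sketch based on the SYNOPSIS above. The machine name "mycontainer" is a placeholder, and treating the positive return value of sd_machine_get_ifindices() as the number of entries follows the RETURN VALUE section above; verify against your libsystemd version.

/* Usage sketch for the functions in the SYNOPSIS above.
 * Link with: pkg-config --cflags --libs libsystemd
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <systemd/sd-login.h>

int main(void)
{
    char *class = NULL;
    int *ifindices = NULL;
    int r, n, i;

    r = sd_machine_get_class("mycontainer", &class);   /* placeholder machine name */
    if (r < 0) {
        fprintf(stderr, "sd_machine_get_class: %s\n", strerror(-r));
        return 1;
    }
    printf("class: %s\n", class);
    free(class);                                        /* returned string must be freed */

    n = sd_machine_get_ifindices("mycontainer", &ifindices);
    if (n >= 0) {
        for (i = 0; i < n; i++)                         /* assume return value is the count */
            printf("ifindex[%d] = %d\n", i, ifindices[i]);
        free(ifindices);                                /* returned array must be freed */
    }
    return 0;
}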
http://manpages.ubuntu.com/manpages/cosmic/man3/sd_machine_get_class.3.html
CC-MAIN-2019-51
en
refinedweb
Development of program of graphics drawing functions of one variable. The function is defined by the formula. An example of drawing on the canvas graph of a function of one variable y = f(x) The function is defined by the formula. The task Develop the application, in which a graph of the function is built. y(x) = sin(x) The graph of function must be displayed in a separate form. The mathematical formulation of the problem To build any graph of the function y = f(x) must specify a rectangular area of the screen, in which the graph of the function will be displayed. The rectangular area of the screen is defined by the coordinates of extreme points (Figure 1): – (x1; y1) – left-top corner of the screen; – (x2; y2) – right-bottom corner of the screen. Figure 1. The rectangular area of the screen In the construction of on-screen graphs of functions, it is necessary to implement the scaling rectangle with coordinates (x1; y1), (x2; y2) into the rectangle with the screen coordinates (xx1; yy1), (xx2; yy2) as shown in Figure 2. Figure 2. Scaling of graphs Scaling the axes OX and OY is implemented using linear dependencies: When calculating the vertical screen coordinates, you must use the sign to take into account that the line numbering goes from top to bottom. Progress 1. Run Borland C++ Builder. Create the project as “VCL Form Application”. Automatically, main form of application is created. Name of form “Form1”. 2. Creating the basic form. Place on the form such components: – five components of TLabel type. The objects with names Label1, Label2, Label3, Label4, Label5 are created; – five components of TEdit type. The objects with names Edit1, Edit2, Edit3, Edit4, Edit5 are created; – one component of TButton type. The object with name “Button1” is created. Correct the positions of components on the form as shown in Figure 3. Figure 3. Placing of components on the form Form1 3. Setting of the form. To had is the correct form Form1, in the Object Inspector, do the following: – from the tab “Action” set the property Caption = “Graph of the function of one variable” (form title); – from the tab “Visual” set the property BorderStyle = “bsDialog” (Figure 4). As a result, the buttons of form control will be hided; – from the tab Miscellaneous set the property Position = “poScreenCenter” (Figure 5). This shows the form in the center of the screen when application is run. Figure 4. Property “BorderStyle” of form Form1 Figure 5. Property “Position” 4. Setting of properties and size of components, which are placed on the form. It is necessary to realize the setting of properties of the following components: – in the component Label1 the property Caption = “Number of dots horizontal, n =”; – in the component Label2 the property Caption = “The left boundary, x1 =”; – in the component Label3 the property Caption = “The right boundary, x2 =”; – in the component Label4 the property Caption = “The top boundary, y1 =”; – in the component Label5 the property Caption = “The bottom boundary, y2 =”; – in the component Button1 the property Caption = “Show the graph…”. Also, you need to change the size and position on the form of the component Button1. Approximate views of the form with the placement of components is shown in Figure 6. Figure 6. General view of form “Form1” 5. Programming of event of Form1 activation. When the application is running, you need to program the event OnActivate of main form of application. An example of event programming in C++ Builder is described here. 
Listing of event handler OnActivate is the following: void __fastcall TForm2::FormActivate(TObject *Sender) { Edit1->Text = "30"; Edit2->Text = "-5"; Edit3->Text = "5"; Edit4->Text = "-2"; Edit5->Text = "2"; Edit1->SetFocus(); // set the focus into Edit1 } In the event handler the fields of type TEdit are filled. These fields are the coordinates of the rectangle area of the screen in which graph is shown. The rectangle area is set by coordinates of left-top corner (x1; y1) and right-bottom corner (x2; y2). 6. Creating a new form and show of the function graph. Create the form named “Form2” by the example as shown in Figure 7. An example of creating of a new form in C++ Builder is shown here. The form «Form2” is defined in the files “Unit2.h” and “Unit2.cpp”. Place on the form components of the following types: – component of type TButton (button). The object “Button1” is created automatically; – component of type TImage. The object “Image1” is created automatically. In this component the graph of function sin(x) will be showed. Figure 7. Form “Form2”, which displays the graph of function Using “Object Inspector” you need to set the following properties of components: – in component Button1 property Caption = “OK”; – in component Button1 property ModalResult = “mrOk”. This means, when user clicks on the button “Button1” – main form will be closed with the returning code “mrOk”. Using “Object Inspector” you need to set the following properties of form “Form2”: – from the tab Action set the property Caption = “Graph of function sin(x)”; – from the tab Visual property BorderStyle = “bsDialog”. It means that the form window will be shown as a dialog window; – from the tab Miscellaneous property Position = “poScreenCenter”. As a result, the form will be showed in the center of the screen. Also, you need to correct the sizes and positions of components Button1 and Image1. As a result, the form “Form2” will have the following view (Figure 8). Figure 8. The form “Form2” 7. Programming of additional functions of scaling and calculating sin(x). 7.1. Inputting the internal variables in the Form2. It is needed to enter the following internal variables in the text of file “Unit2.h” (Figure 9). For this, you need to do the following actions. 1. Go to the header file “Unit2.h” (Figure 9). 2. Input the four variables in the “private” section of class TForm2: int xx1, xx2, yy1, yy2; These variables correspond to the screen coordinates (see Figure 2 b). 3. In section “public” of class TForm2, you need to enter five variables: float x1, x2, y1, y2; int n; These variables correspond to the actual coordinates (Figure 2 a) of the rectangular area, which displays a graph. The variable n sets the number of points, which are connected between them. The greater the value of n, the more smoothly the graph appears on the screen. The value of these variables is filled from the main form “Form1”. Therefore they are placed in the section “public”. Figure 9. 
The view of file “Unit2.h” For now, the code listing of file “Unit2.h” is the following: //------------------------------------------------ #ifndef Unit2H #define Unit2H //------------------------------------------------ #include <Classes.hpp> #include <Controls.hpp> #include <StdCtrls.hpp> #include <Forms.hpp> #include <ExtCtrls.hpp> //------------------------------------------------; // actual coordinates of rectangular area int n; // number of points, which are connected between them }; //-------------------------------------------------- extern PACKAGE TForm2 *Form2; //-------------------------------------------------- #endif 7.2. Adding the conversion functions and computation functions of TForm2 class. In the module of form Form2 you need to create two functions of conversion of actual coordinates in the screen coordinates. The functions are named ZoomX() and ZoomY(). Also, you need to add function of calculation sin(x). The name of function is func(). First of all, in the file “Unit2.h”, are added prototypes of functions. The prototypes are added in the section “public”. The snippet of listing of file “Unit2.h” is following: ...; int n; int ZoomX(float x); int ZoomY(float y); float func(float x); }; ... Next step, you need to go to the file “Unit2.cpp” (Figure 10). In this file the implementation of class TForm2 is described. At the end of file the next program code is added:); // function for which a graph is created return ret; } In the code snippet above, the functions ZoomX() and ZoomY() are getting as the input parameters the corresponding values x and y, which are the actually coordinates. Then, the conversion by formulas from mathematical formulation of the problem is realized. Function func() gets as input parameter the value of local variable x. In the body of function is calculated the value of sin(x). In this place you can insert any other own function. Figure 10. File “Unit2.cpp” with the entered functions ZoomX(), ZoomY() and func(). At the moment, the code of file “Unit2.cpp”); return ret; } To use the sin(x) function, is added the string #include <math.h> in the file “Unit2.cpp”. This string connects the standard library of mathematical functions. The function sin(x) is realized in this library. 8. The event programming of Form2 activation. The graph of function is displayed in the form Form2, when you clicked on the button “Show the graph…” of form Form1. Therefore it is advisable to program the output of graph in the event OnActivate of form Form2. The event handler is realized in the file “Unit2.cpp”. The code of event handler “OnActivate” of form “Form2” is the following: void __fastcall TForm2::FormActivate(TObject *Sender) { TCanvas * canv; // additional variable int tx, ty; int i; float x, y, h; canv = Image1->Canvas; // 1. Setting the boundaries of screen coordinates xx1 = 0; yy1 = 0; xx2 = Image1->Width; yy2 = Image1->Height; // 2. Drawing of graph canv->Pen->Color = clBlue; canv->Brush->Color = clWhite; canv->Rectangle(0, 0, Image1->Width, Image1->Height); // 2.1. Drawing of coordinate axes canv->Pen->Color = clBlack; // 2.2. Take the point of origin X of the screen. tx = ZoomX(0); ty = ZoomY(y1); canv->MoveTo(tx,ty); // draw a line of coordinates of the X axis tx = ZoomX(0); ty = ZoomY(y2); canv->LineTo(tx,ty); // 2.3. Take the point of origin X of the screen. canv->Pen->Color = clBlack; tx = ZoomX(x1); ty = ZoomY(0); canv->MoveTo(tx,ty); // Draw the Y axis. tx = ZoomX(x2); ty = ZoomY(0); canv->LineTo(tx,ty); // 3. Drawing of the graph. 
canv->Pen->Color = clRed; // color canv->Pen->Width = 2; // line thickness // coordinates of the first point x = x1; y = func(x); h = (x2-x1)/n; tx = ZoomX(x); ty = ZoomY(y); canv->MoveTo(tx,ty); // The cycle of enumerating of points and drawing the connecting lines for (i = 0; i < n; i++) { x = x + h; y = func(x); tx = ZoomX(x); ty = ZoomY(y); canv->LineTo(tx,ty); } } In the short view the file “Unit2.cpp” is TForm2::ZoomY(float y) { ... } float TForm2::func(float x) { ... } void __fastcall TForm2::FormActivate(TObject *Sender) { ... } //-------------------------------------------------------------- 9. Programming of event of click on the button “Show the graph…” of form Form1. The last stage is the programming of event of click on the button of show the graph. For this you need to do the following actions. 9.1. Connecting of Form2 to the Form1. An example of creating the new form of application is described here in details. To connects the new form “Form2” to the form “Form1” you need to add the string “Unit1.cpp” in the top of file” #include "Unit2.h" Thereafter, the methods of “TForm2” class are available from the Form1. For now, the listing of file “Unit2.cpp” is the following: Activate(TObject *Sender) { Edit1->Text = "30"; Edit2->Text = "-5"; Edit3->Text = "5"; Edit4->Text = "-2"; Edit5->Text = "2"; Edit1->SetFocus(); // set the input focus to Edit1 } //--------------------------------------------------------------- 9.2. Programming of event of click on the button “Show the graph…”. The event handler Button1Click() of clicking on the button “Show the graph…” is following: void __fastcall TForm1::Button1Click(TObject *Sender) { Form2->n = StrToInt(Edit1->Text); Form2->x1 = StrToFloat(Edit2->Text); Form2->x2 = StrToFloat(Edit3->Text); Form2->y1 = StrToFloat(Edit4->Text); Form2->y2 = StrToFloat(Edit5->Text); Form2->ShowModal(); } In the code is formed the variables n, x1, x2, y1, y2. 10. Running the application. After, you can run the application (Figure 11). Figure 11. The graph of sin(x).
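Note: the listings of ZoomX(), ZoomY() and func() from step 7.2 come through garbled above (only fragments such as "return ret; }" survive), and the scaling formulas themselves are missing from the mathematical formulation. The following is a minimal sketch of what these three methods typically look like, reconstructed from the description of the linear mapping and the remark about the inverted vertical axis; the exact bodies are an assumption, not the article's original code.

// Sketch (assumption): map real coordinates (x1..x2, y1..y2) onto
// screen coordinates (xx1..xx2, yy1..yy2); the vertical mapping is
// flipped because screen rows are numbered from top to bottom.
int TForm2::ZoomX(float x)
{
    return (int)(xx1 + (x - x1) * (xx2 - xx1) / (x2 - x1));
}

int TForm2::ZoomY(float y)
{
    return (int)(yy2 - (y - y1) * (yy2 - yy1) / (y2 - y1));
}

float TForm2::func(float x)
{
    float ret = sin(x); // the function being plotted; replace with any other f(x)
    return ret;
}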
http://www.bestprog.net/en/2016/05/29/012-development-of-program-of-graphics-drawing-functions-of-one-variable-the-function-is-defined-by-the-formula/
CC-MAIN-2017-39
en
refinedweb
Timeline Apr 22, 2009: - 10:02 PM Changeset [5576] by - mailtotracplugin/0.11/mail2trac/email2ticket.py remove accidentally committed pdb - 10:01 PM Changeset [5575] by - mailtotracplugin/0.11/mail2trac/email2ticket.py set resolution to None or you get "new defect: fixed" - 9:56 PM Changeset [5574] by - mailtotracplugin/0.11/mail2trac/email2ticket.py populate ticket fields from defaults - 9:50 PM Changeset [5573] by - imagetracplugin/0.11/imagetrac/image.py check to see if the attr exists - 9:28 PM Changeset [5572] by - geotracplugin/0.11/geotrac/ticket.py allow entry of lat, lonas location - 9:20 PM Changeset [5571] by - imagetracplugin/0.11/imagetrac/image.py handle case where ticket was created without an image - - 6:00 PM TracSentinelPlugin edited by - (diff) - 5:59 PM Changeset [5569] by - tracsentinelplugin/0.11/README Really fixed formatting this time (I think). - 5:58 PM Changeset [5568] by - tracsentinelplugin/0.11/README Fixed formatting of README. - 5:57 PM Changeset [5567] by - tracsentinelplugin/0.11/README Changed from mime-type property to correct svn:mime-type. - 5:51 PM Changeset [5566] by - tracsentinelplugin/0.11/README Added README. - 5:00 PM Changeset [5565] by - autoupgradeplugin/0.11/autoupgrade/autoupgrade.py import exception_to_unicode - - 4:51 PM Changeset [5563] by - traclegosscript/anyrelease/example/mobilegeo/mobilegeo - traclegosscript/anyrelease/example/mobilegeo/setup.py name changes from SimpleTrac - 4:49 PM Changeset [5562] by - traclegosscript/anyrelease/example/mobilegeo copying simple as the basis for the MobileGeoTrac type - 4:22 PM ImageTracPlugin edited by - (diff) - 4:19 PM ImageTracPlugin edited by - (diff) - - 4:17 PM ImageTracPlugin created by - New hack ImageTracPlugin, created by k0s - 4:17 PM Changeset [5560] by - imagetracplugin - imagetracplugin/0.11 New hack ImageTracPlugin, created by k0s - 3:34 PM TracSentinelPlugin edited by - (diff) - 2 - 9:51 AM Ticket #4877 (TracHtmlNotificationPatch - Patch is empty) closed by - wontfix: The source isn't on svn, I battled for a while with this but got it … - 9:12 AM Ticket #4963 (PermRedirectPlugin - Redirection problem with different hostnames in NAT) created by - === Facts === * Server is named lets say 'herbert' with Trac * … - 9:02 AM angel created by - New user angel registered Apr 21, 2009: -. - 8:36 PM Changeset [5557] by - loomingcloudsplugin/0.11/loomingclouds/__init__.py - loomingcloudsplugin/0.11/loomingclouds/autocompletetags.py adding text handler for tags....maybe there is a better way of doing this? - 8:27 PM TracSentinelPlugin edited by - (diff) - 8:04 PM TracSentinelPlugin created by - New hack TracSentinelPlugin, created by tgish - 8:04 PM Changeset [5556] by - tracsentinelplugin - tracsentinelplugin/0.11 New hack TracSentinelPlugin, created by tgish - 7:47 PM Changeset [5555] by - loomingcloudsplugin/0.11/loomingclouds/htdocs/js/autocomplete.js including JS for autocompletion - 7:46 PM Changeset [5554] by - loomingcloudsplugin/0.11/loomingclouds/htdocs/css/autocomplete.css copying needed css for autocomplete - 7:45 PM Changeset [5553] by - loomingcloudsplugin/0.11/loomingclouds/htdocs/css making directory for CSS - 6:42 PM Ticket #4962 (GalleryPlugin - galleryplugin) created by - Is there a gallery plugin that works with v0.10? This page doesn't … - 6:42 PM Ticket #4961 (GalleryPlugin - galleryplugin) created by - Is there a gallery plugin that works with v0.10? 
This page doesn't … - 6:42 PM Ticket #4960 (GalleryPlugin - 0.10 version) created by - Is there a gallery plugin that works with v0.10? This page doesn't … - 6:38 PM abalter created by - New user abalter registered - 6:00 PM Ticket #4959 (AddCommentMacro - inconsistent error handling) created by - read-only pages do not raise a TracError when there are insufficient … - 3:05 PM Ticket #4954 (TracHoursPlugin - TypeError: __init__() takes at most 12 arguments (13 given)) closed by - duplicate: Duplicate of #4955. - 2:15 PM Changeset [5552] by - geotracplugin/0.11/geotrac/ticket.py - geotracplugin/0.11/geotrac/web_ui.py "fix" some unicode errors - 2:09 PM Changeset [5551] by - geotracplugin/0.11/geotrac/ticket.py blindly assume utf-8 - 1:48 PM Changeset [5550] by - geotracplugin/0.11/geotrac/templates/mapscript.html - geotracplugin/0.11/geotrac/ticket.py updated map with tiles in spherical mercator (google projection) - 8:52 AM Ticket #4688 (TicketChangePlugin - Wrong 'TracWebAdmin' dependancy when installing on 0.11.3) reopened by - - 7:31 AM Changeset [5549] by - downloadsplugin/0.12 - Creating 0.12 branch. - 7. - 3:13 AM Ticket #4956 (ExcelViewerPlugin - http link for ticket number) created by - excel cell with trac ticket number not showing http link of that ticket Apr 20, 2009: - 9 - 8:52 PM Ticket #4955 (TracHoursPlugin - TypeError: __init__() takes at most 12 arguments (13 given)) created by - I installed, enable component, execute trac-admin <env> upgrade and … - 8:46 PM Ticket #4954 (TracHoursPlugin - TypeError: __init__() takes at most 12 arguments (13 given)) created by - After Install, enable the component and run trac-admin <env> upgrade, … - 5:17 PM Changeset [5546] by - ticketsidebarproviderplugin/0.11/ticketsidebarprovider/htdocs/css/ticket-sidebar.css resize the ticket box sensibly - 4:20 PM TicketModeratorPlugin edited by - (diff) - 4:09 PM Ticket #4953 (GeoTicketPlugin - geopy - also for openstreetmap?) created by - could this work also with openstreetmap? - 3:52 PM Changeset [5545] by - geotracplugin/0.11/geotrac/mail.py delete the subject so you can set it - 3:43 PM Changeset [5544] by - geotracplugin/0.11/geotrac/mail.py - geotracplugin/0.11/setup.py typos - 3:34 PM Changeset [5543] by - geotracplugin/0.11/geotrac/mail.py - geotracplugin/0.11/setup.py - adding email interface - giving version 0.1 (beta!) - 3:15 PM Changeset [5542] by - mailtotracplugin/0.11/mail2trac/email2ticket.py break fields into a method so it can be consumed - 3:09 PM BadContent edited by - (diff) - 3:08 PM ProjectManagementIdeas edited by - Minor small edits. (diff) - 2:54 PM ProjectManagementIdeas edited by - Refine dependency rewrite (diff) - 2:53 PM Ticket #4952 (TracTicketStatsPlugin - No graph displays) created by - Just finished installing the TracTicketStats to our trac environment, … - 2:51 PM ProjectManagementIdeas edited by - Revise dependency descriptions (diff) - 2:32 PM ProjectManagementIdeas edited by - Ooops ! wrong little link to the ouuuu ... :$ (diff) - 2:29 PM ProjectManagementIdeas edited by - Implementation details ... bah! (diff) - 2:22 PM Changeset [5541] by - ticketsidebarproviderplugin/0.11/ticketsidebarprovider/htdocs/css/ticket-sidebar.css clear after for vertical alignment - 9. - 7. - 7:34 AM Changeset [5538] by - tracwikiprintplugin/0.11/setup.py - tracwikiprintplugin/0.11/wikiprint/web_ui.py Fixed parameter order (version and date were swapped) …
https://trac-hacks.org/timeline?from=2009-04-22&daysback=7&authors=
CC-MAIN-2017-39
en
refinedweb
complex − basics of complex mathematics #include <complex.h>. Your C-compiler can work with complex numbers if it supports the C99 standard. Link with -lm. The imaginary unit is represented by I. /* check that exp(i*pi) == -1 */ #include <math.h> /* for atan */ #include <complex.h> main() { double pi = 4*atan(1); complex z = cexp(I*pi); printf("%f+%f*i\n", creal(z), cimag(z)); } cabs(3), carg(3), cexp(3), cimag(3), creal(3)
http://man.sourcentral.org/SLES9/5+complex
CC-MAIN-2017-39
en
refinedweb
remctl man page remctl, remctl_result_free — Simple remctl call to a remote server Synopsis #include <remctl.h> struct remctl_result * remctl(const char *host, unsigned short port, const char *principal, const char **command); void remctl_result_free(struct remctl_result *result); Description.. If the client needs to control which ticket cache is used without changing the environment, use the full client API along with remctl_set_ccache(3).,). Return Value). Compatibility This interface has been provided by the remctl client library since its initial release in version 2.0. The default port was changed to the IANA-registered port of 4373 in version 2.11. Support for IPv6 was added in version 2.4. Caveats.. Notes The remctl port number, 4373, was derived by tracing the diagonals of a QWERTY keyboard up from the letters "remc" to the number row. Author Russ Allbery <[email protected]> Copyright 2007, 2008, remctl_new(3), remctl_open(3), remctl_command(3), remctl_commandv(3), remctl_output(3), remctl_close(3) The current version of the remctl library and complete details of the remctl protocol are available from its web page at <>.
https://www.mankier.com/3/remctl
CC-MAIN-2017-39
en
refinedweb
11 07 2009. Analyzing current model Let’s start with analyzing the current model shown below. We can see that Album and Photo have some fields in common: Id, Title, Description and Visible. Currently these fields are duplicated to both tables. Photo Gallery. This is my current very simple model. I made this model so trivial because it is easier to study new code stuff step-by-step. I played already with UI a little bit and using separate classes is pretty painful. Also I know I want to add new class to model in near future – my camera is able to make some simpler movie clips and why not to store clips in same system. Clips class will be almost like Photo but there are some more attributes. Adding another class to model in the way Photo is there makes coding more complex. So we need generalization. GalleryItem class I created class called GalleryItem as generalization of Album and Photo. GalleryIitem generates ID-s when new gallery item is added. As you may notice the associations on the model are same as before. I don’t want to change them now because currently I have no reason to play them around. My new model is shown on the following image. Photo Gallery. Album and Photo are now subclasses of GalleryItem. Associations between Album and Photo are same as before. Refactoring existing model may be pretty hard headache on Entity Framework designer. It is still same raw and unstable as it has always been. No matter if you refactor existing model or create a new one, here are some guiding resources for you: - Entity Framework Modeling: Table Per Type Inheritance - deCast – Entity Framework Modeling: Implementing Table Per Type - How to: Define a Model with Table-per-Type Inheritance (Entity Framework) Database To get better idea what is going on take a look at my database diagram. It looks almost the same as my class diagram. Database diagram below shows how to relate tables for table-per-type inheritance mapping. Code Due to these changes we have to modify also the source code of gallery. We have new class called GalleryItem and we have to change our custom context class MyGalleryEntities. Also Album and Photo have some changes. Let’s look at business classes first. public abstract class GalleryItem { public virtual int Id { get; set; } public virtual string Description { get; set; } public virtual string Title { get; set; } public virtual bool Visible { get; set; } } public class Album : GalleryItem { public virtual ICollection<Album> ChildAlbums { get; set; } public virtual Album ParentAlbum { get; set; } public virtual ICollection<Photo> Photos { get; set; } public virtual int ParentId { get; set; } } public class Photo : GalleryItem { public virtual Album Album { get; set; } public virtual string FileName { get; set; } public virtual bool IsGalleryThumb { get; set; } } Changes shoul be pretty straightforward. Only thing that model before doesn’t reflect visually is that GalleryItem is abstract class. We cannot create instances of it but we can extend it. MyGalleryEntities class has no more separate properties for albums and photos. There is one property that returns ObjectSet of GalleryItems. We have to base all our queries on GalleryItems collection. Now let’s see new version of MyGalleryEntities class. NB!Don’t forget these two mandatory lines in constructor. Without them your collection and object references are always null. 
public class MyGalleryEntities : ObjectContext { private ObjectSet<GalleryItem> _galleryItems; public MyGalleryEntities() : base("Name=MyGalleryEntities") { DefaultContainerName = "MyGalleryEntities"; // MANDATORY LINES!!! ContextOptions.DeferredLoadingEnabled = true; ContextOptions.ProxyCreationEnabled = true; _galleryItems = CreateObjectSet<GalleryItem>(); } public ObjectSet<GalleryItem> GalleryItems { get { return _galleryItems; } } } Here are some examples of queries against new MyGalleryEntities. public void Examples() { var context = new MyGalleryEntities(); var all = from g in context.GalleryItems select g; var albums = from a in context.GalleryItems.OfType<Album>() select a; var photos = from p in context.GalleryItems.OfType<Photo>() select p; } Now my gallery made through the first refactoring that touched database, model and source code. Everything works as expected and I can go on with other tasks on my gallery! 🙂 Entity Framework 4.0: How to use POCOs Entity Framework 4.0: On the way to Composite Pattern
http://gunnarpeipman.com/2009/07/entity-framework-4-0-pocos-and-table-per-type-inheritance-mapping/
CC-MAIN-2017-39
en
refinedweb
Solving Bessel's Equation numerically Posted February 07, 2013 at 09:00 AM | categories: ode, math | tags: | View Comments Updated March 06, 2013 at 06:33 PM Reference Ch 5.5 Kreysig, Advanced Engineering Mathematics, 9th ed. Bessel's equation \(x^2 y'' + x y' + (x^2 - \nu^2)y=0\) comes up often in engineering problems such as heat transfer. The solutions to this equation are the Bessel functions. To solve this equation numerically, we must convert it to a system of first order ODEs. This can be done by letting \(z = y'\) and \(z' = y''\) and performing the change of variables: $$ y' = z$$ $$ z' = \frac{1}{x^2}(-x z - (x^2 - \nu^2) y$$ if we take the case where \(\nu = 0\), the solution is known to be the Bessel function \(J_0(x)\), which is represented in Matlab as besselj(0,x). The initial conditions for this problem are: \(y(0) = 1\) and \(y'(0)=0\). There is a problem with our system of ODEs at x=0. Because of the \(1/x^2\) term, the ODEs are not defined at x=0. If we start very close to zero instead, we avoid the problem. import numpy as np from scipy.integrate import odeint from scipy.special import jn # bessel function import matplotlib.pyplot as plt def fbessel(Y, x): nu = 0.0 y = Y[0] z = Y[1] dydx = z dzdx = 1.0 / x**2 * (-x * z - (x**2 - nu**2) * y) return [dydx, dzdx] x0 = 1e-15 y0 = 1 z0 = 0 Y0 = [y0, z0] xspan = np.linspace(1e-15, 10) sol = odeint(fbessel, Y0, xspan) plt.plot(xspan, sol[:,0], label='numerical soln') plt.plot(xspan, jn(0, xspan), 'r--', label='Bessel') plt.legend() plt.savefig('images/bessel.png') You can see the numerical and analytical solutions overlap, indicating they are at least visually the same. Copyright (C) 2013 by John Kitchin. See the License for information about copying.
http://kitchingroup.cheme.cmu.edu/blog/2013/02/07/Solving-Bessel-s-Equation-numerically/
CC-MAIN-2017-39
en
refinedweb
Earlier in the chapter, I described the manager architecture, listed each manager and its interfaces, and talked about how the CLR and the host go about obtaining manager implementations. In this section, I take a brief look at each manager to understand how it can be used by an application to customize a running CLR. The CLR has a default, well-defined set of steps it follows to resolve a reference to an assembly. These steps include applying various levels of version policy, searching the global assembly cache (GAC), and looking for the assemblies in subdirectories under the application's root directory. These defaults include the assumption that the desired assembly is stored in a binary file in the file system. These resolution steps work well for many application scenarios, but there are situations in which a different approach is required. Remember that CLR hosts essentially define a new application model. As such, it's highly likely that different application models will have different requirements for versioning, assembly storage, and assembly retrieval. To that end, the assembly loading manager enables a host to customize completely the assembly loading process. The level of customization that's possible is so extensive that a host can implement its own custom assembly loading mechanism and bypass the CLR defaults altogether if desired. Specifically, a host can customize the following: The location from which an assembly is loaded How (or if) version policy is applied The format from which an assembly is loaded (assemblies need not be stored in standalone disk files anymore) Let's take a look at how Microsoft SQL Server 2005 uses the assembly loading manager to get an idea of how these capabilities can be used. As background, SQL Server 2005 allows user-defined types, procedures, functions, triggers, and so on to be written in managed languages. A few characteristics of the SQL Server 2005 environment point to the need for customized assembly binding: Assemblies are stored in the database, not in the file system. Managed code that implements user-defined types, procedures, and the like is compiled into assemblies just as you'd expect, but the assembly must be registered in SQL before it can be used. This registration process physically copies the contents of the assembly into the database. This self-contained nature of database applications makes them easy to replicate from server to server. The assemblies installed in SQL are the exact ones that must be run. SQL Server 2005 applications typically have very strict versioning requirements because of the heavy reliance on persisted data. For example, the return value from a managed user-defined function might be used to build an index used to optimize performance. It is imperative that only the exact assembly that was used to build the index is used when the application is run. If a reference to that assembly were somehow redirected through version policy, the index that was previously stored could become invalid. To support these requirements, SQL Server 2005 makes extensive use of the assembly loading manager to load assemblies out of the database instead of from the file system and to bypass many of the versioning rules that the CLR follows by default. It's important to notice, however, that not all assemblies are stored and loaded out of the database by SQL. 
The assemblies used in a SQL Server 2005 application fall into one of two categories: the assemblies written by customers that define the actual behavior of the application (the add-ins), and the assemblies written by Microsoft that ship as part of the Microsoft .NET Framework. In the SQL case, only the add-ins are stored in the databasethe Microsoft .NET Framework assemblies are installed and loaded out of the global assembly cache. In fact, it is often the case that a host will want to load only the add-ins in a custom fashion and let the default CLR behavior govern how the Microsoft .NET Framework assemblies are loaded. To support this idea, the assembly loading manager enables the host to pass in a list of assemblies that should be loaded in the normal, default CLR fashion. All other assembly references are directed to the host for resolution. Those assemblies that the host resolves can be loaded from any location in any format. These assemblies are returned from the host to the CLR in the form of a pointer to an IStream interface. For hosts that implement the assembly loading manager (that is, provide an implementation of IHostAssemblyManager when queried for it through IHostControl::GetHostManager), the process of binding generally works like this: As the CLR is running code, it often finds references to other assemblies that must be resolved for the program to run properly. These references can be either static in the calling assembly's metadata or dynamic in the form of a call to Assembly.Load or one of the other class library methods used to load assemblies. The CLR looks to see if the reference is to an assembly that the host has told the CLR to bind to itself (a Microsoft .NET Framework assembly in our SQL example). If so, binding proceeds as normal: version policy is applied, the global assembly cache is searched, and so on. If the reference is not in the list of CLR-bound assemblies, the CLR calls through the interfaces in the Assembly Manager (IHostAssemblyStore, specifically) to resolve the assembly. At this point, the host is free to load the assembly in any way and returns an IStream * representing the assembly to the CLR. In the SQL scenario, the assembly is loaded directly from the database. Figure 2-4 shows the distinction between how add-ins and the Microsoft .NET Framework assemblies are loaded in SQL Server 2005. Details of how to implement an assembly loading manager to achieve the customizations described here is provided in Chapter 8. The CLR hosting APIs are built to accommodate a variety of hosts, many of which will have different tolerances for handling failures that occur while running managed code in the process. For example, hosts with largely stateless programming models, such as ASP.NET, can use a process recycling model to reclaim processes deemed unstable. In contrast, hosts such as SQL Server 2005 and the Microsoft Windows shell rely on the process being stable for a logically infinite amount of time. The CLR supports these different reliability needs through an infrastructure that can keep a single application domain or an entire process consistent in the face of various situations that would typically compromise stability. Examples of these situations include a thread that fails to abort properly (because of a finalizer that loops infinitely, for example) and the inability to allocate a resource such as memory. In general, the CLR's philosophy is to throw exceptions on resource failures and thread aborts. 
However, there are cases in which a host might want to override these defaults. For example, consider the case in which a failure to allocate memory occurs in a region of code that might be sharing state across threads. Because such a failure can leave the domain in an inconsistent state, the host might choose to unload the entire domain instead of aborting just the thread from which the failed allocation occurred. Although this action clearly affects all code running in the domain, it guarantees that the rest of the domains remain consistent and the process remains stable. In contrast, a different host might be willing to allow the questionable domain to keep running and instead will stop sending new requests into it and will unload the domain later. Hosts use the failure policy manager to specify which actions to take in these situations. The failure policy manager enables the host to set timeout values for actions such as aborting a thread or unloading an application domain and to provide policy statements that govern the behavior when a request for a resource cannot be granted or when a given timeout expires. For example, a host can provide policy that causes the CLR to unload an application domain in the face of certain failures to guarantee the continued stability of the process as described in the previous example. The CLR's infrastructure for supporting scenarios requiring high availability requires that managed code library authors follow a set of programming guidelines aimed at proper resource management. These guidelines, combined with the infrastructure that supports them, are both needed for the CLR to guarantee the stability of a process. Chapter 11 discusses how hosts can customize CLR behavior in the face of failures and also describes the coding guidelines that library authors must follow to enable the CLR's reliability guarantees. The .NET Framework class libraries provide an extensive set of built-in functionality that hosted add-ins can take advantage of. In addition, numerous third-party class libraries exist that provide everything from statistical and math libraries to libraries of new user interface (UI) controls. However, the full extent of functionality provided by the set of available class libraries might not be appropriate in particular hosting scenarios. For example, displaying user interface in server programs or services is not useful, or allowing add-ins to exit the process cannot be allowed in hosts that require long process lifetimes. The host protection manager provides the host with a means to block classes, methods, properties, and fields offering a particular category of functionality from being loaded, and therefore used, in the process. A host can choose to prevent the loading of a class or the calling of a method for a number of reasons including reliability and scalability concerns or because the functionality doesn't make sense in that host's environment, as in the examples described earlier. You might be thinking that host protection sounds a lot like a security feature, and in fact we typically think of disallowing functionality to prevent security exploits. However, host protection is not about security. Instead, it's about blocking functionality that doesn't make sense in a given host's programming model. 
For example, you might choose to use host protection to prevent add-ins from obtaining synchronization primitives used to coordinate access to a resource from multiple threads because taking such a lock can limit scalability in a server application. The ability to request access to a synchronization primitive is a programming model concern, not a security issue. When using the host protection manager to disallow certain functionality, hosts indicate which general categories of functionality they're blocking rather than individual classes or members. The classes and members contained in the .NET Framework class libraries are grouped into categories based on the functionality they provide. These categories include the following: Shared state Library code that exposes a means for add-ins to share state across threads or application domains. The methods in the System.Threading namespace that allow you to manipulate the data slots on a thread, such as Thread.AllocateDataSlot, are examples of methods that can be used to share state across threads. Synchronization Classes or members that expose a way for add-in to hold locks. The Monitor class in the System.Threading namespace is a good example of a class you can use to hold a lock. Threading Any functionality that affects the lifetime of a thread in the process. Because it causes a new thread to start running, System.Threading.Thread.Start is an example of a method that affects thread lifetime within a process. Process management Any code that provides the capability to manipulate a process, whether it be the host's process or any other process on the machine. System.Diagnostics.Process.Start is clearly a method in this category. Classes and members in the .NET Framework that have functionality belonging to one or more of these categories are marked with a custom attribute called the HostProtectionAttribute that indicates the functionality that is exposed. The host protection manager comes into play by providing an interface (ICLRHostProtectionManager) that hosts use to indicate which categories of functionality they'd like to prevent from being used in the process. The attribute settings in the code and the host protection settings passed in through the host are examined at runtime to determine whether a particular member is allowed to run. If a particular member is marked as being part of the threading category, for example, and the host has indicated that all threading functionality should be blocked, an exception will be thrown instead of the member being called. Annotating code with the category custom attributes and using the host protection manager to block categories of functionality is described in detail in Chapter 12. The managers we've looked at so far have allowed the host to customize different aspects of the CLR. Another set of managers has a slightly different flavorthese managers enable a host to integrate its runtime environment deeply with the CLR's execution engine. In a sense, these managers can be considered abstractions over the set of primitives or resources that the CLR typically gets from the operating system (OS) on which it is running. More generally, the COM interfaces that are part of the hosting API can be viewed as an abstraction layer that sits between the CLR and the operating system, as shown in Figure 2-5. Hosts use these interfaces to provide the CLR with primitives to allocate and manage memory, create and manipulate threads, perform synchronization, and so on. 
When one of these managers is provided by a host, the CLR will use the manager instead of the underlying operating system API to get the resource. By providing implementations that abstract the corresponding operating system concepts, a host can have an extremely detailed level of control over how the CLR behaves in a process. A host can decide when to fail a memory allocation requested by the CLR, it can dictate how managed code gets scheduled within the process, and so on. The first manager of this sort that we examine is the memory manager. The memory manager consists of three interfaces: IHostMemoryManager, IHostMalloc, and ICLRMemoryNotificationCallback. The methods of these interfaces enable the host to provide abstractions for the following: Win32 and the standard C runtime memory allocation primitives Providing abstractions over APIs such as VirtualAlloc, VirtualFree, VirtualQuery, malloc, and free allow a host to track and control the memory used by the CLR. A typical use of the memory manager is to restrict the amount of memory the CLR can use within a process and to fail allocations when it makes sense in a host-specific scenario. For example, SQL Server 2005 operates within a configurable amount of memory. Oftentimes, SQL is configured to use all of the physical memory on the machine. To maximize performance, SQL tracks all memory allocations and ensures that paging never occurs. SQL would rather fail a memory allocation than page to disk. To track all allocations made within the process accurately, the SQL host must be able to record all allocations made by the CLR. When the amount of memory used is reaching the preconfigured limit, SQL must start failing memory allocation requests, including those that come from the CLR. The consequence of failing a particular CLR request varies with the point in time in which that request is made. In the least destructive case, the CLR might need to abort the thread on which an allocation is made if it cannot be satisfied. In more severe cases, the current application domain or even the entire process must be unloaded. Each request for additional memory made by the CLR includes an indication of what the consequences of failing that allocation are. This gives the host some room to decide which allocations it can tolerate failing and which it would rather satisfy at the expense of some other alternative for pruning memory. The low-memory notification available on Microsoft Windows XP and later versions Windows XP provides memory notification events so applications can adjust the amount of memory they use based on the amount of available memory as reported by the operating system. (See the CreateMemoryResourceNotification API in the Platform SDK for background.) The memory management interfaces provided by the CLR hosting API enable a host to provide a similar mechanism that allows a host to notify the CLR of low- (or high-) memory conditions based on a host-specific notion, rather than the default operating system notion. Although the mechanism provided by the operating system is available only on Windows XP and later versions, the notification provided in the hosting API works on all platforms on which the CLR is supported. The CLR takes this notification as a heuristic that garbage collection is necessary. In this way, hosts can use this notification to encourage the CLR to do a collection to free memory so more memory is made available from which to satisfy additional allocation requests. 
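To make the low-memory notification concrete, the fragment below sketches the notification plumbing of a host memory manager: the CLR hands the host an ICLRMemoryNotificationCallback, and the host invokes it when its own memory budget is nearly spent. The method names RegisterMemoryNotificationCallback and OnMemoryNotification and the eMemoryAvailableLow value reflect the .NET Framework 2.0 hosting headers as best recalled and should be verified against mscoree.h; this is a sketch, not code from the book.

// Fragment of a host memory manager (only the notification plumbing shown;
// the rest of the IHostMemoryManager implementation is omitted).
class CHostMemoryManager /* : public IHostMemoryManager */
{
    ICLRMemoryNotificationCallback *m_pMemoryCallback;

public:
    CHostMemoryManager() : m_pMemoryCallback(NULL) {}

    // The CLR registers its callback through the host's memory manager.
    HRESULT RegisterMemoryNotificationCallback(ICLRMemoryNotificationCallback *pCallback)
    {
        if (pCallback != NULL)
            pCallback->AddRef();
        m_pMemoryCallback = pCallback;
        return S_OK;
    }

    // Host-specific code calls this when its memory budget is nearly exhausted;
    // the CLR treats the notification as a hint that a collection would help.
    void SignalLowMemory()
    {
        if (m_pMemoryCallback != NULL)
            m_pMemoryCallback->OnMemoryNotification(eMemoryAvailableLow);
    }
};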
In addition to the memory manager, the CLR hosting API also provides a garbage collection manager that allows you to monitor and influence how the garbage collector uses memory in the process. Specifically, the garbage collection manager includes interfaces that enable you to determine when collections begin and end and to initiate collections yourself. We discuss the details of implementing both the memory and garbage collection managers in Chapter 13. The most intricate of the managers provided in the hosting APIs are the threading manager and the synchronization manager. Although the managers are defined separately in the API, it's hard to imagine a scenario in which a host would provide an implementation of the threading manager without implementing the synchronization manager as well. These managers work together to enable the host to customize the way managed code gets scheduled to run within a process. The purpose of these two managers is to enable the host to abstract the notion of a unit of execution. The first two versions of the CLR assumed a world based on physical threads that were preemptively scheduled by the operating system. In .NET Framework 2.0, the threading manager and synchronization manager allow the CLR to run in environments that use cooperatively scheduled fibers instead. The threading manager introduces the term task as this abstract notion of a unit of execution. The host then maps the notion of a task to either a physical operating system thread or a host-scheduled fiber. The scenarios in which these managers are used extensively are likely to be few, so I don't spend too much time discussing them in this book. However, the subject is interesting if for no other reason than the insight it provides into the inner workings of the CLR. The set of capabilities provided by the threading manager is quite extensiveenough to model a major portion of an operating system thread API such as Win32, with additional features specifically required by the CLR. These additional features include a means for the CLR to notify the host of times in which thread affinity is required and callbacks into the CLR so it can know when a managed task gets scheduled (or unscheduled), among others. The general capabilities of the threading manager are as follows: Task management Starting and stopping tasks as well as standard operations such as join, sleep, alert, and priority adjustment. Scheduling Notifications to the CLR that a managed task has been moved to or from a runnable state. When a task is scheduled, the CLR is told which physical operating system thread the task is put on. Thread affinity A means for the CLR to tell the host of specific window during which thread affinity must be maintained. That is, a time during which a task must remain running and must stay on the current thread. Delayed abort There are windows of time in which the CLR is not in a position to abort a task. The CLR calls the host just before and just after one of these windows. Locale management Some hosts provide native APIs for users to change or retrieve the current thread locale setting. The managed libraries also provide such APIs (see System.Globalization.CurrentCulture and CurrentUICulture in the Microsoft .NET Framework SDK). In these scenarios, the host and the CLR must inform each other of locale changes so that both sides stay synchronized. Task pooling Hosts can reuse or pool the CLR-implemented portion of a task to optimize performance. 
Enter and leave notifications Hosts are notified each time execution leaves the CLR and each time it returns. These hooks are called whenever managed code issues a PInvoke or Com Interoperability call or when unmanaged code calls into managed code. One feature that perhaps needs more explanation is the ability to hook calls between managed and unmanaged code. On the surface it might not be obvious how this is related to threading, but it ends up that hosts that implement cooperatively scheduled environments often must change how the thread that is involved in the transition can be scheduled. Consider the scenario in which an add-in uses PInvoke to call an unmanaged DLL that the host knows nothing about. Because of the information received by implementing the threading and synchronization abstractions, the host can cooperatively schedule tasks running managed code just fine. However, when control leaves that managed code and enters the unmanaged DLL, the host no longer can know what that code is going to do. The unmanaged DLL could include code that takes a lock on a thread and holds it for long periods of time, for example. In this case, managed code should not be cooperatively scheduled on that thread because the host cannot control when it will next get a chance to run. This is where the hooks come in. When a host receives the notification that control is leaving the CLR, it can switch the scheduling mode of that thread from the host-control cooperative scheduling mode to the preemptive scheduling mode provided by the operating system. Said another way, the host gives responsibility for scheduling code on that thread back to the operating system. At some later point in time, the PInvoke call in our sample completes and returns to managed code. At this point, the hook is called again and the host can switch the scheduling mode back to its own cooperatively scheduled state. I mentioned earlier that the threading manager and synchronization manager are closely related. The preceding example provides some hints as to why. The interfaces in the threading manager provide the means for the host to control many aspects of how managed tasks are run. However, the interfaces in the synchronization manager provide the host with information about how the tasks are actually behaving. Specifically, the synchronization manager provides a number of interfaces the CLR will use to create synchronization primitives (locks) when requested (or needed for internal reasons) during the execution of managed code. Knowing when locks are taken is useful information to have during scheduling. For example, when code blocks on a lock, it's likely a good time to pull that fiber off a thread and schedule another one that's ready to run. Knowing about locks helps a host tune its scheduler for maximum throughput. There's another scenario in which it's useful for a host to be aware of the locks held by managed tasks: deadlock detection. It's quite possible that a host can be running managed tasks and tasks written in native code simultaneously. In this case, the CLR doesn't have enough information to resolve all deadlocks even if it tried to implement such a feature. Instead, the burden of detecting and resolving deadlocks must be on the host. Making the host aware of managed locks is essential for a complete deadlock detection mechanism. 
Primarily for these reasons, the synchronization manager contains interfaces that provide the CLR with implementations of the following: Critical sections Events (both manual and auto-reset) Semaphores Reader/writer locks Monitors We dig into more details of these two managers in Chapter 14. We've now covered most of the significant functionality the CLR makes available to hosts through the hosting API. However, a few more features are worth a brief look. These features are discussed in the following sections. When assemblies are loaded domain neutral, their jit-compiled code and some internal CLR data structures are shared among all the application domains in the process. The goal of this feature is to reduce the working set. Hosts use the hosting interfaces (specifically, IHostControl) to provide a specific list of assemblies they'd like to have loaded in this fashion. Although domain-neutral loading requires less memory, it does place some additional restrictions on the assembly. Specifically, the code that is generated is slightly slower in some scenarios, and a domain-neutral assembly cannot be unloaded until the process exits. As such, hosts typically do not load all assemblies domain neutral. In practice, the set of assemblies loaded in this way often are the system assembliesadd-ins are almost never loaded domain neutral so they can be dynamically unloaded while the process is running. This is the exact model that hosts such as SQL Server 2005 follow. Domain-neutral code is covered in detail in Chapter 9. Hosts can provide the CLR with a thread pool by implementing the thread pool manager. The thread pool manager has one interface (IHostThreadPoolManager) and provides all the functionality you'd expect including the capability to queue work items to the thread pool and set the number of threads in the pool. The thread pool manager is described in detail in Chapter 14. Overlapped I/O can also be abstracted by the host using the I/O completion manager. This manager enables the CLR to initiate asynchronous I/O through the host and receive notifications when it is complete. For more information on the I/O completion manager, see the documentation for the IHostIoCompletionPort and ICLRIoCompletionPort interfaces in the .NET Framework SDK. The debugging manager provides some basic capabilities that enable a host to customize the way debuggers work when attached to the host's process. For example, hosts can use this manager to cause the debugger to group related debugging tasks together and to load files containing extra debugging information. For more information on the debugging manager, see the ICLRDebugManager documentation in the .NET Framework SDK. Application domains serve two primary purposes as far as a host is concerned. First, hosts use application domains to isolate groups of assemblies within a process. In many cases, application domains provide the same level of isolation for managed code as operating system processes do for unmanaged code. The second common use of application domains is to unload code from a process dynamically. Once an assembly has been loaded into a process, it cannot be unloaded individually. The only way to remove it from memory is to unload the application domain the assembly was in. Application domains are always created in managed code using the System.AppDomain class. However, the hosting interface ICLRRuntimeHost enables you to register an application domain manager[1] that gets called by the CLR each time an application domain is created. 
You can use your application domain manager to configure the domains that are created in the process. In addition, ICLRRuntimeHost also includes a method that enables you to cause an application domain to be unloaded from your unmanaged hosting code. [1] The term "manager" as used here can be a bit confusing given the context in which we've used it in the rest of the chapter. An application domain manager isn't a "manager" as specifically defined by the CLR hosting interfaces. Instead, it is a managed class that you implement to customize how application domains are used within a process. Application domains are such a central concept to hosts and other extensible applications that I dedicate two chapters to them. The first chapter (Chapter 5) provides an overview of application domains and provides guidelines to help you use them most effectively. The second chapter (Chapter 6) describes the various ways you can customize application domains to fit your application's requirements most closely. Hosts can register a callback with the CLR that gets called when various events happen when running managed code. Through this callback, hosts can receive notification when the CLR has been disabled in the process (that is, it can no longer run managed code) or when application domains are unloaded. More details on the CLR event manager are provided in Chapter 5.
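To tie the manager discussion together, here is a rough sketch of how a host typically wires a manager into the CLR: it starts the runtime, passes in an IHostControl implementation, and returns manager implementations from GetHostManager. The startup sequence in the trailing comment (CorBindToRuntimeEx, SetHostControl, Start) and the interface IDs are assumptions to be checked against mscoree.h; this is illustrative, not code from the book.

#include <mscoree.h>

// Sketch of a host control that hands out one custom manager.
class CHostControl : public IHostControl
{
    ULONG m_cRef;
    IHostMemoryManager *m_pMemoryManager;   // host-implemented manager (not shown here)

public:
    CHostControl(IHostMemoryManager *pMemMgr) : m_cRef(1), m_pMemoryManager(pMemMgr) {}

    // IHostControl: the CLR asks for each manager by IID during startup.
    HRESULT STDMETHODCALLTYPE GetHostManager(REFIID riid, void **ppObject)
    {
        if (riid == IID_IHostMemoryManager && m_pMemoryManager != NULL)
        {
            m_pMemoryManager->AddRef();
            *ppObject = m_pMemoryManager;
            return S_OK;
        }
        *ppObject = NULL;            // use the CLR/OS defaults for everything else
        return E_NOINTERFACE;
    }

    HRESULT STDMETHODCALLTYPE SetAppDomainManager(DWORD dwAppDomainID,
                                                  IUnknown *pUnkAppDomainManager)
    {
        return S_OK;                 // a host could cache the domain manager here
    }

    // IUnknown (minimal, not thread safe)
    HRESULT STDMETHODCALLTYPE QueryInterface(REFIID riid, void **ppv)
    {
        if (riid == IID_IUnknown || riid == IID_IHostControl) { *ppv = this; AddRef(); return S_OK; }
        *ppv = NULL;
        return E_NOINTERFACE;
    }
    ULONG STDMETHODCALLTYPE AddRef()  { return ++m_cRef; }
    ULONG STDMETHODCALLTYPE Release() { ULONG c = --m_cRef; if (c == 0) delete this; return c; }
};

// Host startup, error handling omitted (verify signatures against mscoree.h):
//   ICLRRuntimeHost *pHost = NULL;
//   CorBindToRuntimeEx(NULL, NULL, 0, CLSID_CLRRuntimeHost,
//                      IID_ICLRRuntimeHost, (PVOID *)&pHost);
//   pHost->SetHostControl(new CHostControl(pMyMemoryManager));
//   pHost->Start();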
http://flylib.com/books/en/4.331.1.20/1/
CC-MAIN-2015-06
en
refinedweb
This article explains the step by step procedure of how to use a .NET Assembly with COM Client. Step 1: Creating a .NET Assembly Project1.1 Visual Studio .NET IDE -> File -> New Project -> Visual Basic Projects -> Class Library1.2 Name the Application, for instance NetServer, this willbe our .NET Assembly Project,which will be consisting of at least one class called Class1.vb.1.3 Rename the class to NetClass from properties window and manuallyin code window.This class will hold all the functionality of .NET Component,in the form of few functions. Step 2: Adding FunctionalityOn the path of achieving COM Interoperability, Microsoft .NEToffers an attribute named ,which islocated inside the Micrsoft.VisualBasic namespaceand it makes the .NET Class available for use by a COM Client.It’s also a wise choice to add System.Runtime.InteropService namespace to the class, which offers various features to be used.The figure below shows the whole Code of .NET Assembly. Step 3: Set Property for COM InteroperabilitySelect NETServer project from Solution Explorer, Right Click ->Properties -> Configuration Properties -> Build -> Check on Register for COM Interop After setting the property, do Build The Solution. It will createa NetServer.dll (assembly) in your applications \bin folder. Step 4: Deploying for COM accessA .NET assembly which has been created can’t be used by a COM Client,because COM Client can access an object with help of a Type Library,which tells to the Client about the Object.In .NET world there is no such concept of Type LIbrary, but because of COM Interop feature, .NET Framework SDK offers a tool calledRegAsm.Exe which offers to make a type library and register it in Windows Registry, the very own style of COM.4.1 Access Command Prompt (cmd)4.2 Change the path to your application folderfor example : C:\>D: press enterD:\> cd Net Applications\NetServer\Bin D:\Net Applications\NetServer\Bin> 4.3 Type the following command to create a Type Library,which is a COM world’s buzz word and equivalentto Metadata of a .NET Assembly.D:\Net Applications\NetServer\Bin> RegAsm /tlb: NetServer.tlb NetServer.dll This command will create a Type Library named NetServer.tlbin the \bin folder of .NET Assembly application. Which wasautomatically registered in Windows Registry as well.Now the .NET Assembly is ready to use by a COM Client.Step 5: VB 6.0 Client to access .NET Assembly5.1 Open Visual Studio 6.0 -> Visual Basic 6.0 ->File -> New -> Standard.Exe 5.2 Drag one Label and two Command Buttons onto the form. Step 6: Set Reference to the Type Library Before consuming the class you build using .NETProject -> References -> Find the NetServer and select that. Step 7: Code to access the .NET Class after adding the code, run your VB 6.0 and click on the command buttons, you will see that your application communicating with .NET.
http://www.codeproject.com/Articles/11179/Using-NET-Assembly-with-COM-Client?fid=203934&df=90&mpp=50&noise=1&prof=True&sort=Position&view=None&spc=Relaxed
CC-MAIN-2015-06
en
refinedweb
looking for a graphical representation

Kevin Tysen (Ranch Hand, Joined: Oct 12, 2005, Posts: 255) posted Oct 26, 2009 16:57:11

I am trying to think of a good way to represent graphically a logical framework. Specifically, it's something like this: Suppose you have some Strings A, B, C, D, E, F, and another String K which you want to evaluate. You want to evaluate K like this:

K score is initially 0.
If K contains A or B, add 4 to the score.
If K contains C, add 10 to the score.
If K contains D, E, or F, add 5 to the score.

The program user wants to see a graphic representation of this logic. I thought of two possible ways to represent this logic graphically. One way is to put A through F on a table. In the first row is the number 4, then A and B. In the second row is 10 and C, and in the third row is 5, D, E, and F. Another way to represent this graphically is to put them in a JTree. The JTree has three nodes named '4', '10', and '5'. The 4 node contains A and B, the 10 node contains C, and the 5 node contains D, E, and F. Well, these are the two ways I thought of to represent the logic graphically. Any other ideas?

Craig Wood (Ranch Hand, Joined: Jan 14, 2004, Posts: 1535) posted Oct 27, 2009 23:17:20

import java.awt.*;
import java.awt.font.*;
import javax.swing.*;

public class AnIdea extends JPanel {
    Font font = new Font("Monospaced", Font.PLAIN, 18);
    Font checkFont = new Font("Monospaced", Font.PLAIN, 14);
    final int PAD = 20;
    final int VPAD = 5;
    final int CPAD = 30;
    final int SPAD = 125;
    final int NPAD = 105;

    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D g2 = (Graphics2D)g;
        g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                            RenderingHints.VALUE_ANTIALIAS_ON);
        FontRenderContext frc = g2.getFontRenderContext();
        LineMetrics lm = font.getLineMetrics("0", frc);
        float sh = lm.getAscent() + lm.getDescent();
        g2.setFont(font);
        g2.setPaint(Color.blue);
        g2.drawRect(0, 0, getWidth()-1, getHeight()-1);
        g2.setPaint(Color.black);
        // K
        float x = PAD;
        float y = PAD + lm.getAscent();
        g2.drawString("K", x, y);
        x += SPAD;
        g2.drawString("abcd", x, y);
        x += NPAD;
        g2.drawString("0", x, y);
        // A and B section
        y = drawSection(g2, y, sh, "Add 4:", "4", frc, lm);
        y = drawLine(g2, y, sh, "A", "b", "\u2611");
        y = drawLine(g2, y, sh, "B", "y", "\u2610");
        // C section
        y = drawSection(g2, y, sh, "Add 10:", null, frc, lm);
        y = drawLine(g2, y, sh, "C", "f", "\u2610");
        // D, E and F section
        y = drawSection(g2, y, sh, "Add 5:", "5", frc, lm);
        y = drawLine(g2, y, sh, "D", "a", "\u2611");
        y = drawLine(g2, y, sh, "E", "n", "\u2610");
        y = drawLine(g2, y, sh, "F", "x", "\u2610");
        // Total
        y += lm.getDescent() + VPAD;
        x = PAD + SPAD + NPAD + PAD;
        int x1 = PAD;
        int y1 = (int)(y);
        int x2 = (int)(x);
        int y2 = y1;
        g2.drawLine(x1, y1, x2, y2);
        y += sh;
        x = PAD;
        String s = "Total:";
        g2.drawString(s, x, y);
        x += SPAD + NPAD;
        s = "9";
        g2.drawString(s, x, y);
    }

    private float drawSection(Graphics2D g2, float y, float sh, String s1, String s2,
                              FontRenderContext frc, LineMetrics lm) {
        y += sh + VPAD;
        float x = PAD;
        g2.drawString(s1, x, y);
        float sw = (float)font.getStringBounds(s1, frc).getWidth();
        int x1 = PAD;
        int y1 = (int)(y + lm.getDescent());
        int x2 = (int)(x1 + sw);
        int y2 = y1;
        g2.drawLine(x1, y1, x2, y2);
        if(s2 != null) {
            x += SPAD + NPAD;
            g2.drawString(s2, x, y);
        }
        return y;
    }

    private float drawLine(Graphics2D g2, float y, float sh, String s1, String s2, String check) {
        y += sh + VPAD;
        float x = PAD;
        g2.drawString(s1, x, y);
        g2.setFont(checkFont);
        g2.drawString(check, x+CPAD, y);
        g2.setFont(font);
        x += SPAD;
        g2.drawString(s2, x, y);
        return y;
    }

    public static void main(String[] args) {
        AnIdea idea = new AnIdea();
        idea.setPreferredSize(new Dimension(400,400));
        JOptionPane.showMessageDialog(null, idea, "", -1);
    }
}

Kevin Tysen (Ranch Hand, Joined: Oct 12, 2005, Posts: 255) posted Oct 27, 2009 23:59:34

Wow, that is a nice program! Thank you. I'll take a close look at it and use some of the main ideas in the program.
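For the JTree alternative sketched in the original question (three score nodes '4', '10', and '5', each holding the strings that trigger that increment), a minimal Swing example might look like the code below. This is only an illustration of that idea, not code from the thread; the class name RuleTreeDemo and the root label are invented for the example.

import javax.swing.*;
import javax.swing.tree.DefaultMutableTreeNode;

public class RuleTreeDemo {
    public static void main(String[] args) {
        // Root of the rule set; children are the score increments,
        // grandchildren are the strings that trigger each increment.
        DefaultMutableTreeNode root = new DefaultMutableTreeNode("K scoring rules");

        DefaultMutableTreeNode add4 = new DefaultMutableTreeNode("4");
        add4.add(new DefaultMutableTreeNode("A"));
        add4.add(new DefaultMutableTreeNode("B"));

        DefaultMutableTreeNode add10 = new DefaultMutableTreeNode("10");
        add10.add(new DefaultMutableTreeNode("C"));

        DefaultMutableTreeNode add5 = new DefaultMutableTreeNode("5");
        add5.add(new DefaultMutableTreeNode("D"));
        add5.add(new DefaultMutableTreeNode("E"));
        add5.add(new DefaultMutableTreeNode("F"));

        root.add(add4);
        root.add(add10);
        root.add(add5);

        JTree tree = new JTree(root);
        JOptionPane.showMessageDialog(null, new JScrollPane(tree), "Rules", -1);
    }
}

A JTable version would be just as short: a three-row table whose first column holds the increment and whose remaining cells hold the matching strings.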
http://www.coderanch.com/t/468217/GUI/java/graphical-representation
CC-MAIN-2015-06
en
refinedweb
15 September 2011 02:54 [Source: ICIS news] TORONTO (ICIS)--Enerkem will supply methanol produced at its facility in ?xml:namespace> Enerkem CEO Vincent Chornet said the two companies have reached an off-take deal to that effect. He did not disclose volume or financial details. “The access to Methanex’s worldwide distribution network coupled with our ability to produce methanol from non-recyclable waste represents a timely opportunity for Enerkem in the development of its commercial activities,” Chornet said. Enerkem’s The plant will subsequently produce cellulosic ethanol from methanol. Commercial plant capacity is expected to be 36m litres/year (10m gal/year), Enerkem said. It did not comment on the plant’s expected start-up date. Last year, Methanex CEO Bruce Aitken joined Enerkem’s board of directors. For more on Methanex
http://www.icis.com/Articles/2011/09/15/9492492/canadas-enerkem-agrees-methanol-offtake-deal-with-methanex.html
CC-MAIN-2015-06
en
refinedweb
Java.io.StringBufferInputStream Class Introduction The Java.io.StringBufferInputStream class allows an application to create an input stream in which the bytes read are supplied by the contents of a string. Applications can also read bytes from a byte array by using a ByteArrayInputStream.Only the low eight bits of each character in the string are used by this class. This class has been deprecated by Oracle and should not be used any more. Class declaration Following is the declaration for Java.io.StringBufferInputStream class: public class StringBufferInputStream extends InputStream Field Following are the fields for Java.io.StringBufferInputStream class: protected String buffer -- This is the string from which bytes are read.. protected int count -- This is the number of valid characters in the input stream buffer. protected int pos -- This is the index of the next character to read from the input stream buffer. Class constructors Class methods Methods inherited This class inherits methods from the following classes: Java.io.InputStreams Java.io.Object
http://www.tutorialspoint.com/java/io/java_io_stringbufferinputstream.htm
CC-MAIN-2015-06
en
refinedweb
IRC log of ws-ra on 2011-03-01 Timestamps are in UTC. 20:30:18 [RRSAgent] RRSAgent has joined #ws-ra 20:30:18 [RRSAgent] logging to 20:30:20 [trackbot] RRSAgent, make logs public 20:30:20 [Zakim] Zakim has joined #ws-ra 20:30:22 [trackbot] Zakim, this will be WSRA 20:30:22 [Zakim] ok, trackbot, I see WS_WSRA()3:30PM already started 20:30:23 [trackbot] Meeting: Web Services Resource Access Working Group Teleconference 20:30:23 [trackbot] Date: 01 March 2011 20:30:40 [Zakim] +Wu_Chou 20:31:05 [Katy] Katy has joined #ws-ra 20:31:16 [Zakim] +[Microsoft] 20:31:55 [Zakim] + +44.196.281.aaaa 20:32:17 [Zakim] +asoldano 20:32:32 [Ram] Ram has joined #ws-ra 20:32:33 [Zakim] +Yves 20:34:26 [Zakim] +Tom_Rutt 20:34:50 [BobF] agenda: 20:35:26 [trutt] trutt has joined #ws-ra 20:35:27 [Katy] Topic: Appoval of Agenda 20:35:36 [dug] scribe: Katy 20:36:35 [Katy] Bob: Goal for CR vote 15th March 20:36:58 [Katy] Topic: Approval of minutes of F2F 20:37:08 [Katy] Resolution: Minutes approved 20:37:24 [dug] bots are sleep today 20:37:35 [dug] s/sleep/asleep/ 20:37:44 [Katy] Topic: 20:38:35 [asoldano] yes 20:38:42 [Katy] Bob: Any objection to accepting proposal in comment no 1 of proposal. 20:39:57 [Katy] Doug: Describes proposal 20:41:38 [Katy] Gil: Feel uneasy about this because assigning semantic meaning to the empty string 20:42:43 [Katy] ... How about only making the Get the special case 20:43:01 [Katy] Doug: What if someone wants to use the empty string as id value 20:43:13 [Katy] Gil: That's associating special value to "" 20:43:34 [li] li has joined #ws-ra 20:43:40 [Katy] ... we could have special string that means "unidentified" and special case that 20:44:00 [Katy] ... within the W3C namespace 20:44:49 [trutt] q+ 20:44:57 [Katy] Doug: I don't think this is special semantics as it's indicated no identifier 20:45:00 [BobF] act tom 20:45:05 [BobF] ack tr 20:45:07 [Katy] Bob: Empty string might mean no value 20:46:17 [Katy] Tom: What if there's an overloaded identifier that happens to be ""? 20:47:15 [Katy] Gil: The point is a symbol to identify no useful Id - whether it's a "" or special URI 20:48:44 [Katy] Doug: Initial problem was that the client doesn't know whether it needs an identifier or not. 20:49:14 [Katy] Gil: So when types with identifier defined those must be used, I am thinking of types with no identifier 20:49:20 [Katy] ... specified 20:52:46 [Katy] Gil: Problem is we don't know all the dialects there may be some types where we don't have an identifier. We should have a way to put these things without an identifier if people choose not to - but it's their problem 20:52:55 [Katy] Doug: But that kills interop 20:52:59 [trutt] q+ 20:53:04 [dug] q+ 20:53:12 [Katy] Gil: disagree 20:53:18 [BobF] ack tr 20:53:35 [Katy] Tom: In what scenario would someone not have an id for their metadata section? 20:53:57 [BobF] ack d 20:55:13 [Katy] Doug: If you know enough about the metadata to 'put' it, you must put the appropriate identifier. If it's optional then the clients always need to ask for everything 20:55:44 [Katy] ... either mandate the use of identifier or there's no point in it. 
20:56:17 [trutt] q+ 20:56:27 [BobF] ack t 20:56:57 [Katy] q+ 20:57:18 [BobF] ack k 20:58:05 [trutt] If you require an id, but allow it to be "", it will all work 20:58:51 [Katy] Katy: Id should not be optional or clients would have to assume it's not there 20:58:55 [BobF] that means that the set of values for the id attribute is empty 20:59:58 [Katy] Bob: Empty identifier means set of values is empty/not present (in terms of XSLT test) 21:00:01 [trutt] "" is a value, it will test for presence 21:00:25 [Dave] Dave has joined #ws-ra 21:00:45 [Yves] optimization or not, absent and null value must be described 21:01:13 [dug] q+ 21:01:15 [Dave] Dave Snelling is lurking on tthe IRC only. 21:01:24 [trutt] "" is not a null value, it is a valid string "empty string" , not null 21:01:38 [BobF] zakim, I would like to report a lurker 21:01:38 [Zakim] I don't understand 'I would like to report a lurker', BobF 21:02:27 [trutt] if you test for presence of the value with "", it will be true in xpath. To test for "" you have to actually do a sting compare operation with "" as the compared value 21:02:37 [Katy]. 21:02:57 [BobF] ack d 21:03:56 [gpilz] gpilz has joined #ws-ra 21:04:21 [trutt] q+ 21:04:35 [BobF] Java will return an empty string if there is no value defined 21:04:40 [BobF] ack tr 21:04:49 [Katy] Doug: To ease confusion factor, I would like to require the identifier to be set (to "" or syntax string) else folk will think that absence = wildcard. 21:05:51 [gpilz] q+ 21:06:14 [BobF] ack gp 21:06:30 [Katy] Tom: Schema point, technically speaking a default would work but it would cause more problems to have a default than to use "" - the latter makes it easier for xpath 21:07:18 [Katy] Gil: I think we have agreed the following 1) Put needs the type; 2) in some cases value of the type is default which="" 21:07:49 [Katy] ... we need to say whether it is legal to put empty string for a dialect that mex provides an identifier to 21:07:54 [trutt] q+ 21:08:06 [Katy] Doug: I agree, I think we have come full circle back to the proposal 21:08:44 [BobF] ack tr 21:08:52 [dug] <a foo='1'/> @foo != @foo2 21:09:02 [dug] oops, <a foo''/> 21:09:20 [dug] according to xml spy anyway 21:09:22 21:09:24 [gpilz] oops 21:09:28 [trutt] q+ 21:10:19 [BobF] ack tr 21:10:46 [gpilz] - make @Identifier a required attribute of mex:MetadataSection 21:10:46 [gpilz] - you MAY use "" as the value of @Identifier except for those Dialects defined by WS-MEX 21:10:46 [gpilz] - keep @Identifier optional on the mex:GetMetadata operation 21:10:46 [gpilz] - mex:GetMetadata w/o @Identifier (not "") means match ALL @Identifiers 21:11:09 [BobF] q? 21:11:11 [dug] q+ 21:11:37 [BobF] ack dug 21:12:15 [wuchou] wuchou has joined #ws-ra 21:12:24 [Katy] Doug: I will work on this text before the next meeting when we can review 21:12:56 [Katy] Bob: Do we agree directionally so Bob can work on final text 21:13:26 [Katy] Action: Doug to write up text based on comment one with some changes to 2nd bullet 21:13:26 [trackbot] Created ACTION-177 - Write up text based on comment one with some changes to 2nd bullet [on Doug Davis - due 2011-03-08]. 21:14:05 [Ram] q+ 21:14:24 [Katy] Topic: 21:14:38 [BobF] ack ram 21:14:44 [Katy] Ram: Currently collecting feedback, will have information in the next few days 21:15:09 [Katy] ... Wait until next call prior to confirming final answer 21:15:17 [Katy] Bob: Defer to the next call 21:15:53 [dug] Puffin 21:15:59 [Katy] Topic: 21:16:21 [Ram] q+ 21:16:44 [BobF] ack ram 21:17:24 [Katy] Ram: No further testing required? 
21:18:19 [Ram] q+ 21:18:40 [BobF] ack ram 21:18:43 [Katy] Bob: We will be producing new specs so we should crank through all the tests again 21:19:17 [Katy] Ram: Previous mex issue need aditional testing? 21:19:51 [Katy] Doug: Difficult to answer because the issue is clarifying the semantics 21:20:03 [Katy] ... so to some may be no change 21:20:28 [Katy] Ram: Recommend that we don't do unecessary testing as it has big resource issues 21:21:18 [gpilz] q+ 21:21:37 [Yves] we definitely have to test it, but that's what CR is all about 21:21:41 [Katy] Bob: I would prefer to be conservative in our testing, even if just syntax change 21:21:54 [dug] q+ 21:22:05 [Katy] Yves: Any changes to element needs to be re-tested if after CR 21:22:12 [BobF] ack gp 21:24:40 [BobF] ack dug 21:25:57 [Katy] Bob: If we change the spec, we should retest. Now we have gone through the process once, it should be easier 21:26:10 [Katy] ... consider this when accepting the proposals 21:26:53 [Katy] ... we could defer 11776 so we can decide whether the test impact too big 21:27:45 [dug] 21:27:48 [Katy] Topic: Misc issues 21:28:14 [Katy] Bob: need to apply for the MIME type. 21:28:22 [Katy] ... done in link above 21:28:23 [trutt] given example xml doc 21:28:25 [trutt] <?xml version="1.0" encoding="UTF-8"?> 21:28:27 [trutt] <doc> 21:28:29 [trutt] <element atr1=""> "" </element> 21:28:30 [trutt] </doc> 21:28:31 [trutt] The following xpath returns the element: 21:28:33 [trutt] //element[@atr1=""] 21:28:34 [trutt] the following xpath does not return the element (no match) //element[@atr1=" "] 21:28:36 [trutt] Thus the "" is not comparable with " " 21:28:52 [Katy] Topic: Items at risk 21:30:03 [Katy] Bob: Items at risk are no longer at risk as we have adequate implementations for WS-Eventing and WS-Enum 21:30:28 [trutt] q+ 21:31:04 [BobF] ack tr 21:31:35 [Zakim] -Tom_Rutt 21:31:50 [trutt] I just lost my connection , is the meeting over? 21:32:00 [Ram] not yet 21:32:06 [Ram] We are talking about next meeting. 21:32:07 [BobF] talking about next meeting 21:32:24 [Katy] Topic: Next week's meeting 21:33:00 [dug] q+ 21:33:07 [BobF] ack dug 21:33:08 [Katy] Bob: Clash with cloud management meeting on 8th so we will have next meeting on the 15th and meeting on 22nd 21:33:14 [Zakim] +Tom_Rutt 21:33:34 [Ram] WS-Enumeration test coverage analysis to be completed by Microsoft. 21:33:40 [gpilz] q+\ 21:33:45 [dug] q+ gil 21:33:55 [dug] q- \ 21:34:42 [li] is Darth Vadar speaking as well? 21:34:49 [dug] zakim, who is making noise? 21:35:00 [Zakim] dug, listening for 10 seconds I heard sound from the following: Bob_Freund (25%), Gilbert_Pilz (41%) 21:35:32 [dug] q+ 21:35:46 [Ram] Test coverage analysis actions: 21:35:48 [Ram] WS-Enumeration test coverage analysis to be completed by Microsoft. 21:35:56 [Ram] WS-Eventing test coverage analysis to be analysis by Avaya. 21:36:02 [Ram] WS-Transfer/WS-Fragment test coverage analysis to be analysis by IBM. 21:36:04 [BobF] ack du 21:36:08 [Ram] WS-MEX test coverage analysis to be analysis by Oracle. 21:36:13 [BobF] ack g 21:36:46 [Katy] Gil: Why aren't faults defined in the WSDL in the W3C specs? 21:39:05 [Katy] Tom: If SOAP faults they can happen anywhere so don't need to be defined in the WSDL 21:40:52 [dug] answer: we're lazy 21:40:59 [dug] answer: 'cause 21:41:11 [dug] answer: go away, use REST 21:41:30 [BobF] just log them 21:43:43 [dug] would we need a union to express multiple faults could be returned? 
21:43:52 [Katy] Gil: will look into this and decide whether issue or not next meeting 21:43:53 [Zakim] -Tom_Rutt 21:43:55 [Zakim] -asoldano 21:43:55 [Katy] Katy has left #ws-ra 21:44:02 [Zakim] -Wu_Chou 21:44:04 [Zakim] -[Microsoft] 21:44:05 [Zakim] -Yves 21:44:06 [Zakim] - +44.196.281.aaaa 21:44:07 [Zakim] -Bob_Freund 21:44:12 [Zakim] -Gilbert_Pilz 21:44:13 [Zakim] WS_WSRA()3:30PM has ended 21:44:14 [Zakim] Attendees were Bob_Freund, Doug_Davis, Gilbert_Pilz, Wu_Chou, [Microsoft], +44.196.281.aaaa, asoldano, Yves, Tom_Rutt 21:44:16 [BobF] rrsagent, generate minutes 21:44:16 [RRSAgent] I have made the request to generate BobF 21:54:05 [gpilz] gpilz has left #ws-ra 22:39:39 [trutt_] trutt_ has joined #ws-ra
http://www.w3.org/2011/03/01-ws-ra-irc
CC-MAIN-2015-06
en
refinedweb
Thomas A. Anastasio 22 November 1999 Skip Lists were developed around 1989 by William Pugh1 of the University of Maryland. Professor Pugh sees Skip Lists as a viable alternative to balanced trees such as AVL trees or to self-adjusting trees such as splay trees. The find, insert, and remove operations on ordinary binary search trees are efficient, , when the input data is random; but less efficient, , when the input data are ordered. Skip List performance for these same operations and for any data set is about as good as that of randomly-built binary search trees - namely In an ordinary sorted linked list, find, insert, and remove are in because the list must be scanned node-by-node from the head to find the relevant node. If somehow we could scan down the list in bigger steps (``skip'' down, as it were), we would reduce the cost of scanning. This is the fundamental idea behind Skip Lists. In simple terms, Skip Lists are sorted linked lists with two differences: We speak of a Skip List node having levels, one level per forward reference. The number of levels in a node is called the size of the node. In an ordinary sorted list, insert, remove, and find operations require sequential traversal of the list. This results in performance per operation. Skip Lists allow intermediate nodes in the list to be ``skipped'' during a traversal - resulting in an expected performance of per operation. To introduce the Skip List data structure, let's look at three list data structures that allow skipping, but are not truly Skip Lists. The first of these, shown in Figure 1 allows every other node to be skipped in a traversal. The second, shown in Figure 2 additionally allows every fourth node to be skipped. The third, shown in Figure 3 additionally allows every eighth node to be skipped, but suggests further development of the idea to skipping every -th node. For all three pseudo-skip-lists, there is a header node and the nodes do not all have the same number of forward references. Every node has a reference to the next node, but some have additional references to nodes further along the list. Here is pseudo-code for the find operation on each list. This same algorithm will be used with real Skip Lists later. Comparable & find(const Comparable & X) { node = header node for (reference level of node from (nodesize - 1) down to 0) while (the node referred to is less than X) node = node referred to if (node referred to has value X) return node's value else return item_not_found } We start at the highest level of the header node and follow the references along this level until the value at the node referred to is equal to or larger than X. At this point, we switch to the next lower level and continue the search. Eventually, we shall be dealing with a reference at the lowest level of a node. If the next node has the value X, then we return that value; otherwise we return a value that signals unsuccessful search. Figure 1 shows a 16-node list in which every second node has a reference two nodes ahead. The value stored at each node is shown below it (and corresponds in this example to the position of the node in the list). The header node has two levels; it's no smaller than largest node in the list. Node 2 has a reference to node 4, two nodes ahead. Similarly for nodes 4, 6, 8, etc - every second node has a reference two nodes ahead. It's clear that the find operation does not need to visit each node. It can skip over every other node, then do a final visit at the end. 
The number of nodes visited is therefore no more than . For example, the nodes visited in scanning for the node with value 15 would be 2, 4, 6, 8, 10, 12, 14, 16, 15, a total of . Follow the algorithm for find on page for this simple example to be sure you understand it thoroughly. The second example is a list in which every second node has a reference two nodes ahead and additionally every fourth node has a reference four nodes ahead. Such a list is shown in Figure 2. The header is still no smaller than the largest node in the list. The find operation can now make bigger skips than those for the list in Figure 1. Every fourth node is skipped until the search is confined between two nodes of size 3. At this point, as many as three nodes may need to be scanned. It is also possible that some nodes will be visited more than once using the algorithm on page . The number of nodes visited2 is no more than . As an example, look at the nodes visited in scanning for the node with value of 15. These are 4, 8, 12, 16, 14, 16, 15 for a total of . This final example is for a list in which some skip lengths are even larger. Every -th node, , has a forward reference nodes ahead. For example, every node has a reference 2 nodes ahead; every node has a reference 8 nodes ahead, etc. Figure 3 shows a short list of this type. Once again, the header is no smaller than the largest node on the list. It is shown arbitrarily large in the Figure. Suppose the Skip List in Figure 3 contained 32 nodes and consider a search in it. Working down from the highest level, we first encounter node 16 and have cut the search in half. We then search again, one level down in either the left or right half of the list, again cutting the remaining search in half. We continue in this manner till we find the sought-after node (or not). This is quite reminiscent of binary search in an array and is perhaps the best way to intuitively understand why the maximum number of nodes visited in this list is in . This data structure is looking pretty good, but there's a serious problem with it for the insert and remove operations. The work required to reorganize the list after an insertion or deletion is in . For example, suppose that the first element is removed in Figure 3. Since it is necessary to maintain the strict pattern of node sizes, values from 2 to the end must be moved toward the head and the end node must be removed. A similar situation occurs when a new element is added to the list. This is where the probabilistic approach of a true Skip List comes into play. A Skip List is built with the same distribution of node sizes, but without the requirement for the rigid pattern of node sizes shown. It is no longer necessary to maintain the rigid pattern by moving values around after a remove or insert operation. Pugh shows that with high probability such a list still exhibits behavior. The probability that a given Skip List will perform badly is very small. Figure 4 shows the list of Figure 3 with the nodes reorganized. The distribution of node sizes is exactly the same as that of Figure 3; the nodes just occur in a different pattern. In this example, the pattern would require that 50% of the nodes have just one reference, 25% have just two references, 12.5% have just three references, etc. The distribution of node sizes is maintained, but the strict order is not required. Figure 4 shows one way this might work out. 
Of course, this is probabilistic, so there are many other possible node sequences that would satisfy the required probability distribution. When inserting new nodes, we choose the size of the new node probabilistically. Every Skip List has an associated (and fixed) probability that determines the distribution of nodes. A fraction of the nodes that have at least references also have references. The Skip List does not have to be reorganized when a new element is inserted. Suppose we have an infinitely-long Skip List with associated probability . This means that a fraction, , of the nodes with a forward reference at level also have a forward reference at level . Let be the fraction of nodes having precisely forward references ( i.e., is the fraction of nodes of size ). Then, and Since Recall that is the sum of the geometric progression with first term and common ratio . Thus, Therefore, . Since , we can write Example: In the situation shown in Figure 4, . Therefore, of the nodes with at least one reference have two references; one-half of those with at least two references have three references, etc. You should work out the distribution for a SkipList with associated probability of to be sure you understand how distributions are computed. int generateNodeLevel(double p, int maxLevel) { int level = 1; while (drand48() < p) level++; return (level > maxLevel) ? maxLevel : level; } Note that the level of the new node is independent of the number of nodes already in the Skip List. Each node is chosen only on the basis of the Skip List's associated probability. When the associated probability is , the average number of comparisons that must be done for find, is . For example, for a list of size 65,536, the average number of nodes to be examined is 34.3 for and 35 for . This is a tremendous improvement over an ordinary sorted list for which the average number of comparisons is . The level of the header node is the maximum allowed level in the SkipList and is chosen at construction. Pugh shows that the maximum level should be chosen as . Thus, for , the maximum level for a SkipList of up to 65,536 elements should be chosen no smaller than . The probability that an operation will take longer than expected is a function of the probability associated with the list. For example, Pugh calculates that for a list with and 4096 elements, the probability that the actual time will exceed the expected time by a factor of 3 is less than one in 200 million. The relative time and space performance of a Skip List depends on the probability level associated with the list. Pugh suggests that a probability of 0.25 be used in most cases. If the variability of performance is important, he suggests using a probability of 0.5 (variability decreases with increasing probability). Interestingly, the average number of references per node is only 1.33 when a probability of 0.25 is used. A binary search tree, of course, has 2 references per node, so Skip Lists can be more space-efficient. We will examine SkipList methods insert, and remove in some detail below. Pseudocode for find was given on page . Insertion and deletion involve searching the SkipList to find the insertion or deletion point, then manipulating the references to make the relevant change. When inserting a new element, we first generate a node that has had its level selected randomly. The SkipList has a maximum allowed level set at construction time. The number of levels in the header node is the maximum allowed. 
For convenience in searching, the SkipList keeps track of the maximum level actually in the list. There is no need to search levels above this actual maximum. In Skip Lists, we need pointers to all the see-able previous nodes between the insertion point and the header. Imagine standing at the insertion point, looking back toward the header. All the nodes you can see are the see-able nodes. Some nodes may not be see-able because they are blocked by higher nodes. Figure 5 shows an example. We construct a backLook node that has its forward pointers set to the relevant see-able nodes. This is the type of node returned by the findInsertPoint method. The public insert(const Comparable &) method decides on the new node's size by random choice, then calls the overloading private method insert(const Comparable &,int,bool &) to do all the work. -no_navigation skip_lists.tex The translation was initiated by Thomas Anastasio on 2000-02-19
http://www.csee.umbc.edu/courses/undergraduate/341/fall01/Lectures/SkipLists/skip_lists/skip_lists.html
CC-MAIN-2015-06
en
refinedweb
20 February 2008 10:35 [Source: ICIS news] ?xml:namespace> (adds updates throughout) SINGAPORE (ICIS news)--Samsung Engineering has signed additional procurement and construction contracts with Saudi Kayan Petrochemical worth $341m (€232m) to build a polypropylene (PP) plant as part of a new petrochemicals complex at Al-Jubail, a company spokesman said on Wednesday. Its earlier $49m engineering contract for the project had been converted to a lumpsum turnkey basis, the spokesman said, adding that this was the first of seven projects at Kayan to be converted. Samsung signed the other two contracts with Kayan for the project’s construction and procurement, he said. One of the won (W) 239.2bn ($253.3m) orders was signed with the South Korean group, while the other $88m contract was concluded with Samsung’s subsidiary in Saudi Arabia, the spokesman, who declined to be named, added. The 350,000 tonne/year project, which would be based on Basell's technology, would be completed by the end of August 2009, he said. Samsung also won a contract in May last year to build an amines plant for Kayan and both were in talks to convert this to lumpsum turnkey basis, he added. The PP plant will form part of Saudi Kayan’s $10bn petrochemicals complex at Al-Jubail, which is slated to make 4m tonnes/year of chemicals, including propylene, PP, ethylene glycol and butane-1. Saudi Basic Industries Corp (SABIC) owns a 35% stake in Saudi Kayan, while Kayan has 20%. The remaining 45% is held by public shareholders. ($1 = W944.23/€0.68)
http://www.icis.com/Articles/2008/02/20/9102144/samsung-wins-341m-saudi-pp-plant-deal.html
CC-MAIN-2015-06
en
refinedweb
14 September 2012 11:06 [Source: ICIS news] SINGAPORE (ICIS)--China National Offshore Oil Corp (CNOOC) is expected to put its 1.06m tonne/year oil terminal into operation at Dongying port in ?xml:namespace> The terminal comprises 500,000 cubic metres (cbm) of crude tanks, 420,000cbm of fuel oil tanks, 60,000cbm of gasoline tanks and 80,000cbm of gasoil tanks, according to the source. Supportive facilities including two 50,000 deadweight tonnes (dwt) of crude/fuel oil jetties and two 5,000dwt oil product jetties will become operational at the same time, the source said. CNOOC is now planning three crude pipelines to link Dongying port and its subsidiary refineries in The company is preparing to build a 1.5m tonne/year pipeline to transport crude from Dongying port to Dongying Petrochemical, which runs a 1.2m tonne/year refinery at Binzhou city in Shandong province, said the source. The company also plans to build a similar pipeline to deliver crude to Shandong Haihua Group’s 1m tonne/year refinery at Weifang city in CNOOC is expected to start building a 5m tonne/year crude pipeline from Dongying port to the 2.2m tonne/year refinery operated by CNOOC Offshore Bitumen at the beginning of 2013, a source from CNOOC Offshore Bitumen said. CNOOC has a combined 4.9m tonne/year refining capacity in CNOOC Offshore Bitumen is expected to expand its crude processing capacity to 4.7m tonne/year with a 2.5m tonne/year crude distillation unit (CDU) scheduled to come on line this October, the company source said. Meanwhile, Dongying Petrochemical is negotiating with CNOOC and the local government about the construction of a 5m tonne/year CDU and some secondary units at Dongying city. Shandong Haihua Group’s 5m tonne/year crude refining project at Weifang city is pending approval from
http://www.icis.com/Articles/2012/09/14/9595469/chinas-cnooc-may-start-up-dongying-oil-terminal-in-late.html
CC-MAIN-2015-06
en
refinedweb