| text (stringlengths 20 to 1.01M) | url (stringlengths 14 to 1.25k) | dump (stringlengths 9 to 15, ⌀) | lang (stringclasses, 4 values) | source (stringclasses, 4 values) |
|---|---|---|---|---|
Exploring the Design Space of Distributed and Peer-to-Peer Systems: Comparing the Web, TRIAD, and Chord/CFS
Stefan Saroiu, P. Krishna Gummadi, Steven D. Gribble, University of Washington.
Introduction
Peer-to-peer systems are the latest addition to a family of distributed systems whose goal is to share resources across their participants. Previous members of this family include the WWW, distributed file systems, and even the telephony network. To compare these systems, one can decompose them along the following design axes, which are an extension of those proposed by Shoch [9] and Saltzer [7]:
- Content name: A name describes what a user is looking for, such as a file name in a file system.
- Host address: An address describes where a resource is; for example, an IP address describes where a host resides in the Internet.
- Routing mechanism: A route describes how to get to a destination. A routing mechanism (such as BGP across Internet autonomous systems, or ASs) is used to discover or disseminate routes.
- Network topology: Topology describes the set of physical or logical links between hosts.
- Lookup: Bindings between names and addresses are registered in the system. Participants use a lookup mechanism that resolves a name into an address, based on the registered bindings.
These design axes represent one possible framework to reason about the architecture of a system. Although this framework is clear in the abstract, in practice real systems often blur the distinction between some of these axes. For example, NAT blurs routing and lookup by introducing a name translation mechanism so that non-routable IP addresses can be bridged to the routable Internet. Additionally, IP addresses are converted into MAC Ethernet addresses in a manner similar to name translation. However, we believe that our decomposition is useful both when designing and analyzing a system, and that, by mapping design choices along these axes, we can learn about the trade-offs made by each system. In this paper, we compare the designs of three different distributed architectures: the WWW, TRIAD [5], and Chord/CFS [2] (as a representative of recently proposed peer-to-peer architectures [3, 6, 10]). We then derive several performance, security, and robustness implications that result from their design choices.
The World Wide Web
The WWW is perhaps the most ubiquitous, popular, and successful distributed system. The WWW enables clients to retrieve hyperlinked content. Names: Web content names are drawn from an infinite space of globally unique Uniform Resource Locators (URLs), which are structured as a fully qualified domain name (FQDN) combined with a locally unique relative URL [1, 4]. The right to bind an FQDN to an IP address is controlled by hierarchical delegation, and the right to bind relative URLs is controlled by local policy. Addresses: WWW addresses are globally unique, hierarchically organized IP addresses of Internet hosts (servers, clients, caches, or intermediate routers). There is a finite but large number of IP addresses; addresses are allocated in ranges from a centralized authority, and address assignment rights are delegated locally within these ranges. Routing: Routing in the WWW is a combination of Internet routing protocols, including BGP, IS-IS, and OSPF. Routing decisions are driven by business policy and performance. The ability to route to an IP address is the result of advertising that address on a routing protocol.
There is little control over the right to advertise, as there is typically a lack of authentication and access control in routing protocols.
[Figure 1. The TRIAD architecture. A name request is forwarded by intermediate nodes toward the best content replica.]
Topology: The WWW topology is based on the physical topology of Internet hosts. This topology is roughly hierarchical, consisting of interconnected autonomous systems and subnetworks within them. Lookup: FQDNs within URLs are resolved to IP addresses through the domain name system (DNS); relative URLs are resolved and bound locally by servers. DNS itself is another distributed system; however, the WWW could replace the DNS lookup mechanism with no semantic loss, as is proposed by TRIAD.
TRIAD
TRIAD defines a content layer that replaces the Web's address-based routing with a name-based routing protocol. An individual piece of content is advertised by each server replica, so that lookup requests are directed from clients along intermediate routers (nodes) to servers, and back along the same path. Each node maintains a set of name-to-next-hop mappings, just as an IP router maps address prefixes to next hops. When a request for a content name arrives, a node looks up the name and forwards the request toward the best server replica. Once the request reaches a node responsible for a server replica, that node sends back a response containing the server's address (Figure 1). Names: Similar to the Web, TRIAD resources are objects spread across servers. Although TRIAD's content namespace does not require a specific structure, content names are routing table entries, and therefore need to aggregate. Because URLs are hierarchical, TRIAD suggests using the Web's URL namespace for content naming. Addresses: TRIAD's addresses are a composition of two namespaces: globally unique IP addresses of ASs, and locally unique IP addresses within each AS. This results in a finite but very large address space. While the inter-AS address space is controlled by a centralized authority, each intra-AS address space is managed locally by its AS. Routing: TRIAD uses a name-based, BGP-like routing protocol called NBRP, which distributes name suffix reachability messages. The ability to route to a name is the result of advertising that name across the NBRP protocol. Topology: TRIAD's topology can be arbitrary, consisting of logical links between nodes over which NBRP messages flow. For performance reasons, it is suggested that TRIAD's topology should reflect the physical Internet topology. Lookup: TRIAD unifies lookup and routing: resolving a name into an address is achieved by routing the name to its destination. Once the destination address is found, the lookup reply is routed back to the source on the same path.
Chord/CFS
All hosts in Chord/CFS-style peer-to-peer systems serve three roles; they act as servers, clients, and intermediate routers. This symmetry of roles has several design implications. Names: In Chord, each piece of content is named with a Chord identifier, obtained by hashing the content into 160 bits. The content namespace is flat, large, and uniformly populated. Addresses: Chord's address namespace is structurally identical to the content namespace. Addresses are obtained by concatenating a host's IP address with a small virtual host number, and hashing the result into a 160-bit address. Because addresses have 160 bits, they are probabilistically globally unique in the system. Routing: Since the content and address namespaces are equivalent, routing can be thought of both as address-based, like the Web, or name-based, like TRIAD.
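A minimal sketch of how such identifiers could be derived is shown below (Python, illustrative only: the 160-bit width and the IP-address-plus-virtual-host-number input come from the text above, while the use of SHA-1 and the exact input formatting are assumptions):
import hashlib

ID_BITS = 160  # identifier width given in the text

def chord_id(data):
    # SHA-1 happens to produce exactly 160 bits; any hash with a uniform,
    # fixed-width output would do for this illustration.
    return int.from_bytes(hashlib.sha1(data).digest(), "big")

# A content name is hashed into the flat identifier space.
content_id = chord_id(b"/some/published/file")

def host_address(ip, virtual_host):
    # A host address: the host's IP address concatenated with a small
    # virtual host number, hashed into the same 160-bit space.
    return chord_id("{0}:{1}".format(ip, virtual_host).encode())

addr = host_address("18.26.4.9", 2)   # hypothetical IP and virtual host number
assert 0 <= content_id < 2**ID_BITS and 0 <= addr < 2**ID_BITS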
Unlike the Web or TRIAD, any participating Chord host acts as an intermediate router. Routing in Chord is simple: each host directs queries to the neighbor whose address is closest to the name according to a pre-determined lexicographic order. Topology: Chord's overlay network topology is a deterministic function of participating peers' addresses. Each peer has a successor and predecessor based on a total ordering of addresses, and each
peer maintains a logarithmically-sized finger table of connections to other peers. Although stale finger table entries are tolerable, they act as shortcut routes and their freshness ensures efficient routing. Lookup: Like TRIAD, Chord unifies lookup and routing. Since the content and address namespaces are equivalent, the identity function is sufficient to bind names to addresses: a name is bound to the address which is the content name. Name resolution is done by routing the name through the network.
[Figure 2. Routing and lookup are unified in TRIAD and Chord. Chord's name and address spaces are identical, and its topology is a deterministic function of names/addresses.]
| | WWW | TRIAD | Chord/CFS |
|---|---|---|---|
| Name | URL | URL | Chord ID |
| Address | IP address | Two-level hierarchy | Chord ID |
| Routing / Lookup | DNS; address-based | Name-based | Name/address-based; implicit, deterministic name/address => topology |
| Topology | Physical, arbitrary | Logical, arbitrary | Logical, deterministic |
Summary
A summary of the systems' designs is given in Figure 2. This decomposition has served to illustrate important differences between these systems. For instance, in Chord, binding of names to addresses is done by the identity function and the topology is a deterministic function of host addresses. In both Chord and TRIAD, lookup is unified with routing. We now discuss the implications of these design differences.
Names and Addresses
In this section, we explore how content names and host addresses are created and bound to each other in our three systems.
The World Wide Web
Binding names to IP addresses is controlled by the use of DNS as a lookup mechanism. The hierarchical nature of DNS imposes structure on content names, but it also serves to delegate binding rights to authorities that own subtrees of the namespace. While any Web server can create an infinite number of URLs, binding them to IP addresses is restricted by the ability to register FQDNs within a particular DNS subtree. As a result, a malicious Web server cannot pollute the global namespace by registering a large number of dummy or otherwise harmful names, nor can it attack another Web server's content by duplicating its URLs. There is a similar hierarchical delegation of IP address assignment rights within the Web. An individual may be able to control the assignment of a large but finite number of IP addresses; for instance, MIT owns a class A subnet, allowing it to assign 2^24 addresses, but all within a fixed range. However, it is difficult for a malicious host to hijack an IP address outside of its allocated range, as this involves injecting false routing advertisements into the network. Although they both are hierarchical, the name and address namespaces in the Web are completely independent: control over one does not grant control over the other.
TRIAD
In TRIAD, lookup and routing are unified. As a result, the binding of a name to an address is accomplished by advertising a route across the TRIAD network, and lookup is performed by routing a name to its destination. Similar to the Web, the ability to create a name in TRIAD is unrestricted. Restrictions on binding rights must be enforced by the routing infrastructure; to date, this issue remains unresolved. Addresses in TRIAD are the composition of globally routable IP addresses assigned to ASs, and locally routable IP addresses within ASs. The authority to assign routable addresses is therefore split across two levels.
Similar to the Web, a centralized naming authority delegates globally visible IP ranges to ASs, and ASs enforce local policies for address assignment. Therefore, individual hosts in TRIAD typically cannot affect globally visible IP assignment. Because name routing in TRIAD involves routing advertisements and routing table formation, the ability to aggregate names is important for scalability. As a result, the content namespace must be hierarchical (or otherwise aggregatable) in practice. For this reason, TRIAD content names are modeled after URLs.
Chord/CFS-style Peer-to-Peer Systems
Since the peer-to-peer content namespace is flat, the responsibility for managing content is randomly distributed across the address namespace. The insertion of a name-to-address binding (i.e. publish-
ing content) into the system causes some host to accept the responsibility and incur the cost of managing that content. Thus, unless the right to insert a name-to-address binding is controlled, any host can cause unbounded amounts of effort and storage to be expended across the system. Furthermore, attacks on specific victims are also possible. For example, an attacker could overwhelm a targeted victim address with content, or even cause the targeted host to store undesirable or illicit content. In contrast, in the Web and TRIAD, binding a name to an address does not cause the host to store the content, making such attacks impossible. The set of content names associated with an address is also deterministic. If hosts are allowed to select their own addresses, they can use this deterministic mapping to control access to specific content names. In Chord/CFS, the ability to create an address is restricted by limiting hosts to using hashes of their IP addresses concatenated with a small virtual host number. An attacker who has assignment rights over O(number of Chord nodes / max virtual hosts) IP addresses can control arbitrary content in the system.
Routing, Lookup and Topology
In this section, we discuss the consequences of unifying routing and lookup in TRIAD and Chord/CFS, in contrast to the Web. Furthermore, because Chord's topology is a function of its address space, several unexpected implications emerge affecting the system's redundancy, availability, fault-tolerance and security.
The World Wide Web
The structure of the WWW is mapped directly onto the Internet's physical topology: Web servers and clients are addressed by their IP addresses, and the routing of data between them is performed using IP routing protocols such as BGP, IS-IS, and OSPF. Infrastructure such as content-delivery networks and caching hierarchies extend the name-to-address lookup mechanism, but the result of a lookup is still an IP address of the host that will serve the data. In the WWW, routing policy can be selected independent of both physical topology and content. This flexibility allows policy to be driven by efficiency, business rules, or even local physical characteristics. Routing policy can be altered without affecting naming, binding, or content placement. It is common for ISP operators to adjust policy to achieve a financial or traffic balancing goal; however these adjustments are functionally transparent to the rest of the system. It is possible to engineer redundancy (and hence higher availability) at two levels in the Web. At the routing level, redundant physical routes provide alternate paths for data transport between clients and servers. At the name binding level, binding the same name to multiple addresses allows clients or middleware to fail over to an alternate address if one destination becomes unavailable. The endpoints of a Web transfer (servers and clients) are, in general, physically distinct from routers. This physical separation of roles has several benefits. Different degrees of trust can be associated with different roles; for example, core Internet routers are more protected and trustworthy than Web clients or servers. Hosts can be provisioned and optimized for their specific roles; a high-speed router needs different hardware, OS, and software support than a Web server or client. Finally, side-effects of host failures are isolated with respect to the role they play.
A Web server failure does not affect the routability of IP addresses, and a router failure doesn't affect content availability (unless the failure partitions the network).
TRIAD
In TRIAD, because routing is the content name-to-address lookup mechanism, routing policy can no longer be selected independently of content. If TRIAD's network topology mirrors the physical topology of the Internet, as suggested by the authors, then an efficient routing policy is enough to enable clients to route requests to their topologically nearest content replica. This only works because TRIAD can route on arbitrary topologies, unlike Chord/CFS-style peer-to-peer systems, as discussed below. TRIAD also supports two levels of redundancy. Multiple name-to-address bindings are attainable by replicating content on additional routable destinations; if one destination fails, the content is available at the replicas' addresses. This replication technique is possible because TRIAD routing uses content names rather than host addresses. In addition, from a given source, there may be multiple routes to a destination name. Thus, the failure of a link in TRIAD does not necessarily cause content to become unavailable. Because TRIAD supports routing over arbitrary topologies, it is possible to construct a topology in
which content servers are never intermediate nodes in a route, and therefore servers do not need to participate in the routing of requests. Thus, content hosting and routing are still separable roles, enabling the same separations of trust, provisioning, and failure as the Web.
Chord/CFS-style Peer-to-Peer Systems
The topology of Chord/CFS-style peer-to-peer systems is a deterministic function of the set of participating addresses. As a side-effect, routing tables need not be advertised across the system, eliminating one cause of overhead. Routing tables, approximated by finger tables in Chord, are constructed by each peer upon its entry to the system, and lazily updated to reflect the departure of its neighbors. If finger tables are kept up-to-date, the carefully chosen topology bounds lookup route lengths by log(# peers). The content name and address namespaces in Chord/CFS are unified, which allows binding to be the identity function: the content name is the address towards which a peer routes requests. When combined with Chord's deterministic topology, this implies that all peers are expected to serve both as routers and content destinations. These roles are inseparable: a peer cannot choose an address that will relieve it of routing responsibilities, and the topology cannot be engineered to relieve content destinations of routing responsibilities. However, roles no longer need to be explicitly assigned, and the topology need not be explicitly constructed; they are determined as peers join and leave the system, vastly simplifying and decentralizing the administration of the system. Redundancy in Chord/CFS can occur at multiple levels. Because binding is the identity function, it is impossible to bind the same content name to multiple addresses. However, a naming convention can assign aliases to any given content name; unlike TRIAD or the WWW, redundancy at this level is not transparent to the user, since it is exposed in the content namespace. A second level of redundancy exists within the overlay itself. There are on the order of log(# peers) mutually disjoint routes between any two given addresses. As long as routes fail independently, this provides a high degree of availability to the system. The Chord/CFS network is an overlay that maps down to a physical IP network. Redundancy can be added to the physical network, but since the overlay topology is a function of Chord addresses that involves a randomly distributed hash function, physical locality is diffused throughout the overlay. Accordingly, it is difficult to predict the effect of physical network redundancy on the overlay network. For example, the failure of a network link will manifest itself as multiple, randomly scattered link failures in the overlay. The diffusion of physical links across the logical Chord network tends to amplify the bad properties of a system, but not its good properties. If any link within a lookup path has low bandwidth, high latency, or low availability, the entire path suffers. Conversely, all links within a path must share the same good property for the path to benefit from it. Thus, a single bad physical link can infect many routes. As was measured in [8], 20% of the hosts in popular file-sharing peer-to-peer systems connect to the Internet over modems.
Since Chord overlay paths traverse essentially random physical links, a simple calculation reveals that for a network of 10,000 peers with similar characteristics to those in [8], there is a 79% probability that a lookup request encounters at least one modem. As another example, it is possible for a single physical link failure in the Internet to cause a large network partition. Consider a worst-case failure that separates an AS from the rest of the Internet: as long as all Web content within that AS is replicated outside of it, all content is available to all non-partitioned clients. However, in Chord, the number of failed routes that this single link failure will cause is proportional to the number of Chord addresses hosted within the partitioned AS, and these failed routes will be randomly distributed across both peers and content. A final implication of the deterministic nature of routes in Chord/CFS-style systems is that it is possible for an attacker to construct a set of addresses that, if inserted into the system, will intercept all lookup requests coming from a particular member of the system. Even though mechanisms exist to prevent a peer from selecting arbitrary addresses, if a peer can insert enough addresses, it can (probabilistically) surround or at least become a neighbor of any other peer.
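The 79% figure above can be reproduced with a quick back-of-the-envelope check. The sketch below is illustrative only; the half-of-log2(N) hop-count model for an average Chord lookup is an assumption layered on top of the 20% modem fraction reported in [8]:
import math

peers = 10_000           # network size used in the text
modem_fraction = 0.20    # fraction of modem-connected hosts, from [8]

# An average Chord lookup traverses roughly (1/2) * log2(N) overlay hops,
# each landing on an essentially random peer.
hops = math.ceil(0.5 * math.log2(peers))     # about 6.6, rounded up to 7

# Probability that at least one of those hops hits a modem-connected peer.
p_modem = 1.0 - (1.0 - modem_fraction) ** hops
print("P(lookup touches at least one modem) = {:.0%}".format(p_modem))  # ~79%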
Summary
We presented a design decomposition of the WWW, TRIAD [5] and Chord/CFS [2] (as representative of recent peer-to-peer architectures [3, 6, 10]). This decomposition allowed us to describe fundamental system design differences: (1) in Chord/CFS, the content and address namespaces are equivalent, as opposed to the WWW and TRIAD; (2) Chord's network topology is a deterministic function of its content and address namespace; and (3) unlike the WWW, in both TRIAD and Chord, lookup and routing are unified. These differences have unexpected consequences, some of which can have serious implications for these systems' availability, security, redundancy and fault-tolerance (Figure 3).
[Figure 3. Impact of architectural choices on the properties of the WWW, TRIAD and Chord/CFS: a table contrasting the three systems along access control (localized bindings in a hierarchical, decoupled name/address space, versus global bindings, versus a system where a single host can force others to do work and namespace control is equivalent to address-space control), content replication (multiple user-transparent bindings of the same name versus multiple user-aware bindings of different names), path redundancy (some alternate network paths with the ability to provision for targeted content versus many alternate paths with no provisioning, since locality is diffused), security (different levels of trust for different roles versus a single role and single level of trust, where servers are routers and routers are servers), and failures (router failures not affecting content availability and server failures not affecting routing, versus server failure equalling router failure and link failures diffusing throughout the overlay).]
References
[1] T. Berners-Lee, L. Masinter, and M. McCahill. RFC 1738 - Uniform Resource Locators (URL), December 1994.
[2] F. Dabek, M. F. Kaashoek, D. Karger, R. Morris, and I. Stoica. Wide-area cooperative storage with CFS. In Proceedings of the 18th ACM Symposium on Operating Systems Principles (SOSP 2001), Lake Louise, AB, Canada, October 2001.
[3] P. Druschel and A. Rowstron. Storage management and caching in PAST, a large-scale, persistent peer-to-peer storage utility. In Proceedings of the 18th ACM Symposium on Operating Systems Principles (SOSP 2001), Lake Louise, AB, Canada, October 2001.
[4] R. Fielding. RFC 1808 - Relative Uniform Resource Locators, June 1995.
[5] M. Gritter and D. R. Cheriton. An Architecture for Content Routing Support in the Internet. In Proceedings of the 3rd Usenix Symposium on Internet Technologies and Systems (USITS), San Francisco, CA, USA, March 2001.
[6] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Shenker. A Scalable Content-Addressable Network. In Proceedings of the ACM SIGCOMM 2001 Technical Conference, San Diego, CA, USA, August 2001.
[7] J. Saltzer. RFC 1498 - On the Naming and Binding of Network Destinations, August 1993.
[8] S. Saroiu, P. K. Gummadi, and S. D. Gribble. A Measurement Study of Peer-to-Peer File Sharing Systems. In Proceedings of the Multimedia Computing and Networking Conference (MMCN), San Jose, CA, USA, January 2002.
[9] J. F. Shoch. Inter-Network Naming, Addressing, and Routing. In Proceedings of IEEE COMPCON, pages 72-79, Washington, DC, USA, December 1978. Also in K. Thurber (ed.), Tutorial: Distributed Processor Communication Architecture, IEEE Publ. #EHO 152-9, 1979.
[10] B. Zhao, K. Kubiatowicz, and A. Joseph. Tapestry: An Infrastructure for Fault-Resilient Wide-Area Location and Routing. Technical Report UCB//CSD-01-1141, University of California at Berkeley, April 2001.
| http://docplayer.net/1804932-exploring-the-design-space-of-distributed-and-peer-to-peer-systems-comparing-the-web-triad-and-chord-cfs.html | CC-MAIN-2017-30 | en | refinedweb
Ngl.free_color
Removes a color entry from a workstation.
Prototype
Ngl.free_color(workstation, color_index)
Arguments
workstation
An identifier returned from calling Ngl.open_wks.
color_index
An integer scalar specifying a color index.
Return value
None
Description
This function frees the specified color index on the specified workstation. When the color index is freed the color associated with that index is undefined and the result of any subsequent use of that index without redefining it will produce unpredictable results. All other colors and color indices except for the freed one remain unchanged. A new color can be assigned to the freed index by using either Ngl.set_color or Ngl.new_color. See the example below.
See Also
Ngl.new_color, Ngl.set_color, Ngl.get_named_color_index, Ngl.draw_colormap, Ngl.retrieve_colormap
Examples
The following script produces eight labeled color boxes showing how to free a color index and redefine it.
import Ngl
#
# Open a workstation.
#
wks_type = "ps"
wks = Ngl.open_wks(wks_type,"free_color")
#
# Re-define the first nine colors in the default colormap
# (the rainbow colormap with 190 defined colors)
#
Ngl.set_color(wks,0,1.,0.,0.)
Ngl.set_color(wks,1,0.,1.,0.)
Ngl.set_color(wks,2,0.,0.,1.)
Ngl.set_color(wks,3,0.,1.,1.)
Ngl.set_color(wks,4,1.,0.,1.)
Ngl.set_color(wks,5,1.,1.,0.)
Ngl.set_color(wks,6,0.,0.,0.)
Ngl.set_color(wks,7,1.,0.5,0.)
Ngl.set_color(wks,8,1.,1.,1.)
#
# Define some text resources for subsequent labels.
#
tx_res = Ngl.Resources()
tx_res.txFont = "Helvetica-Bold"
tx_res.txFontColor = 8
#
# Free color index 2. The color associated with this index is now
# undefined and index 2 is a free index. All other colors and color
# indices remain unchanged.
#
Ngl.free_color(wks,2)
#
# Define gray using Ngl.new_color. This color value will be assigned
# to the first available free color index, namely index 2 that was
# just freed in this case.
#
Ngl.new_color(wks, 0.8, 0.8, 0.8)
#
# Draw and label color boxes using the first eight colors.
# Notice that the color associated with color index 2 is now gray.
#
ci = 0
delx = 0.50
dely = 0.25
pl_res = Ngl.Resources()
for i in xrange(2):
  xl = i*delx
  xr = xl+delx
  for j in xrange(4):
    yb = j*dely
    yt = yb+dely
    x = [xl,xr,xr,xl]
    y = [yb,yb,yt,yt]
    pl_res.gsFillColor = ci
    Ngl.polygon_ndc(wks,x,y,pl_res)
    Ngl.text_ndc(wks, str(ci), xl+0.5*delx, yb+0.5*dely, tx_res)
    ci = ci+1
Ngl.frame(wks)
Ngl.end()
| http://www.pyngl.ucar.edu/Functions/Ngl.free_color.shtml | CC-MAIN-2017-30 | en | refinedweb
My task was to create an object of a class, initialize it, and output it (using a pointer to the class). This code compiles perfectly, but the output doesn't appear. I would really appreciate any help. Thank you in advance!
#include <iostream>
#include <string>
using namespace std;
class family
{
public:
void setWife(string w)
{w = wife;};
string getWife()
{return wife;};
void setHusband(string h)
{husband = h;};
string getHusband()
{return husband;};
void setSon(string s)
{s = son;};
string getSon()
{return son;};
void setDaughter1(string d1)
{d1 = daughter1;};
string getDaughter1()
{return daughter1;};
void setDaughter2(string d2)
{daughter2 = d2;};
string getDaughter2()
{return daughter2;};
double* getPointer()
{return &pointer;};
void initialize()
{
setWife("Shirley Collin");
setHusband("Donald Collin");
setSon("Collin Collin");
setDaughter1("Harriet Collin");
setDaughter2("Hillary Collin");
}
friend void output(family* Collin);
private:
string wife;
string husband;
string son;
string daughter1;
string daughter2;
double pointer;
};
void output(family* Collin)
{cout << "Husband is " <<Collin->getHusband()<< endl;
cout << "wife is " << Collin ->getWife() << endl;
cout << "son is " << Collin->getSon() << endl;
cout << "daughter1 is " << Collin->getDaughter1() << endl;
cout << "daughter2 is " << Collin->getDaughter2()<< endl;
};
int main()
{family Collin;
Collin.initialize();
family *pointer = new family;
output (pointer);
cin.ignore();
}
family Collin; Collin.initialize();
This constructs an instance of the family class, and initializes it with the values defined in the initialize() method.
family *pointer = new family; output (pointer);
This constructs a second instance of the family class, does not initialize it in any way, and calls the output() method to display the contents of this completely uninitialized second instance of the family class.
This is why this program produces no useful output.
You probably want to replace these four lines with:
family *pointer=new family; pointer->initialize(); output(pointer); | https://codedump.io/share/KO6GAUHX7rVO/1/object-pointer-doesn39t-work | CC-MAIN-2017-30 | en | refinedweb |
29 October 2010 10:08 [Source: ICIS news]
DUSSELDORF (ICIS)--Korean polymer additive major Songwon Industrial has made two strategic moves to strengthen its positions in China and India, the company said on Friday.
Jongho Park, chairman of the board, said his firm had “finalised a 30% stake with the option to increase to 50%” in Tangshan Baiful Chemical of Tianjin, China.
Tangshan makes secondary thioester antioxidants. The deal was signed at a press conference at the K 2010 – the international trade fair for the plastics and rubber industry – in Dusseldorf, Germany.
The company said the new joint venture, called Songwon Baifu, would become operational in the first quarter of 2011, subject to legal approvals.
Songwon Baifu will sell thioesters directly to the Chinese market and use Songwon’s global network for sales outside the country. It will have a capacity of 6,000 tonnes/year.
Away from the show, Songwon and HPL India also signed a letter of intent to form a joint venture - Songwon HPL Additives.
HPL will transfer all of its business and assets related to polymer stabilisers to a new company that will be owned 40% by HPL and 60% by Songwon, the company said.
This was expected to be operational by the first quarter of 2011.
Speaking on the sidelines of the meeting, Park said the new ventures would help Songwon meet the growing needs for antioxidants and ultraviolet protection in the rapidly expanding polymer markets of the Middle East and China.
He said Songwon had paid around $20m (€14.4m) in total on the two deals.
This tool can be used to compile Qt projects, designed for versions
4.x.y and higher. It is not usable for Qt3 and older versions, since some
of the helper tools (moc, uic) behave differently.
For activating the tool "qt4", you have to add its name to the Environment constructor, like this
env = Environment(tools=['default','qt4'])
On its startup, the Qt4 tool tries to read the variable
QT4DIR from the current Environment and
os.environ. If it is not set, the value of
QTDIR (in Environment/
os.environ) is
used as a fallback.
So, you either have to explicitly give the path of your Qt4 installation to the Environment with
env['QT4DIR'] = '/usr/local/Trolltech/Qt-4.2.3'
or set the
QT4DIR as environment variable in your
shell.
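Anticipating the two-stage setup described further below, a small sketch of that choice could look like this (illustrative only; the fallback path is just an example):
import os

# Stage 1: plain environment; prefer a QT4DIR exported in the shell,
# otherwise fall back to a hard-coded installation path.
qtEnv = Environment()
qtEnv['QT4DIR'] = os.environ.get('QT4DIR', '/usr/local/Trolltech/Qt-4.2.3')

# Stage 2: add the qt4 tool, which picks up the QT4DIR set above.
qtEnv.Tool('qt4')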
Under Linux, "qt4" uses the system tool
pkg-config for automatically setting the required
compile and link flags of the single Qt4 modules (like QtCore, QtGui,...).
This means that
you should have pkg-config installed, and
you additionally have to set PKG_CONFIG_PATH in your shell environment, such that it points to $QT4DIR/lib/pkgconfig (or $QT4DIR/lib for some older versions).
Based on these two environment variables (QT4DIR and PKG_CONFIG_PATH), the "qt4" tool initializes all
QT4_* construction variables listed in the Reference
manual. This happens when the tool is "detected" during Environment
construction. As a consequence, the setup of the tool gets a two-stage
process, if you want to override the values provided by your current shell
settings:
# Stage 1: create plain environment
qtEnv = Environment()
# Set new vars
qtEnv['QT4DIR'] = '/usr/local/Trolltech/Qt-4.2.3'
qtEnv['ENV']['PKG_CONFIG_PATH'] = '/usr/local/Trolltech/Qt-4.2.3/lib/pkgconfig'
# Stage 2: add qt4 tool
qtEnv.Tool('qt4')
Based on the requirements above, we suggest a simple ready-to-go setup as follows:
SConstruct
# Detect Qt version
qtdir = detectLatestQtDir()

# Create base environment
baseEnv = Environment()
#...further customization of base env

# Clone Qt environment
qtEnv = baseEnv.Clone()
# Set QT4DIR and PKG_CONFIG_PATH
qtEnv['ENV']['PKG_CONFIG_PATH'] = os.path.join(qtdir, 'lib/pkgconfig')
qtEnv['QT4DIR'] = qtdir
# Add qt4 tool
qtEnv.Tool('qt4')
#...further customization of qt env

# Export environments
Export('baseEnv qtEnv')

# Your other stuff...
# ...including the call to your SConscripts
In a SConscript
# Get the Qt4 environment
Import('qtEnv')
# Clone it
env = qtEnv.Clone()
# Patch it
env.Append(CCFLAGS=['-m32'])  # or whatever
# Use it
env.StaticLibrary('foo', Glob('*.cpp'))
The detection of the Qt directory could be as simple as directly assigning a fixed path
def detectLatestQtDir():
    return "/usr/local/qt4.3.2"
or a little more sophisticated
# Tries to detect the path to the installation of Qt with
# the highest version number
def detectLatestQtDir():
    if sys.platform.startswith("linux"):
        # Simple check: inspect only '/usr/local/Trolltech'
        paths = glob.glob('/usr/local/Trolltech/*')
        if len(paths):
            paths.sort()
            return paths[-1]
        else:
            return ""
    else:
        # Simple check: inspect only 'C:\Qt'
        paths = glob.glob('C:\\Qt\\*')
        if len(paths):
            paths.sort()
            return paths[-1]
        else:
            return os.environ.get("QTDIR","")
The following SConscript is for a simple project with some cxx files, using the QtCore, QtGui and QtNetwork modules:
Import('qtEnv')
env = qtEnv.Clone()
env.EnableQt4Modules([
    'QtGui',
    'QtCore',
    'QtNetwork'
])
# Add your CCFLAGS and CPPPATHs to env here...
env.Program('foo', Glob('*.cpp'))
For the basic support of automocing, nothing needs to be done by the
user. The tool usually detects the Q_OBJECT macro and calls the “moc” executable accordingly.
If you don't want this, you can switch off the automocing by a
env['QT4_AUTOSCAN'] = 0
in your SConscript file. Then, you have to moc your files explicitly, using the Moc4 builder.
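If automocing is switched off as shown above, a minimal sketch of the explicit alternative could look like this (illustrative; 'myclass.h' is a placeholder and the exact output naming of the Moc4 builder is an assumption):
env['QT4_AUTOSCAN'] = 0

# Run moc by hand on the header that contains Q_OBJECT; the builder
# returns the generated source, which is then compiled into the program.
moc_sources = env.Moc4('myclass.h')
env.Program('foo', Glob('*.cpp') + moc_sources)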
You can also switch to an extended automoc strategy with
env['QT4_AUTOSCAN_STRATEGY'] = 1
Please read the description of the
QT4_AUTOSCAN_STRATEGY variable in the Reference manual
for details.
For debugging purposes, you can set the variable
QT4_DEBUG with
env['QT4_DEBUG'] = 1
which outputs a lot of messages during automocing.
The header files with setup code for your GUI classes are not
compiled automatically from your
.ui files. You always
have to call the Uic4 builder explicitly like
env.Uic4(Glob('*.ui'))
env.Program('foo', Glob('*.cpp'))
Resource files are not built automatically, you always have to add
the names of the
.qrc files to the source list for your
program or library:
env.Program('foo', Glob('*.cpp')+Glob('*.qrc'))
For each of the Resource input files, its prefix defines the name of
the resulting resource. An appropriate “-name” option is added to the call of the rcc executable by default.
You can also call the Qrc4 builder explicitly as
qrccc = env.Qrc4('foo') # ['foo.qrc'] -> ['qrc_foo.cc']
or (overriding the default suffix)
qrccc = env.Qrc4('myprefix_foo.cxx','foo.qrc') # -> ['qrc_myprefix_foo.cxx']
and then add the resulting cxx file to the sources of your Program/Library:
env.Program('foo', Glob('*.cpp') + qrccc)
The update of the
.ts files and the conversion to
binary
.qm files is not done automatically. You have to
call the corresponding builders on your own.
Example for updating a translation file:
env.Ts4('foo.ts','.') # -> ['foo.ts']
By default, the .ts files are treated as precious targets. This means that they are not removed prior to a rebuild, but simply get updated. Additionally, they do not get cleaned on a “scons -c”. If you want to delete the translation files on the “-c” SCons command, you can set the variable “QT4_CLEAN_TS” like this
env['QT4_CLEAN_TS']=1
Example for releasing a translation file, i.e. compiling it to a
.qm binary file:
env.Qm4('foo') # ['foo.ts'] -> ['foo.qm']
or (overriding the output prefix)
env.Qm4('myprefix','foo') # ['foo.ts'] -> ['myprefix.qm']
As an extension both, the Ts4() and Qm4 builder, support the definition of multiple targets. So, calling
env.Ts4(['app_en','app_de'], Glob('*.cpp'))
and
env.Qm4(['app','copy'], Glob('*.ts'))
should work fine.
Finally, two short notes about the support of directories for the Ts4() builder. You can pass an arbitrary mix of cxx files and subdirs to it, as in
env.Ts4('app_en',['sub1','appwindow.cpp','main.cpp'])
where
sub1 is a folder that gets scanned
recursively for cxx files by
lupdate. But like this,
you lose all dependency information for the subdir, i.e. if a file inside
the folder changes, the .ts file is not updated automatically! In this
case you should tell SCons to always update the target:
ts = env.Ts4('app_en',['sub1','appwindow.cpp','main.cpp'])
env.AlwaysBuild(ts)
Last note: specifying the current folder “.” as input to Ts4() and storing the resulting .ts file in the same directory leads to a dependency cycle! You then have to store the .ts and .qm files outside of the current folder, or use Glob('*.cpp') instead.
| http://www.scons.org/wiki/Qt4Tool?action=AttachFile&do=view&target=manual.html | CC-MAIN-2014-42 | en | refinedweb
1) It's like polluting a traditional program's variable space
with stuff the application did not explicitly cause -- it makes
debugging more difficult (and confusing if the results of the
Ant execution is published in a readonly format like a website).
2) The previous statement might seem trivial if you only use Ant
to run build scripts. However, I personally dig Ant because I
can use it to do other kinds of things. In particular, Ant is
the foundation script and launch harness for our test management
system. Being able to remove the "Ant fixture bits" from the
test configuration and other system under test bits (and this
includes Ant components) is really important to us. The more
kruft Ant spews into the "Ant fixture bits" the more difficult
it becomes for a QA person to pick out what's important when
something fails.
Unless we limit what Ant components the QA/Dev team can use
(we *really* don't want to do this), scrubbing what gets captured
as the "Ant fixture bits" becomes difficult.
Is there no way to remove the scoped properties once the target
and/or task container is finished?
OK, enuf whining.
----------------
The Wabbit
At 12:40 PM 10/8/2004, you wrote:
>Ok, here are my responses:
>
> > From: Dominique Devienne [mailto:[email protected]]
> >
>[SNIP]
> > 2) All these uniquely named properties go on living after
> > the macro has executed. That pollutes the namespace.
> >
>
>Yes it does. But I still have to see a good argument on why shall
>that bother anyone. Unless you are talking about millions of executions
>within one project context. You can always mitigate this in
>some very complex build by using <antcall/> as a way fence out
>chuncks of temporary properties. But I would like to see a good
>example in whch this pollution is a real problem.
>
>[SNIP]
>Jose Alberto
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected] | http://mail-archives.apache.org/mod_mbox/ant-dev/200410.mbox/%[email protected]%3E | CC-MAIN-2014-42 | en | refinedweb |
6v is way too low for the Axon; the Axon needs around 6.25v or so, better feed it 6.5v+ though. The power drain is way too much at the voltage regulator's minimum input.
Absolute minimum required voltage is 5.35V. If at any time the voltage drops below this amount, even for a milli-second, the Axon II will reset. The recommended battery voltage is 6V to 7.2V (a 6V 1000mAh+ NiMH battery works great). Maximum voltage is 16V; however, most servos will have reduced lifetimes at voltages above ~6V and can quickly fail at voltages ~7.2V+.
#include "sys/axon2.h" // I am using an Axon 2#include "uart.h"#include "rprintf.h"//make UART names more sane#define USB_UART UART1#define USB_ACTIVATE &uart1SendByte#define USB_BAUD (BAUD_RATE)9600// This routine is called once only and allows you to do set up the hardware// Dont use any 'clock' functions here - use 'delay' functions insteadvoid appInitHardware(void){ // Set USB_UART to USB_BAUD uartInit(USB_UART, USB_BAUD); // Tell rprintf to output to UART1 (USB_UART) rprintfInit(&uart1SendByte);}// This routine is called once to allow you to set up any other variables in your program// You can use 'clock' function here.// The loopStart parameter has the current clock value in ěSTICK_COUNT appInitSoftware(TICK_COUNT loopStart){ rprintf("\nAxon 2 initiated!\n"); return 0; // dont pause after}// This routine is called repeatedly - its your main loopTICK_COUNT appControl(LOOP_COUNT loopCount, TICK_COUNT loopStart){ rprintf("Hello world\n"); return 1000000; // wait for 1 second before calling me again. 1000000us = 1 second} | http://www.societyofrobots.com/robotforum/index.php?topic=12151.msg93059 | CC-MAIN-2015-06 | en | refinedweb |
Builder to create a 'BlockStateProperty' loot condition.
This condition compares the block state obtained from the LootContext and attempts to match it to the given MCBlock. If this comparison succeeds, then the state is further compared according to the rules outlined in the StatePropertiesPredicate.
This condition thus passes only if the block matches the given one and, optionally, if all the state properties match according to the predicate given to this loot condition.
A 'BlockStateProperty' condition requires a block in order to be correctly built. Properties may or may not be specified.
It might be required for you to import the package if you encounter any issues (like casting an Array), so better be safe than sorry and add the import at the very top of the file.
ZenScript
import crafttweaker.api.loot.conditions.vanilla.BlockStateProperty;
BlockStateProperty implements the following interfaces. That means all methods defined in these interfaces are also available in BlockStateProperty
Sets the block that should be matched by the loot condition.
This parameter is required.
Returns: This builder for chaining.
Return Type: BlockStateProperty
ZenScript
BlockStateProperty.withBlock(block as MCBlock) as BlockStateProperty
Creates and sets the StatePropertiesPredicate that will be matched against the state's properties.
Any changes that have already been made to the predicate will be overwritten, effectively replacing the previous predicate, if any.
This parameter is optional.
Returns: This builder for chaining.
Return Type: BlockStateProperty
ZenScript
BlockStateProperty.withStatePropertiesPredicate(builder as Consumer<StatePropertiesPredicate>) as BlockStateProperty | https://docs.blamejared.com/1.16/en/vanilla/api/loot/conditions/vanilla/BlockStateProperty/ | CC-MAIN-2021-31 | en | refinedweb |
The Futhark Record System
Most programming languages, except perhaps for the most bare-bones, support some mechanism for creating records, although rarely using that term. For the purposes of this post, a record is an object containing labeled fields, each of which contains a value. In C, a struct is a record:
struct point { int x; int y; };
In object-oriented languages, records are further embellished with methods and (sometimes) access control, giving rise to classes. For this blog post, I will stick to the simpler incarnation of records as simply labeled collections of values. It turns out that even this simple idea contains interesting design questions. In the following, I will discuss the various ways records crop up in various programming languages, and describe the design and notation we have chosen for records in Futhark. I should note that there is (intentionally) little novelty in our design - we have mostly picked features that already seemed to work well elsewhere. Futhark is a simple language designed for (relatively) small programs that have to run really fast, so we don’t need language features that support very elaborate data structures. Constructing the compiler is plenty challenging already.
Basics of Records
At its core, a record is a mapping from keys to values. What sets records apart from associative arrays or other key-value data structures is that the keys for a record (the labels of the fields) are determined at compile time. Consider the
point struct defined above - there is no way to dynamically add or remove a field at run-time. In some languages (e.g. Python) the distinction is less clear-cut, but we still tend to organise our values as some where the keys are dynamic, and others where they are static. Records are a useful organising principle, even when they are all just hash tables underneath and there is no type system enforcement. However, as Futhark is a statically typed language, we will need to make decisions about how record types work.
One of the key differences between how languages handle records is whether they use nominal or structural typing. In C, for example, we could define two structs with identical fields:
struct point_a { int x; int y; };
struct point_b { int x; int y; };
To the C compiler, these two types are distinct. If a function expects an argument of type
point_a and we call it with a value of type
point_b, we will get a type error. Thus, C is nominally typed: types with different names are completely distinct, except for type aliases defined via
typedef.
In a structural record system, the types
point_a and
point_b would be identical, because they have the same structure. This is sometimes called static duck typing. Go is an example of a language that uses structural types pervasively, but there are also languages that use both nominal and structural types for different parts of the type system. One such language is Standard ML, where records are structurally typed, and algebraic types are nominally typed.
Records in Futhark
The Futhark record system is primarily inspired by the records in Standard ML, and is therefore structurally typed. One particularly nice capability of a structural record system is anonymous record types, which we in Futhark write as a sequence of comma-separated field descriptions enclosed in curly braces. For example,
{x:i32, y:bool} describes a record with two fields:
x, containing a 32-bit integer, and
y containing a boolean. The order in which fields are listed does not matter, but duplicate labels are not allowed.
Anonymous record types may seem strange at first, but quickly become natural. If we can write
(int,bool) to denote an unnamed pair, why not also
{x:i32, y:bool} for an unnamed record? A record is nothing more than a tuple with labels, after all. This also means that there is no need for the language to have a facility for defining named records. The usual type abbreviation mechanism works the same whether the right-hand side is a primitive type, a tuple, or a record:
type point = {x:f64, y:f64}
(In Futhark, the built-in type
f64 is a double-precision floating point number.)
Records are constructed via record expressions:
let some_point: point = {x=1.0, y=2.0}
A record expression is a comma-separated sequence of field assignments, enclosed in curly braces. For example,
{x=1.0, y=2.0} produces a record of type
{x:f64, y:f64}, which is equivalent to the type
point defined above. The order in which fields are listed makes no difference (except in the case of duplicates), and so
{y=2.0, x=1.0} has the same type and produces the same value as
{x=1.0, y=2.0}.
This flexible use of anonymous record types and record expressions would be more difficult in a nominal record system. Consider the following two definitions in a hypothetical nominal record system:
type r1 = {x: bool}
type r2 = {x: bool}
When we encounter a record expression
{x=true}, is the result of type
r1 or
r2? Most languages solve this by requiring type annotations. For example, in C, you have to declare the type of a variable when you introduce it, and hence name clashes between struct members are not an issue. In OCaml and Haskell, record labels are scoped: only one
x field label can exist at a time. In the case above, the
x from
r1 would be shadowed by the definition of
r2, and hence
{x=true} would have type
r2. A common consequence is that programmers assign the labels of a type a unique prefix:
type r1 = {r1_x: bool}
type r2 = {r2_x: bool}
The need for such prefixes is one of the most common criticisms levied at the Haskell record system (although work is ongoing on fixing it). Altogether, for Futhark, we found the Standard ML approach simpler and more elegant.
Now that we can both describe record types and create record values, the next question is how to access the fields of a record, which we call field projection. In languages descended from C, this is done with dot-notation:
p.x extracts field
x from the record (well, struct)
p. Standard ML (but not OCaml) chooses a different notation, which we have also adopted for Futhark:
#x p. This notation feels more functional to me. You can pass it, in curried form, as the argument to
map, to project an array of records to an array of field values:
map #x ps. This requires an explicit lambda if done with dot-notation. However, this quality is a matter of subjective aesthetic preference (and Elm can do this with dot-notation anyway). A more important reason is ambiguity. Since module- and variable names reside in different namespaces, we can have both a module
p and a variable
p in scope simultaneously. Is
p.x then a module member access, or a field projection?
Other languages solve this ambiguity in a wealth of different ways. C sidesteps the issue by not having modules at all. C++’s namespaces use a different symbol (
::). Java implements modules as static class members, which means there is only one namespace, and either the “record” or the “module” will be in scope. OCaml makes module names lexically distinct by mandating that they begin with an uppercase letter, while variable names must begin with a lowercase letter. While this latter solution is elegant, I do not wish to impose such constraints on Futhark (for reasons I will not go into here). Hence, we are going with the SML notation:
#x p retrieves field
x from
p.
Field projection is not the only way to access the fields of a record. Just as we can use tuple patterns to take tuples apart, so do we have record patterns for accessing the fields of a record:
let {x=first, y=second} = p
This binds the variables
first and
second to the
x and
y fields of
p. Instead of just names,
first and
second could also be patterns themselves, permitting further deconstruction when the fields of a record are themselves records or tuples. For now, all fields of the record must be mentioned in the pattern. As a common-case shortcut, a field name can be listed by itself, to bind a variable by the same name:
let {x,y} = p -- same as let {x=x,y=y} = p
Record patterns can of course also appear as function parameters, although type annotations are necessary due to limitations in the type inference capabilities of the Futhark compiler:
let add_points {x1:f64, y1:f64} {x2:f64, y2:f64} = {x = x1 + x2, y = y1 + y2}
Record Updates
When working with records, it is frequently useful to change just one field of a record, while leaving the others intact. Using the constructs seen so far, this can be done by taking apart the record in a record pattern (or using projection), and constructing a new one:
let incr_x {x:f64, y:f64} = {x = x+1.0, y = y}
This works fine for small records, but quickly becomes unwieldy once the number of fields increases. OCaml supports a
with construct for this purpose:
{p with x = p.x+1.0} (using OCaml’s dot notation for field access). This works fine, and would also function in Futhark, but we opted for a more general construct instead.
So far, record expressions have consisted of comma-separated field assignments. We extend this notation, so that an arbitrary expression can occur in place of a field assignment:
{p, x = #x p}
An expression used like this (here,
p) must itself evaluate to a record. The fields of that record are added to the record constructed by the record expression. For example, we can rewrite
incr_x:
let incr_x (p: {x:f64,y:f64}) = {p, x = #x p + 1.0}
Record expressions are evaluated left-to-right, such that if duplicate fields occur, the rightmost one takes precedence. That means we could introduce a bug by erroneously writing the above expression as:
{x = #x p + 1, p}
Since
p already has a field
x, the result of the field assignment will not be included in the resultant record. This error is easy to make, but fortunately also easy to detect and warn about in a compiler.
These extended record expressions are not just for record updates, but perform general record concatenation. For any two records
r1 and
r2, the record expression
{r1,r2} produces a record whose fields are the union of the fields in
r1 and
r2 (the latter taking precedence).
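As a quick illustration (this example is mine, not from the original post, but it only uses the notation introduced above):

let r1 = {a = 1, b = 2}
let r2 = {b = 3, c = 4}
let r3 = {r1, r2}       -- {a = 1, b = 3, c = 4}: r2's b wins
let r4 = {r1, b = 10}   -- an ordinary record update of r1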
We do not yet know which programming techniques are enabled by this capability, but we are looking forward to finding out. It seems likely that we will eventually add facilities for partial record patterns (only extracting a subset of fields), as well as some facility for removing fields from records. We may also adopt some form of row polymorphism once the time comes to add full parametric polymorphism to Futhark. But that will have to wait for another blog post. | https://futhark-lang.org/blog/2017-03-06-futhark-record-system.html | CC-MAIN-2021-31 | en | refinedweb |
Step 3: Integrate the Alexa Client Library (VSK Fire TV)
VSK Cloudside Integration
To integrate Alexa into your app, you will need to integrate the Alexa Client Library. The Alexa Client Library allows your application to communicate with Alexa and handle Video Skill API directives intended for your application.
The sample app already has the Alexa Client Library version 1.4.6 integrated. (The Project pane shows an "AlexaClientLib" module.) If the Alexa Client Library has iterated to later versions, you can Update the Alexa Client Library in your Fire TV App. You can also browse the MainActivity.java and VSKFireTVMessageHandler.java files in the sample app to see how the Alexa Client Library is integrated and initialized. Otherwise, proceed to the next step, Step 4: Integrate Amazon Device Messaging (ADM).
- Download the Alexa Client Library
- About the Alexa Client Library
- Update the Alexa Client Library in your Fire TV App
- Add the Alexa Client Library to your Fire TV App
- Configure Proguard
- Initialize the Alexa Client Library from Your App's onCreate
- Next Steps
Download the Alexa Client Library
You can download the Alexa Client Library here:
Note that by downloading the Alexa Client Library, you agree to Amazon's Program Materials License Agreement.
For more details on versions, see the Alexa Client Library Release Notes.
About the Alexa Client Library
The Alexa Client Library allows you to do the following:
- Authenticates with Amazon using Login with Amazon (LWA) for Android.
- Automatically pairs your skill with an Echo device by inferring a relationship from an existing Echo to Fire TV (rather than requiring a customer to add the skill through the Alexa App).
- Manages application lifecycle events and sends those events back to Alexa (which will help Alexa make more intelligent decisions when users interact with voice-enabled Fire TV apps).
- Sends Alexa responses to directives received in your application.
For a Javadoc detailing the fields, methods, and implementation requirements for the Alexa Client Library, see Class AlexaClientManager.
Update the Alexa Client Library in your Fire TV App
To update an existing version of the Alexa Client Library in your Android Studio project:
- Download the Alexa Client Library. After downloading the file, unzip it. The zip folder contains a file called
AlexaClientLib.aar.
- In Android Studio, right-click the AlexaClientLib module and select Reveal in Finder (Mac) or Show in Explorer (Windows).
- Open the AlexaClientLib folder.
- Drag in the new AlexaClientLib.aar file, replacing the existing one. When asked if you want to overwrite the file, choose "Replace."
Add the Alexa Client Library to your Fire TV App
To add the Alexa Client Library (
AlexaClientLib.aar) into your Android Studio project, do the following:
- Download the Alexa Client Library from the link above. After downloading the file, unzip it. The zip contains a file called AlexaClientLib.aar and some documentation.
- In your Android Studio project, go to File > New > New Module.
- Select Import .JAR/.AAR Package and click Next.
- In the File name field, select the AlexaClientLib.aar file and click Open, and then click Finish.
- Go to File > Project Structure.
- Under Modules in the left menu, select app.
- Go to Dependencies tab.
- If you don't already see AlexaClientLib in the list of dependencies, click the + button in the bottom and select 3. Module dependency.
- Select the AlexaClientLib from the list.
Configure Proguard
If you're using ProGuard in your Android project, make the following update:
- Locate your proguard rules file.
Add the following configuration:
-libraryjars ../AlexaClientLib

# Keep the LWA and Alexa Client Library classes
-dontwarn com.amazon.identity.auth.device.**
-dontwarn com.amazon.alexa.vsk.clientlib.**
-keep class com.amazon.identity.auth.device.** { *; }
-keep class com.amazon.alexa.vsk.clientlib.** { *; }
Initialize the Alexa Client Library from Your App's onCreate
The Alexa Client Library must be initialized for the client library to work correctly. Declare
initializeAlexaClientLibrary() in your Application class. Reference the code sample below. Make sure to call
initializeAlexaClientLibrary() from
onCreate().
public class YourApplication extends Application {

    private static final String TAG = "YourApplication";

    @Override
    protected void onCreate() {
        super.onCreate();

        // Initialize the Alexa Client Library first
        initializeAlexaClientLibrary();

        // Initialize ADM
        initializeAdm();

        // (Optional) Enable the VSK client library so that VSK starts auto-pairing in the background
        // immediately, which will enable your user to use the Voice service ASAP.
        // You can delay this step until an active user has signed in to your application.
        AlexaClientManager.getSharedInstance().setAlexaEnabled(true);
    }

    private void initializeAlexaClientLibrary() {
        // Retrieve the shared instance of the AlexaClientManager
        AlexaClientManager clientManager = AlexaClientManager.getSharedInstance();

        // Gather your Skill ID
        final String alexaSkillId = "YOUR_SKILL_ID";

        // Create a list of supported capabilities in your skill
        List<String> capabilities = new ArrayList<>();
        capabilities.add(AlexaClientManager.CAPABILITY_CHANNEL_CONTROLLER);
        capabilities.add(AlexaClientManager.CAPABILITY_PLAY_BACK_CONTROLLER);
        capabilities.add(AlexaClientManager.CAPABILITY_REMOTE_VIDEO_PLAYER);
        capabilities.add(AlexaClientManager.CAPABILITY_SEEK_CONTROLLER);

        // Initialize the client library by calling initialize()
        clientManager.initialize(getApplicationContext(),
                alexaSkillId,
                AlexaClientManager.SKILL_STAGE_DEVELOPMENT,
                capabilities);
    }

    private void initializeAdm() {
        try {
            final ADM adm = new ADM(this);
            if (adm.isSupported()) {
                if (adm.getRegistrationId() == null) {
                    // ADM is not ready now. You have to start ADM registration by calling
                    // startRegister(). ADM will call onRegister() on your ADM handler
                    // service when ADM registration has completed, with the registered ADM id.
                    adm.startRegister();
                } else {
                    // [IMPORTANT]
                    // ADM down-channel is already available. This is a common case when your
                    // application restarted. The ADM manager on your Fire TV caches the previous
                    // ADM registration info and provides it immediately when your application
                    // is identified as restarted.
                    // You have to provide the retrieved ADM registration Id to the Alexa Client Library.
                    final String admRegistrationId = adm.getRegistrationId();
                    Log.i(TAG, "ADM registration Id:" + admRegistrationId);

                    // Provide the acquired ADM registration ID
                    AlexaClientManager.getSharedInstance().setDownChannelReady(true, admRegistrationId);
                }
            }
        } catch (Exception ex) {
            Log.e(TAG, "ADM initialization is failed with exception", ex);
        }
    }
}
General Notes
The skill stage is set to SKILL_STAGE_DEVELOPMENT. Later in the development process, when you're ready to publish your app (in Step 12: Go Live!), you will change AlexaClientManager.SKILL_STAGE_DEVELOPMENT to AlexaClientManager.SKILL_STAGE_LIVE.
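For example (based on the initialize() call in the sample above), the only change needed at that point is the stage constant:

// Publishing build: pass the LIVE stage instead of DEVELOPMENT.
clientManager.initialize(getApplicationContext(),
        alexaSkillId,
        AlexaClientManager.SKILL_STAGE_LIVE,
        capabilities);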
If an event (for example, a visibility change, such as backgrounding/foregrounding the app) occurs before initialize is called, the app will not respond to voice until the app is force-stopped and restarted.
Declaring Capabilities
The sample code above declares support for these capabilities:
capabilities.add(AlexaClientManager.CAPABILITY_CHANNEL_CONTROLLER);
capabilities.add(AlexaClientManager.CAPABILITY_REMOTE_VIDEO_PLAYER);
capabilities.add(AlexaClientManager.CAPABILITY_PLAY_BACK_CONTROLLER);
capabilities.add(AlexaClientManager.CAPABILITY_SEEK_CONTROLLER);
Alexa will send directives related to these capabilities listed here. For example, if you indicate
AlexaClientManager.CAPABILITY_CHANNEL_CONTROLLER, Alexa will send
ChangeChannel directives when users say "Change the channel to PBS." If you don't have the capability declared, Alexa won't send directives related to the capability. For more detail, see the Discover Directives.
If your app doesn't support a particular capability, omit that capability from the above. Additionally, you can add
AlexaClientManager.CAPABILITY_RECORD_CONTROLLER and
AlexaClientManager.CAPABILITY_KEYPAD_CONTROLLER if your app supports the functionality of these APIs. The capabilities are described in the following table.
If you would like to declare a capability that does not have a predefined constant in the client library, you can add it to the capabilities list (e.g., List<String> capabilities) by using the exact string name of that capability. For example, you would add the "Launcher" capability for GUI shortcuts as follows:
capabilities.add("Alexa.Launcher");
When your skill receives the Discover directive, your Lambda must send a Discover.Response back to indicate which capabilities your skill supports. See the Example response in the Discover directive documentation.
Customize the Skill ID
In the previous code, replace
YOUR_SKILL_ID with your own Video Skill ID, which you copied earlier in Step 1: Create a Video Skill and Set Up Devices.
final String alexaSkillId = "YOUR_SKILL_ID";
Enable or Disable the Alexa Client Library When User Sign In or Out
The function
setAlexaEnabled() is used to enable or disable an app instance as a targetable endpoint by the user. It's recommended that you enable Alexa (setting
setAlexaEnabled(true)) only for subscribed customers. (Hence the inline comment in the code that says you can delay this step until the active user signs in to your application.) Do the following:
- Call setAlexaEnabled(true) when the user logs into your app
- Call setAlexaEnabled(false) when the user logs out
Set setAlexaEnabled to false only in response to direct user action, not in response to automatic events, such as HDMI events. Do not call setAlexaEnabled(false) when your app goes to the background, or when it terminates.
The following code shows an example:
public class YourSigninActivity extends Activity {

    private void onUserSignedIn() {
        // Enable the AlexaClient when user sign-in
        AlexaClientManager.getSharedInstance().setAlexaEnabled(true);
    }

    private void onUserSignedOut() {
        // Disable the Alexa Client when user signed-out
        AlexaClientManager.getSharedInstance().setAlexaEnabled(false);
    }
}
The only exception to this rule is if your app does not have a user login — in other words, your content is available to users regardless of whether they're logged in or have a particular subscription.
Construct a Down-Channel Service
To receive directives from the Alexa Service, you must construct a down-channel service in your app as per the sample reference code above. The down-channel is a stream primarily used to deliver cloud-initiated directives to your Fire TV app. Note the following about the down-channel service.
When your down-channel service is ready, call the
setDownChannelReady() API with parameters as follows:
- The first parameter (isDownChannelReady) should be true to indicate that your down-channel is ready.
- The second parameter (
appInstanceId) should be a string that uniquely identifies your application instance. If you are using Amazon Device Messaging (ADM), use the app's ADM Registration ID for this parameter. (If you prefer not to use ADM, you can use another technology.)
When your down-channel service status changes, call the
setDownChannelReady() API each time with the proper status flag value.
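For instance (a sketch only; where you detect the loss of the down-channel depends on your own ADM handling code):

// Report that the down-channel is no longer available so Alexa
// stops treating this app instance as reachable.
AlexaClientManager.getSharedInstance().setDownChannelReady(false, admRegistrationId);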
Initialize the ADM service (initializeAdm();) at application creation time, as shown in the onCreate() sample above; the initializeAdm() method in the preceding code is a sample ADM reference implementation.
Provide the application instance ID when your down channel is ready.
public class YourADMHandler extends ADMMessageHandlerBase {
    @Override
    protected void onRegistered(final String registrationId) {
        // [IMPORTANT]
        // ADM down channel is ready. Set the downchannel as ready with acquired ADM registration Id.
        AlexaClientManager.getSharedInstance().setDownChannelReady(true, registrationId);
    }
}
Do not set setAlexaEnabled to false during application initialization unless you are certain the current user does not have access to content. Toggling this state off and on will result in Alexa unregistering and re-registering the endpoint, which will have unintended consequences for your skill.
Next Steps
Continue on to the next step: Step 4: Integrate Amazon Device Messaging (ADM).
(If you run into any issues that prevent you from continuing, see Troubleshooting.)
Last updated: Feb 16, 2021 | https://developer.amazon.com/zh/docs/video-skills-fire-tv-apps/integrate-alexa-client-library.html | CC-MAIN-2021-31 | en | refinedweb |
Feed files¶
An Identity is a way to recognise an Implementation (e.g. a cryptographic digest). A Retrieval method is a way to get an Implementation (e.g. by downloading from a web site). A Command says how to run an Implementation as a program. A Dependency indicates that one component depends on another (e.g. Gimp requires the GTK library). A Binding says how to let the program locate the Implementations when run. A Constraint limits the choice of a dependency (e.g. Gimp requires a version of GTK >= 2.6).
Terminology
Originally the word 'interface' was used to mean both 'interface' and 'feed', so don't be confused if you see it used this way.
Introduction¶
Feed files are introduced in the Packager's Documentation. They have the following syntax (? follows optional items, * means zero-or-more, order of elements is not important, and extension elements can appear anywhere as long as they use a different namespace):
<?xml version='1.0'?>
<interface xmlns='http://zero-install.sourceforge.net/2004/injector/interface'
           min-injector-version='...' ?
           uri='...' ? >
  <name>...</name>
  <summary>...</summary>
  <description>...</description> ?
  <homepage>...</homepage> ?
  <category>...</category> *
  <needs-terminal/> ?
  <icon type='...' href='...'/> *
  <feed src='...' langs='...' arch='...'/> *
  <feed-for interface='...'/> *
  <replaced-by interface='...'/> ?
  [group] *
  [implementation] *
  [entry-point] *
</interface>
min-injector-version
- This attribute gives the oldest version of 0install that can read this file. Older versions will tell the user to upgrade if they are asked to read the file. Versions prior to 0.20 do not perform this check, however. If the attribute is not present, the file can be read by all versions.
uri
- This attribute is only needed for remote feeds (fetched via HTTP). The value must exactly match the expected URL, to prevent an attacker replacing one correctly-signed feed with another (e.g., returning a feed for the shred program when the user asked for the backup program).
<name>
- a short name to identify the interface (e.g. "Foo")
<summary>
- a short one-line description; the first word should not be upper-case unless it is a proper noun (e.g. "cures all ills"). Supports localization.
<description>
- a full description, which can be several paragraphs long (optional since 0.32, but recommended). Supports localization.
<homepage>
- the URL of a web-page describing this interface in more detail
<category>
- a classification for the interface. If no type is given, then the category is one of the 'Main' categories defined by the freedesktop.org menu specification. Otherwise, it is a URI giving the namespace for the category.
<needs-terminal>
- if present, this element indicates that the program requires a terminal in order to run. Graphical launchers should therefore run this program in a suitable terminal emulator.
<icon>
- an icon to use for the program; this is used by programs such as AddApp and desktop integration. You should provide an icon of the type image/png (.png) for display in browsers and launchers on Linux. For Windows apps you should additionally provide an icon of the type image/vnd.microsoft.icon (.ico).
<feed>
- the linked feed contains more implementations of this feed's interface. The langs and arch attributes, if present, indicate that all implementations will fall within these limits (e.g. arch='*-src' means that there is no point fetching this feed unless you are looking for source code). See the <implementation> element for a description of the values of these attributes.
<feed-for>
- the implementations in this feed are implementations of the given interface. This is used when adding an optional extra feed to an interface with 0install add-feed (e.g. a local feed for a development version).
<replaced-by>
- this feed's interface (the one in the root element's uri attribute) has been replaced by the given interface. Any references to the old URI should be updated to use the new one.
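For orientation, a minimal feed using these elements might look like the following sketch (the URLs, size and digest values are placeholders, not real values):

<?xml version='1.0'?>
<interface xmlns='http://zero-install.sourceforge.net/2004/injector/interface'
           uri='http://example.com/foo.xml'>
  <name>Foo</name>
  <summary>cures all ills</summary>
  <description>A longer, multi-paragraph description of Foo.</description>
  <homepage>http://example.com/foo</homepage>
  <implementation id='sha1new=...' version='1.0' released='2020-01-01' stability='stable'>
    <manifest-digest sha256='...'/>
    <archive href='http://example.com/foo-1.0.tar.gz' size='123456'/>
  </implementation>
</interface>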
Groups¶
A group has this syntax:
<group version='...' ?
       released='...' ?
       main='...' ?
       self-test='...' ?
       doc-dir='...' ?
       license='...' ?
       stability='...' ?
       langs='...' ?
       arch='...' ? >
  [requires] *
  [group] *
  [command] *
  [binding] *
  [implementation] *
  [package-implementation] *
</group>
All attributes of the group are inherited by any child groups and implementations as defaults, but can be overridden there. All dependencies (
requires), bindings and commands are inherited (sub-groups may add more dependencies and bindings to the list, but cannot remove anything).
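As an illustration of this inheritance (a sketch, with placeholder URLs, digests and sizes), two implementations can share settings declared once on their group:

<group license='OSI Approved :: GNU General Public License (GPL)' main='bin/foo'>
  <requires interface='http://example.com/lib.xml'/>
  <implementation id='sha1new=...' version='1.0' arch='Linux-x86_64'>
    <archive href='http://example.com/foo-1.0-linux.tar.gz' size='123456'/>
  </implementation>
  <implementation id='sha1new=...' version='1.0' arch='Windows-x86_64'>
    <archive href='http://example.com/foo-1.0-windows.zip' size='123456'/>
  </implementation>
</group>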
Implementations¶
An implementation has this syntax (an unspecified argument is inherited from the closest ancestor
<group> which defines it):
<implementation id='...'
                local-path='...' ?
                [all <group> attributes] >
  <manifest-digest [digest] * /> *
  [command] *
  [retrieval-method] *
  [binding] *
  [requires] *
</implementation>
id
- A unique identifier for this implementation. For example, when the user marks a particular version as buggy this identifier is used to keep track of it, and saving and restoring selections uses it. However, see the important historical note below.
local-path
- If the feed file is a local file (the interface uri starts with /) then the local-path attribute may contain the pathname of a local directory (either an absolute path or a path relative to the directory containing the feed file). See the historical note below.
version
- The version number. See the version numbers section below for more details.
main (deprecated)
- The relative path of an executable inside the implementation that should be executed by default when the interface is run. If an implementation has no main setting, then it cannot be executed without specifying one manually (with 0install run --main=MAIN). This typically means that the interface is for a library. Note: main is being replaced by the <command> element.
self-test (deprecated)
- The relative path of an executable inside the implementation that can be executed to test the program. The program must be non-interactive (e.g. it can't open any windows or prompt for input). It should return with an exit status of zero if the tests pass. Any other status indicates failure. Note: self-test is being replaced by the <command> element.
doc-dir
- The relative path of a directory inside the implementation that contains the package's documentation. This is the directory that would end up inside /usr/share/doc on a traditional Linux system.
released
- The date this implementation was made available, in the format
YYYY-MM-DD. For development versions checked out from version control this attribute should not be present.
stability
- The default stability rating for this implementation. If not present, testing is used. See the stability section below for more details.
langs
- The natural language(s) which this package supports, as a space-separated list of language codes (in the same format as used by the $LANG environment variable). For example, the value en_GB fr would be used for a package supporting British English and French. Supported since 0.48. Note that versions before 0.54 require the region separator to be _ (underscore), while later versions also allow the use of - for consistency with the xml:lang format.
arch
- For platform-specific binaries, the platform for which this implementation was compiled, in the form os-cpu. 0install knows that certain platforms are backwards-compatible with others, so binaries with arch="Linux-i486" will still be available on Linux-i686 machines, for example. Either the os or cpu part may be *, which will make it available on any OS or CPU. If missing, the default is *-*. See also: Valid architecture names.
license
- License terms. This is typically a Trove category. See the PyPI list for some examples (the leading License :: is not included).
The manifest-digest element is used to give digests of the .manifest file using various hashing algorithms (but see the historical note below). Having multiple algorithms allows a smooth upgrade to newer digest algorithms without breaking old clients. Each non-namespaced attribute gives a digest, with the attribute name being the algorithm.
Example
<manifest-digest sha1new='...' sha256='...'/>
For non-local implementations (those without a local-path attribute), the <implementation> element contains a set of retrieval methods, each of which gives a different way of getting the implementation (i.e. of getting a directory structure whose digest matches the ones given). Currently, 0install supports the retrieval methods described in the Retrieval methods section below.
Unrecognised elements inside an implementation are ignored.
Historical note about id¶
0install >= 0.45 generally treats the ID as a simple identifier, and gets the local path (if any) from the
local-path attribute and the digests from the
<manifest-digest>.
0install < 0.45 ignores the
local-path attribute and the
<manifest-digest> element. If the ID starts with
. or
/ then the ID is also the local path; otherwise, it is the single manifest digest.
For backwards compatibility, 0install >= 0.45 will treat an ID starting with
. or
/ as a local path if no
local-path attribute is present, and it will treat it as an additional digest if it contains an
= character.
Therefore, if you want to generate feeds compatible with past and future versions:
- If you have a digest, set the ID to sha1new=... and put the sha256 digest in the <manifest-digest>.
- If you have a local implementation then set both id and local-path to the pathname.
Commands¶
The
main attribute above provides a simple way to say how to run this implementation. The
<command> element (supported since 0.51, released Dec 2010) provides a more flexible alternative.
<command name='...' path='...' ? >
  [binding] *
  [requires] *
  [runner] ?
  <arg> ... </arg> *
  <for-each item-from='...' separator='...' ? > ... </for-each> *
</command>
name
- By default, 0install executes the run command, but the --command option can be used to specify a different one. 0test runs the test command (replacing the old self-test attribute) and 0compile runs the compile command (replacing the compile:command attribute).
path
- The relative path of the executable within the implementation (optional if <runner> is used).
Additional arguments can be passed using the
<arg> element. Within an argument,
${name} is expanded to the value of the corresponding environment variable. These arguments are passed to the program before any arguments specified by the user.
If an environment variable should be expanded to multiple arguments, use
<for-each>. The variable in the
item-from attribute is split using the given separator (which defaults to the OS path separator,
: on POSIX and
; on Windows) and the arguments inside the element are added for each item. The current item is available as
${item}. If the variable given in
item-from is not set or is empty, no arguments are added. See below for an example. Versions of 0install before 1.15 ignore
<for-each> elements and their contents.
Command-specific dependencies can be specified for a command by nesting
<requires> elements. For example, an interpreter might only depend on libreadline when used interactively, but not when used as a library, or the
test command might depend on a test framework.
Command-specific bindings (0install >= 1.3) create a binding from the implementation to itself. For example, the
test command may want to make the
run command available in
$PATH using
<executable-in-path>.
The
<runner> element introduces a special kind of dependency: the program that is used to run this one. For example, a Python program might specify Python as its runner.
<runner> is a subclass of
<requires> and accepts the same attributes and child elements. In addition, you can specify arguments to pass to the runner by nesting them inside the
<runner> element. These arguments are passed before the path of the executable given by the
path attribute.
Example
<command name='run' path="causeway.e-swt"> <runner interface=''> <arg>-cpa</arg> <arg>$SWT_JAR</arg> <for-each <arg>${item}</arg> </for-each> </runner> </command>
In this case, 0install will run the equivalent of
/path/to/e-interpreter -cpa /path/to/swt.jar $EXTRA_E_OPTIONS /path/to/causeway.e-swt.
Package implementations¶
This element names a distribution-provided package which, if present, is a valid implementation of this interface. The syntax is:
<package-implementation package='...'
                        distributions='...' ?
                        main='...' ?
                        version='...' ? >
  [command] *
  [requires] *
</package-implementation>
Support for distribution packages was added in version 0.28 of 0install. Earlier versions ignore this element.
If the named package is available then it will be considered as a possible implementation of the interface. If
main is given then it must be an absolute path.
If the
distributions attribute is present then it is a space-separated list of distribution names where this element applies. 0install >= 0.45 ranks the
<package-implementation> elements according to how well they match the host distribution and then only uses the best match (or matches, if several get the same score). See Distribution integration for a list of supported distributions.
Earlier versions of 0install ignore the distributions attribute and process all of the elements.
Package implementations still inherit attributes and dependencies from their parent group. The
doc-dir and
license attributes may be given, but
version and
released are read from the native packaging system.
If version is given then only implementations matching this pattern are used (0install >= 2.14). This allows multiple <package-implementation> elements for a single distribution package, which is useful if different versions have different requirements. See Constraints for the syntax.
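For example (a sketch only; the package name, distribution names and path are illustrative and depend on the distributions you target):

<package-implementation package='python3' distributions='Debian RPM' main='/usr/bin/python3'/>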
Retrieval methods¶
A retrieval method is a way of getting an implementation.
The most common retrieval method is the
<archive> element:
<archive href='...' size='...' extract='...' ? dest='...' ? type='...' ? />
If dest is given (0install >= 2.1), then the archive is unpacked to the specified subdirectory. It is an error to specify a target outside of the implementation directory (e.g. ../foo or attempting to follow a symlink that points out of the implementation).
Note that the
extract attribute cannot contain
/ or
\ characters, so it can only be used to extract a top-level directory. It is intended for archives that contain their own name as the single top-level entry.
The type of the archive is given as a MIME type in the
type attribute (since 0install version 0.21). If missing, the type is guessed from the extension on the
href attribute (all versions). Known types and extensions (case insensitive) are:
application/zip (.zip)
application/x-tar (.tar)
application/x-compressed-tar (.tar.gz or .tgz)
application/x-bzip-compressed-tar (.tar.bz2 or .tbz2)
application/x-xz-compressed-tar (.tar.xz or .txz) - since version 0.43, since version 2.11 on Windows
application/x-lzma-compressed-tar (.tar.lzma or .tlzma)
application/x-lzip-compressed-tar (.tar.lz or .tlz) - since version 2.18, Windows only
application/x-zstd-compressed-tar (.tar.zst) - since version 2.18, Windows only
application/x-ruby-gem (.gem) - since version 1.0-rc1
application/x-7z-compressed (.7z) - Windows only
application/vnd.rar (.rar) - since version 2.18, Windows only
application/vnd.ms-cab-compressed (.cab)
application/x-msi (.msi) - Windows only
application/x-deb (.deb) - not supported on Windows
application/x-rpm (.rpm) - not supported on Windows
application/x-apple-diskimage (.dmg) - not supported on Windows.
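Putting the attributes together, a typical archive element might look like this (a sketch with placeholder URL and size):

<archive href='http://example.com/foo-1.0.tar.gz' size='123456'
         extract='foo-1.0' type='application/x-compressed-tar'/>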
You can also fetch individual files (0install >= 2.1). This is useful for e.g. jar files, which are typically not unpacked:
<file href='...' size='...' dest='...' executable='true|false' ? />
The file is downloaded from
href, must be of the given
size, and is placed within the implementation directory as
dest.
If
executable is set to
true (0install >= 2.14.2) the file is marked as executable after download.
Recipes¶
An implementation can also be created by following a
<recipe>:
<recipe> ( <archive ...> | <file ...> | <rename ...> | <remove ...> | <copy-from ...> ) + </recipe>
In this case, each child element of the recipe represents a step. To get an implementation by following a recipe, a new empty directory is created and then all of the steps are performed in sequence. The resulting directory must have the digest given in the implementation's
<manifest-digest>. A recipe containing only a single archive is equivalent to just specifying the archive on its own. If a recipe contains an unrecognised element then the whole recipe must be ignored.
<archive ...>
- Causes the named archive to be fetched and unpacked over the top of whatever is currently in the temporary directory. It supports the same attributes as when used outside of a recipe.
<file ...>
- Causes the named file to be fetched and saved over the top of whatever is currently in the temporary directory (0install >= 2.1). It supports the same attributes as when used outside of a recipe.
<rename source='...' dest='...'>
- Renames or moves a file or directory (0install >= 1.10). It is an error if the source or destination are outside the implementation.
<remove path='...'>
- Delete the file or directory from the implementation (0install >= 2.1). It is an error if the path is outside the implementation.
<copy-from id='...' source='...' ? dest='...' ?>
- Copies files or directories from another implementation, e.g., for applying an update to a previous version (0install >= 2.13). The specified id must exactly match the id attribute of another implementation specified elsewhere in the same feed. You can specify the source and destination file or directory to be copied relative to the implementation root. Leave them unset to copy the entire implementation.
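A small recipe combining these steps might look like this (a sketch, not from the spec; URLs and sizes are placeholders):

<recipe>
  <archive href='http://example.com/foo-1.0.tar.gz' size='123456' extract='foo-1.0'/>
  <file href='http://example.com/LICENSE' size='1234' dest='LICENSE'/>
  <rename source='docs' dest='doc'/>
  <remove path='doc/obsolete.txt'/>
</recipe>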
Tip.
Dependencies¶
A
<requires> element means that every implementation within the same group (including nested sub-groups) requires an implementation of the specified interface when run. 0install will choose a suitable implementation, downloading one if required.
<requires interface='...'
          importance='...' ?
          version='...' ?
          os='...' ?
          distribution='...' ?
          source='true|false' ?
          use='...' ? >
  [ constraints ] *
  [ bindings ] *
</requires>
The constraint elements (if any) limit the set of acceptable versions. The bindings specify how 0install should make its choice known (typically, by setting environment variables).
The
use attribute can be used to indicate that this dependency is only needed in some cases. By default, 0install >= 0.43 will skip any
<requires> element with this attribute set. Earlier versions process all
<requires> elements whether this attribute is present or not. 0test >= 0.2 will process dependencies where
use="testing", in addition to the program's normal dependencies. This attribute is deprecated - it's usually better to use a
<command> for this.
The
importance attribute (0install >= 1.1) can be either
essential (the default; a version of this dependency must be selected) or
recommended (no version is also an option, although selecting a version is preferable to not selecting one).
The
version attribute (0install >= 1.13) provides a quick way to specify the permitted versions. See the Constraints section below.
The
distribution attribute (0install >= 1.15) can be used to require the selected implementation to be from the given distribution. For example, a Python library available through MacPorts can only be used with a version of Python which is also from MacPorts. The value of this attribute is a space-separated list of distribution names. In addition to the official list of distribution names, the special value
0install may be used to require an implementation provided by 0install (i.e. one not provided by a
<package-implementation>).
The
os attribute (0install >= 1.12) can be used to indicate that the dependency only applies to the given OS (e.g.
os="Windows" for dependencies only needed on Windows systems).
The
source attribute (0install >= 2.8) can be used to indicate that a source implementation is needed rather than a binary. This may be useful if you want to get e.g. header files from a source package. Note that if you select both source and binary implementations of an interface, 0install does not automatically force them to be the same version.
A
<restricts> element (0install >= 1.10) can be used to apply constraints without creating a dependency:
<restricts interface='...'
           version='...' ?
           os='...' ?
           distribution='...' ? >
  [ constraints ] *
</restricts>
Internally,
<restricts> behaves much like
<requires importance='recommended'>, except that it doesn't try to cause the interface to be selected at all.
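To make the difference concrete, here is a sketch (placeholder URLs and version ranges) of a dependency with a binding next to a bare restriction:

<requires interface='http://example.com/lib.xml' version='1.0..!2.0'>
  <environment name='LIB_HOME' insert='.'/>
</requires>
<restricts interface='http://example.com/runtime.xml' version='3.2..'/>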
Constraints¶
Constraints appear on
<requires>,
<restricts>,
<package-implementation> and
<runner> elements. They restrict the set of versions from which 0install may choose an implementation.
Since 0install 1.13, you can use the
version attribute on the dependency element. The attribute's value is a list of ranges, separated by
|, any of which may match.
Example
<restricts interface='...' version='2.6..!3 | 3.2.2..'/>
This allows Python versions 2.6, 2.7 and 3.3, but not 2.5 or 3.
Each range is in the form START..!END. The range matches versions where START <= VERSION < END. The start or end may be omitted. A single version number may be used instead of a range to match only that version, or !VERSION to match everything except that version.
There is also an older syntax which also works with 0install < 1.13, where a child node is used instead:
<version not-before='...' ? before='...' ? >
not-before
- This is the lowest-numbered version that can be chosen.
before
- This version and all later versions are unsuitable.
Example
<version not-before='2.4' before='2.6'/>
allows any of these versions: 2.4, 2.4.0, and 2.4.8. It will not select 2.3.9 or 2.6.
This older syntax is not supported with <package-implementation>.
Bindings¶
Bindings specify how the chosen implementation is made known to the running program. Bindings can appear in a
<requires> element, in which case they tell a component how to find its dependency, or in an
<implementation> (or group), where they tell a component how to find itself.
Environment bindings¶
<environment name='...' (insert='...' | value='...') mode='prepend|append|replace' ? separator='...' ? default='...' ? /> *
Details of the chosen implementation are passed to the running program in the environment variable named by the name attribute.
Usually, the (badly-named)
insert attribute is used, which adds a path to a file or directory inside the implementation to the environment variable. For example,
<environment name='PATH' insert='bin'/> would perform something similar to the bash shell statement
export PATH=/path/to/impl/bin:$PATH.
Alternatively, you can use the
value attribute to use a literal string. For example,
<environment name='GRAPHICAL_MODE' value='TRUE' mode='replace'/>. This requires 0install >= 0.52.
If mode is prepend (or not set), then the absolute path of the item is prepended to the current value of the variable. The default separator character is the colon character on POSIX systems, and semi-colon on Windows. This can be overridden using separator (0install >= 1.1).
The following environment variables have known defaults and therefore the
default attribute is not needed with them:
Executable bindings¶
These both require 0install >= 1.2.
<executable-in-var name='...' command='...' ? />
<executable-in-path name='...' command='...' ? />
These are used when the program needs to run another program.
command says which of the program's commands to use; the default is
run.
<executable-in-var> stores the path of the selected executable in the named environment variable.
Example
If a program uses
$MAKE to run make, you can provide the required command like this:
<requires interface=""> <executable-in-var </requires>
<executable-in-path> works in a similar way, except that it adds a directory containing the executable to
$PATH.
Example
If the program instead just runs the
make command, you would use:
<requires interface=""> <executable-in-path </requires>
It is preferable to use
<executable-in-var> where possible, to avoid making
$PATH very long.
Implementation note
On POSIX systems, 0install will create a shell script under
~/.cache/0install.net/injector/executables and pass the path of this script.
Generic bindings¶
Custom bindings can be specified using the
<binding> element (0install >= 2.1). 0install will not know how to run a program using custom bindings itself, but it will include them in any selections documents it creates, which can then be executed by your custom code. The syntax is:
<binding path='...' ? command='...' ? ... > ... </binding>
If
command is given, then 0install will select the given
<command> within the implementation (which may cause additional dependencies and bindings to be selected). Otherwise, no command is selected.
Any additional attributes and child elements are not processed, but are just passed through. If your binding needs a path within the selected implementation, it is suggested that the
path attribute be used for this. Other attributes and child elements should be namespaced to avoid collisions.
Example
The EBox application launcher allows each code module to specify its dependencies, which are then available in the module's scope as getters. The ebox-edit application depends on the help library like this:
<requires interface=""> <binding e: </requires>
Versions¶
- 1.2-pre
- 1.2-pre1
- 1.2-rc1
- 1.2
- 1.2-0
- 1.2-post
- 1.2-post1-pre
- 1.2-post1
- 1.2.1-pre
- 1.2.1.4
- 1.2.2
- 1.2.10
- 3
0install:(contains 1.2.0, 1.2.1, 1.2.2, ...)(contains 2.0.0, 2.0.1, 2.2.0, 2.4.0, 2.4.1, ...)
The integers in version numbers must be representable as 64-bit signed integers.
Note
Version numbers containing dash characters were not supported before version 0.24 of 0install and so a
version-modifier attribute was added to allow new-style versions to be added without breaking older versions. This should no longer be used.
Stability¶
The feed file also gives a stability rating for each implementation. The following levels are allowed (must be lowercase in the feed files):
stable
testing
developer
buggy
insecure
Stability ratings. 0install.
When to use 'buggy'¶.
Entry points¶
(only used on the Windows version currently)
Entry points allow you to associate additional information with
<command> names, such as user-friendly names and descriptions. Entry points are used by the Zero Install GUI to help the user choose a command and by the desktop integration system to generate appropriate menu entries for commands. An entry point is not necessary for a command to work but it makes it more discoverable to end-users.
Entry points are top-level elements and, unlike commands, are not associated with any specific implementation or group. One entry point represents all commands in all implementations that carry the same name. An entry point has this syntax:
<entry-point command='...' binary-name='...' ? app-id='...' ? >
  <needs-terminal/> ?
  <suggest-auto-start/> ?
  <suggest-send-to/> ?
  <name>...</name> ?
  <summary>...</summary> *
  <description>...</description> *
  <icon type='...' href='...'/> *
</entry-point>
command
- the name of the command this entry point represents
binary-name
- the canonical name of the binary supplying the command (without file extensions); this is used to suggest suitable alias names.
app-id
- the Application User Model ID; used by Windows to associate shortcuts and pinned taskbar entries with running processes.
<needs-terminal>
- if present, this element indicates that the command represented by this entry point requires a terminal in order to run.
<suggest-auto-start>
- if present, this element indicates that this entry point should be offered as an auto-start candidate to the user.
<suggest-send-to>
- if present, this element indicates that this entry point should be offered as a candidate for the "Send To" context menu to the user.
<name>
- user-friendly name for the command. If not present, the value of the command attribute is used instead. Supports localization.
<summary>
- a short one-line description; the first word should not be upper-case unless it is a proper noun (e.g. "cures all ills"). Supports localization.
<description>
- a full description, which can be several paragraphs long. Supports localization.
<icon>
- an icon to represent the command; this is used when creating menu entries. You should provide an icon of type image/png (.png) for Linux apps and image/vnd.microsoft.icon (.ico) for Windows apps.
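Example
A sketch of a complete entry point; the command name, binary name and all user-visible strings below are invented for illustration:
<entry-point command='run-gui' binary-name='myapp'>
  <name>My App</name>
  <summary>edits example documents</summary>
  <description>A longer, multi-paragraph description of what the run-gui command does.</description>
  <icon type='image/png' href='http://example.com/myapp.png'/>
</entry-point>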
Localization¶
Some elements can be localized using the xml:lang attribute.
Example
<summary xml:lang="en">cures all ills</summary> <summary xml:lang="de">heilt alle Krankheiten</summary>
When choosing a localized element Zero Install will prefer xml:lang values in the following order:
- Exactly matching the user's language (e.g., de-DE)
- Matching the user's language with a neutral culture (e.g., de)
- en-US
Metadata¶
All elements can contain extension elements, provided they are not in the Zero Install namespace used by the elements defined here; 0install does not process these elements itself. Commonly used metadata elements include:
dc:creator
- The primary author of the program.
dc:publisher
- The person who created this implementation. For a binary, this is the person who compiled it.
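Example
Assuming the usual Dublin Core namespace, a feed might carry metadata like this (names and values invented):
<interface xmlns="http://zero-install.sourceforge.net/2004/injector/interface"
           xmlns:dc="http://purl.org/dc/elements/1.1/">
  ...
  <dc:creator>Jane Example</dc:creator>
  <dc:publisher>Example Builds Ltd.</dc:publisher>
</interface>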
Digital signatures¶
When a feed is downloaded from the web, it must contain a digital signature. The signature block must start on a new line, may not contain anything except valid base64 characters, and nothing may follow the signature block. XML signature blocks are supported from version 0.18 of 0install and may be generated easily using the 0publish command.
Local interfaces are plain XML, although having an XML signature block is no problem as it will be ignored as a normal XML comment.
Valid architecture names¶
The arch attribute is a value in the form OS-CPU. The values come from the uname system call, but there is some normalisation (e.g. because Windows doesn't report the same CPU names as Linux). Valid values for OS include:
*
Cygwin (a Unix-compatibility layer for Windows)
Darwin (MacOSX, without the proprietary bits)
FreeBSD
Linux
MacOSX
Windows
Valid values for CPU include:
*
src
i386
i486
i586
i686
ppc
ppc64
x86_64
armv6l
armv7l
aarch64
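For example, arch='Linux-x86_64' would restrict an implementation to 64-bit Linux, and a source-only implementation would typically combine * for the OS with src for the CPU, i.e. arch='*-src' (values shown here only as an illustration of the OS-CPU form above).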
The if-0install-version attribute¶
To make it possible to use newer features in a feed without breaking older versions of 0install, the if-0install-version attribute may be placed on any element to indicate that the element should only be processed by the specified versions of 0install.
Example
<group> <new-element if-0install-version='1.14..'/> <fallback if-0install-version='..!1.14'/> </group>
In this example, 0install 1.14 and later will see <new-element>, while older versions see <fallback>. The syntax is as described in Constraints.
Attention
0install versions before 1.13 ignore this attribute and process all elements.
Well-known extensions¶
The following are well-known extensions to the Zero Install format:
- Capabilities (provides information for desktop integration of applications)
Future plans¶
- The extra meta-data elements need to be better specified.
- As well as before and not-before, we should support after and not-after.
- It should be possible to give a delta (binary patch) against a previous version, to make upgrading quicker.
- It should be possible to scope bindings. For example, when a DTP package requires a clipart package, the clipart package should not be allowed to affect the DTP package's environment.
Outlook Interop Exception
Unable to cast COM object of type 'Microsoft.Office.Interop.Outlook.ApplicationClass'
Trying to automate Outlook as
Microsoft.Office.Interop.Outlook.Application myApp = new Microsoft.Office.Interop.Outlook.ApplicationClass();
Microsoft.Office.Interop.Outlook.NameSpace mapiNameSpace = myApp.GetNamespace("MAPI");
and getting the following exception at the second line (GetNamespace):
No such interface supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE)).
I am using .NET Framework 4 and Outlook 2013.
Is this the only solution? I am trying to avoid making any changes to the registry.
Edit: this didn't fix the problem.
Edit: if the referenced library is Office 15 and the installed library is 10 or 12, would it work?
I've been stumped by this problem for days. This worked for me:
I just realized that my Outlook 2013 is 64-bit... and my C# app had "Any CPU" as the platform target in Project Properties -> Build, with a check-mark in "Prefer 32-bit".
I changed the Platform target to x64 and it worked!
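If you would rather make the same change outside the IDE, the equivalent edit in the project file is roughly the following sketch (PlatformTarget and Prefer32Bit are standard MSBuild property names; the rest of the .csproj is omitted):
<PropertyGroup>
  <PlatformTarget>x64</PlatformTarget>
  <Prefer32Bit>false</Prefer32Bit>
</PropertyGroup>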
The exception looks quite obvious, this should work
var myApp = new Microsoft.Office.Interop.Outlook.Application();
you just can't get a cast exception with that line above.
This exception usually happens when you had two versions of Office installed and then uninstalled one. Run a repair installation for your still installed Office, this should fix the registry keys.
have you tried this?
Microsoft.Office.Interop.Outlook.Application myApp = new Microsoft.Office.Interop.Outlook.Application();
Microsoft.Office.Interop.Outlook.NameSpace mapiNameSpace = myApp.GetNamespace("MAPI");
The 9.5 fix didn't work for me, as there was no excessive 9.5 field.
In my case, the problem was that when I downgraded to Outlook 2010 I modified the installation location (i.e. not the default ProgramFiles folder). However, Windows didn't update the value in 'HKEY_CLASSES_ROOT\TypeLib\{00062FFF-0000-0000-C000-000000000046}\9.4\0\win64' accordingly.
After manually updating the value and pointing it to the right location, the problem was resolved.
Error: "Unable to cast COM object of type 'Microsoft.Office.Interop , InvalidCastException: Unable to cast COM object of type 'System.__ComObject' to interface type 'Microsoft.Office.Interop.Outlook.MAPIFolder'. This operation� However, i tried this and the exception is thrown once again var myApp = new Microsoft.Office.Interop.Outlook.Application(); var mapiNameSpace = myApp.GetNamespace("MAPI"); – bjan Aug 26 '13 at 11:26
"System.InvalidCastException" error or emails get stuck in the Outbox, Interop.Outlook._Application.' This operation failed because the QueryInterface call on the COM component for the interface with IID '{00063001-0000-0000- C000� Outlook Error: Could not load file or assembly 'Microsoft.Office.Interop.Outlook, Version=12.0.0.0 Outlook Error: HRESULT: 0x80029C4A (TYPE_E_CANTLOADLIBRARY)) CRM for Outlook Add-In keeps disabling
Error 'Library not registered (Exception from HRESULT , Interop.Outlook.ApplicationClass' to interface type 'Microsoft.Office.Interop. Outlook._Application'. (Exception from HRESULT: 0x80029C4A� I simply cannot get C# in VS 2017 to work with Outlook 2016 Interop. I have tried checking the registry, repairing Office, etc. I tried it on my assistants computer, and it doesn't work either.
Outlook issue, Interop.Outlook 15.0.0.0. When running in Debug mode everything works fine but crashes in Release mode with Exception: Unable to cast COM object of type� Hi! I am having a problem with the code in my Revit Add-in. In the code I use the Microsoft.Office.Interop.Word to make some changes in a Word document and then save it. All this works perfectly on my machine and on other 4 test machines. For some reasons one of the users is getting an error: With R
- trying stackoverflow.com/questions/4656360/…
- Wow, this fixed my issue after upgrading Office. The error message isn't really all that clear, weird thing is that I was using 64bit before and it was working fine. Thanks!
- The exception is being thrown while getting the namespace, i.e. myApp.GetNamespace. However, I tried this and the exception is thrown once again: var myApp = new Microsoft.Office.Interop.Outlook.Application(); var mapiNameSpace = myApp.GetNamespace("MAPI");
- there is no question. I was offering what may be a solution to the problem :) | https://thetopsites.net/article/55444425.shtml | CC-MAIN-2021-31 | en | refinedweb |
How to create a custom class template for Xml Serializable classes?
Ok, so you don’t always want the default class template for every type of class. I have to create a bunch of classes marked [Serializable], and it would be great if the class template assumed this. However, I don’t want my default class template to assume this.
So here is what I did broken down into four simple steps.
- Open or create a c# project.
- Create a class file.
- Add the text and the variables to be replaced.
- Export the item as a template.
Step 1 – Open or create a c# project.
Ok, so any project will do. I used an existing project, but you can create a new one if you want. Any C# project should allow this to happen.
Step 2 – Create a class file.
In one of my C# projects in Visual Studio, I created a new class called XmlClass.cs.
Step 3 – Add the text and the variables to be replaced
I put the following text into my new class:
using System;
using System.Collections.Generic;
using System.Xml.Serialization;

namespace $rootnamespace$
{
    [Serializable]
    public class $safeitemrootname$
    {
        #region Member Variables
        #endregion

        #region Constructors
        /*
         * The default constructor
         */
        public $safeitemrootname$()
        {
        }
        #endregion

        #region Properties
        #endregion

        #region Functions
        #endregion

        #region Enums
        #endregion
    }
}
Step 4 – Export the item as a template
- In Visual Studio, choose File | Export Template. This starts a wizard that is extremely easy to follow. Note: If you have unsaved files in your project, you will be prompted to save them.
- Choose Item template, select your project, and click Next.
- In the next screen there was a tree view of check boxes for all my objects. I checked the box next to my XmlClass.cs.
- In the next screen, provide references. Note: I added only System and System.Xml.
- In the next screen, provide a Template name and a Template description.
- Click finish.
You should now have the option under My Templates when you add a new item to your project.
This class will be useful and will save you and your team some typing when you are in the class creation phase of your project and you are creating all your Serializable classes.
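For example, adding a new item called PersonDto.cs from this template would give you roughly the following (the class name comes from the $safeitemrootname$ parameter; PersonDto and the namespace are just illustrative names):
using System;
using System.Collections.Generic;
using System.Xml.Serialization;

namespace MyCompany.MyProject
{
    [Serializable]
    public class PersonDto
    {
        #region Constructors
        /*
         * The default constructor
         */
        public PersonDto()
        {
        }
        #endregion

        // ... remaining regions as in the template ...
    }
}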
Scala tries to help the programmer in many ways. Verbose and repetitive code can often be syntactic-sugarized into concise statements. This is very convenient, but it encourages the programmer to produce write-only code. Let’s talk about types. In many contexts, Scala is very good at inferring types. Consider
val n = 1
This is the most obvious case: you don’t need to specify Int because the language manages to infer the type of the variable from the type of the expression to the right of the equal sign.
Compare with:
val x = foo(n).map( bar )
This is quite simple for the compiler, since it has all the involved types in hand and can quickly find the type that x must have for the statement to be well defined.
Unlike the compiler, the human reader has to go and look up foo and bar to understand what the type of x is.
Scala may infer types in even more complex cases, such as the function return type. E.g.:
def twice( x: Int ) = 2*x
The language is perfectly capable of decorating the definition of twice with the Int return type by looking at the function body. Even when things get more complicated, the language manages to find its way:
def foo(n: Int) = {
  some code
  if( condition ) {
    other code
    <here a return value>
  } else {
    obscure code
    yy match {
      case A => codecode
                <another return value>
      case B => morecodecode
                <yet another return value>
    }
  }
}
Without complaining, the language infers the return type for foo. If all the return expressions have the same type, then that is the return type; otherwise, the language walks up the inheritance trees of the different types looking for a common ancestor.
You get the picture. But consider the example below:
def innocentLookingFunction( n: Int ) = {
  if( n % 3 == 0 ) {
    List( n )
  } else {
    Array( n )
  }
}
What is the return type? Guess what… java.io.Serializable. This is because Array is a Java type, not a Scala-specific type, and Serializable is the first common ancestor of List and Array. Funnier:
def anotherInnocentLookingFunction( n: Int ) = {
  if( n % 3 == 0 ) {
    List( n )
  } else {
    Map( n -> n.toString )
  }
}
Has the following return type:
scala.collection.immutable.Iterable[Any] with PartialFunction[Int,Any]{def seq: scala.collection.immutable.Iterable[Any] with PartialFunction[Int,Any]{def seq: scala.collection.immutable.Iterable[Any] with PartialFunction[Int,Any]}}… enjoy.
Since programmers are usually unreliable, especially when bound to work long and stressful hours, mistakes happen, and an oversight could turn into really odd behavior or a compilation failure at a completely different place.
Summing it up – there are basically two problems with letting the compiler infer types. First, the code may be obfuscated if the type cannot be easily figured out by the human reader. Second, the inferred type might not be what the writer intended, leading to subtle bugs.
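A minimal sketch of the fix: with an explicit return type, the stray Array branch is caught at compile time instead of silently widening the result to java.io.Serializable.
def innocentLookingFunction(n: Int): List[Int] = {
  if (n % 3 == 0) {
    List(n)
  } else {
    Array(n).toList   // without .toList the compiler reports: found Array[Int], required List[Int]
  }
}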
Lesson learned: always declare the type of objects unless the type is obvious by looking at a single line of code. By obvious I mean obvious in the expression, not in naming.
Build a rails app
In this post, I’m going to break down how I created my rails app.
- First and foremost, you need to really break down your app: what you want it to look like, and what you want your users' experience to be. I decided to go with the wineries of California since I live in the area. My users can post a winery that they visited, and other users can comment on these wineries. The wineries are also associated with the region they belong to. I think it could be a great way to discover new wineries.
- Using the resource generator gives you even more to start with — running rails generate resource User name username email magically (not actually magically) creates:
1) a migration in db/migrate/20190714193807_create_users.rb
class CreateUsers < ActiveRecord::Migration[5.2]
  def change
    create_table :users do |t|
      t.string :name
      t.string :username
      t.string :email
      t.timestamps
    end
  end
end
2) the User model in app/models/user.rb
class User < ApplicationRecord
end
3) a User controller in app/controllers/users_controller.rb
class UsersController < ApplicationController
end
4) opens up all of the routes in config/routes.rb
Rails.application.routes.draw do
  resources :users
end
Top-tip: add '--no-test-framework' in order not to create tests.
- Thinking about your models and their relationships is really important. I decided to go as follows (the corresponding model files are sketched after this list):
- Model Winery
- belongs_to :user
- belongs_to :region
- has_many :comments
- has_many :users, through: :comments
- Model User
- has_many :wineries
- has_many :comments
- has_many :commented_wineries, through: :comments, source: :winery
- has_many :regions, through: :wineries
- has_secure_password
- Model Comment
- belongs_to :user
- belongs_to :winery
- Model Region
- has_many :wineries
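Here is a sketch of what those model files look like with the associations above (validations are added in a later step):
# app/models/winery.rb
class Winery < ApplicationRecord
  belongs_to :user
  belongs_to :region
  has_many :comments
  has_many :users, through: :comments
end

# app/models/user.rb
class User < ApplicationRecord
  has_many :wineries
  has_many :comments
  has_many :commented_wineries, through: :comments, source: :winery
  has_many :regions, through: :wineries
  has_secure_password
end

# app/models/comment.rb
class Comment < ApplicationRecord
  belongs_to :user
  belongs_to :winery
end

# app/models/region.rb
class Region < ApplicationRecord
  has_many :wineries
end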
4. Once all your models are defined and your migrations are up to date, you can test it out in your rails console by creating a new winery, for example, and making sure it is associated with a user.
Top-tip: You can type ‘rails c -s’ if you do not want to create data that will stay.
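For example, a quick sandbox session could look like this (all attribute values made up):
user   = User.create(name: "Ada", username: "ada", email: "ada@example.com", password: "secret")
region = Region.create(name: "Napa Valley")
winery = user.wineries.create(name: "Sample Cellars", region: region)
winery.user   # => the Ada user
winery.region # => the Napa Valley region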
5. Also an important step is to set up your validations in your models. For example:
- Model User
...
validates :name, presence: true, uniqueness: true
validates :email, presence: true
6. Your routes need to be set in config/routes.rb. I decided to go for 3 nested routes, like so:
resources :wineries do
resources :comments, only: [:new, :create, :index]
end
resources :comments
resources :users do
resources :wineries, only: [:new, :create, :index]
end
resources :regions do
resources :wineries, only: [:new, :create, :index, :show]
end
7. Your controllers will follow a pattern like the following:
class WineriesController < ApplicationController

  def new
    @winery = Winery.new
  end

  def create
    @winery = Winery.new(winery_params)
    if @winery.save
      redirect_to wineries_path(@region)
    else
      render :new
    end
  end

  def index
    @wineries = Winery.all
  end

  def show
    @winery = Winery.find_by(id: params[:id])
  end

  def edit
    @winery = Winery.find_by(id: params[:id])
  end

  def update
    @winery = Winery.find_by(id: params[:id])
    if @winery.update(winery_params)
      redirect_to winery_path(@winery)
    else
      render :edit
    end
  end

  def destroy
    @winery.destroy
    flash[:notice] = "#{@winery.name} was deleted"
    redirect_to wineries_path
  end

  private

  def winery_params
    params.require(:winery).permit(:name, :website, :phone, :description, :region_id, region_attributes: [:name])
  end
end
Note that they will need to be modified depending on your associations. Also, you can DRY up your code by using Rails helper methods like so:
class WineriesController < ApplicationController
  before_action :find_winery, only: [:show, :edit, :update, :destroy]

  ...

  private

  def find_winery
    @winery = Winery.find_by(id: params[:id])
  end
end
8. In your view templates, Rails provides form helpers that give us shortcuts to make our lives as developers a tad easier.
- form_tag: This generates an HTML form for us and lets you specify options you want for your form.
Example of form_tag :
<%= form_tag url_for(action: ‘create’), method: “post” do %>
<%= label_tag ‘Title’ %>
<%= text_field_tag ‘title’, @post.title %>
<%= label_tag ‘Body’ %>
<%= text_area_tag ‘body’, @post.body %>
<%= label_tag ‘Author’ %>
<%= text_field_tag ‘author’, @post.author %>
<%= submit_tag “Create Post” %>
<% end %>
- form_for: the form_for method follows RESTful conventions on its own. It accepts an instance of the model as an argument and makes assumptions for you (which is why it is often preferred over form_tag). form_for expects the argument you're passing in to be an Active Record object. This will easily make a create or edit form.
Example of form_for :
<%= form_for @winery do |f| %>
<%= f.label :name %>
<%= f.text_field :name %>
<%= f.submit %>
<% end %>
We use form_for when we have a specific model, and form_tag when we don't have a model for it.
9. Partials: Rails allows us to create partials in order to keep our code DRY. The best example is in the view templates — for example, my wineries/new and /edit forms. Both are similar, so I created a partial wineries/_form.
<%= form_for winery do |f| %>
  ...
  <%= f.submit class: 'red darken-3 btn btn-default' %>
<% end %>
And in my new and edit:
<%= render partial: "form", locals: { winery: @winery } %>
Conclusion:
What I wish I had done was take more time to define my models and associations. This is the most important part of the project. I didn't have a clear idea of what my final result should look like, so I struggled with my relations and at some point got lost and had to start my project from scratch. Planning for this kind of project is key!
“interfaces are used as” Code Answer
Interfaces are used to achieve total abstraction. Since Java does not support multiple inheritance for classes, a class can achieve a form of multiple inheritance by implementing several interfaces. Interfaces are also used to achieve loose coupling.
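A minimal sketch illustrating both points (all names invented): a class picks up two behaviour contracts by implementing two interfaces, and a caller stays loosely coupled by depending only on one of them:
interface Printable { void print(); }
interface Savable   { void save(); }

// Report satisfies both contracts -- the closest Java gets to multiple inheritance.
class Report implements Printable, Savable {
    public void print() { System.out.println("printing report"); }
    public void save()  { System.out.println("saving report"); }
}

class Printer {
    // Loose coupling: Printer depends only on the Printable abstraction,
    // never on the concrete Report class.
    void printAll(Printable p) { p.print(); }
}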
instance methods
disadvantages of robot class in selenium
Duplicate interface definition for class 'FBSDKAccessToken'
tutorialspoint apex
package com.couchbase.client.core.deps.com.fasterxml.jackson.annotation does not exist
public static void setPointSize
CompileOptions.annotationProcessorPath` property instead.
reactivecommand iobservable
read rdata and assign different name to objects
what is defaultWriteObject
Create classes using xsd tool online
import a class dynamically
Invalid "key" attribute value in lwc
mcv application variable race condition
Create a public Action field named DisplayResult that takes an int parameter and a Func<int, int> parameter. Initialize it with an anonymous method delegate that takes an int result parameter, and a Func<int, int> named operation.
XmlBeanDefinitionStoreException: Line 8 in XML document from class path resource [spring-mvc.xml] is invalid; nested exception is org.xml.sax.SAXParseException
view matches in opemmvg
duplicate data found in servlet
xml documentation generic types
property.resources find
r typeof class mode
entity framework generate script
@Model.First() is not working
find a class name that maches a pattern
How To Call different Namespace Class method From Different Namespace in Iris + Intersystems
como passar um enum para auraenabled
How do you define a copy constructor or assignment for childerns of abstruct class
var viewer = el("#viewer")
Converting/Parsing an Enumeration to a String More Generically
how to displau authentification failure message thymeleaf
releaseObject
concat variables in logic app to create blob path
The type com.querydsl.core.types.Predicate cannot be resolved. It is indirectly referenced from required .class
serbian to english
google traductor
google t
trad
traduction espagnol
traduction
spanish english
english to german
traductor español ingles
english to latin
traductor
google tradutor
translat
traducteur
google trad
googl trad
google translate to japanese
eng to hin
teranslate
google calculator
google trasnslate
google translate english to chinese
google translator
english to polish
latin to english
kworker
oversetter
translate néerlandais
japanses to english
spanish to englihs
googleübersetzer
rotar in english
translate engels ne
google translate english to german
spanish to english translator
japanese in english
google traduction
english to japanese
google transalte
german english
traductor google
japanese translator
malay to english
pashto to urdu
Minecraf
activate virtualenv windows
word reference
traversy media
border for container in flutter
flutter navigate to new screen
host file windows 10
area of parallelogram
alter user password postgres
internet explorer
m to cm
flutter setup macos
samsung sam
how to open anaconda in terminal
get windows product key cmd
delete docker containers
textfield border flutter
npm start reset cache
vi line number
oop principles
visual studio comment out block
Can't bind to 'ngForOf' since it isn't a known property of 'div'.
drawer images boostrap
fontawesome cds
flutter textfield cursor color
default page +load +onstartup +springboot
Permissions sur le fichier de configuration incorrectes, il ne doit pas être en écriture pour tout le monde !
how to use fread to move through a file
self driving car
little snitch 5.0.4 download mac
w3schools bootstrap 5
what are http methods
assembly code hello world mac
eachfeature leaflet
the first printed book
is natural and inner join same
what is degree and cardinality in database
update multiple rows dataframe
ertugrul
x plane multiplayer
8002401381 who
docker compose command build
vscode fedora
xml agregar imagen
pulp fiction
htdocs folder
findall(sort sort) example
material ui align button right
flutter convert double to int
android layout background image xamarin
awesome-phonenumber
can i have more than one app engine in one project
compare two arrays
vs code disable auto file save
vs code search entire project
Jest did not exit one second after the test run has completed.
const arr = new Uint8Array(fileReader.result).subarray(0, 4);
error Cannot set property 'visibility' of undefined
how to uncomment in vscode block
bootstrap 4 login free
vba next for loop early
nitrous acid
kafkacat topic info
an extension to share text in chrome
cookay clicker
route guards flutter
d3 scalemagma()
if sma_20 > sma_50: if context.aapl not in open_orders order_target_percent(context.aapl,1.0)#order_target_percent(card,% of profoil)'''
avl tree gfg
editable div
reply to a message discord.py
contact form 7 in page template
unzip specific folder linux
http response status codes
latex chapter name and subsection overlap headers
webtools
curl data from file
simple linear regression machine learning
label.settext pyqt5
getx routing
world of warcraft
perfect chords + lyrics
graphql not null
jackson annotations serialization date
Shirley Temple
61 packages are looking for funding run `npm fund` for details
have_posts()
docker unable to push repo access denied
svelte autocomplete
WAP to generate marksheet of a student by taking marks from the user. (individual grade and percentage for each subject, also find overall grade and percentage)
Git - Can we recover deleted commits? [duplicate]
ssl certificate error handling in selenium
update flutter version using zip file
topazbarziv
rvm stderr: bash: /usr/local/rvm/bin/rvm: No such file or directory
Pendimic
Data de Efetivação
making link in Jupyter
curl output to a file
how to convert 30fps to 60fps using ffmpeg
get distance between coords fivem
how is the best cricket player in the world
composer switch to version 1
Create an index in Mongodb
compound interest formula kotlin
strcmp code
forme juridique personne morale de droit public
is not invokable.
aws_launch_configuration volume tag
webpack bootstrap navbar eroor
invoke method from another method in same object
permission denied: bin/cake bake
activate environment in virtualenvwrapper
functions
brython svg
android studio make button round
get keyword in javascript
retrieve data from firebase realtime database flutter
bot idealista
Create Custom Post Type WordPress
vs code open folder from terminal in same window
wordpress theme directory uri
rust seed state
create new project symfony
change app name in flutter
pymongo insert if not exists
linking to aws bucket endpoint from salesforce
dataprovider vs factory
countvectorizer in nlp
reset database active record
remove role spatie
how to add woocommerce cart counter
replace empty numbers in dataframe
site:uk.popularphotolook.com inurl:"03163699930"
roflcopter ascii
date format golang
pollythistic
equals stack overflow
border radius to card flutter dart
spanish to englihs
csv colab
element wise mean and std
signal smoothing in matlab
android bottomsheet fragment curved
phpstorm jump to line
furry hentai
babel not found
table of contents lytex
cleanmypc activation code
useful apt commands
boostrap vue open modal in child component
typeorm pagination
how to update pip in anaconda prompt
chmod directory and subdirectories
update widget kivy
update pip module
xdebug: [step debug] could not connect to debugging client. tried: localhost:9003 (through xdebug.client_host/xdebug.client_port) :-(
merge sort array
farthest place from hungary
Mississauga
How to play a notification sound on websites?
modal overlay appears mat-menu item
variadic macros
How to get a Docker container's IP address from the host
wpf class DelegateCommand : ICommand
Virtuoso_Gravy
how do polar bears adapt
vscode shortcut toggle sidebar visibility
kibana elasticsearch url environment variable
what is a radioactive isotope bbc bitesize
end method
how to save file to /var/www/html/
Program to find occurrence of a word in file
ionic 5 change app name
amp opt in for features
Bolsonaro presidente
Import .tar.gz gzipped mysql dumps in one line
ByRef argument type mismatch
compression
fedora docker
str object has no attribute len
why do diodes fail
cisco encryption command
add object to object dynamically
clone an environment conda
ionic toolbar options elipsis
logging in api
unable to resolve 'react-native-gesture-handler'
Syntax to Create User Defined Functions in SQL
begin proc near means
magento 2 how to update custom product attribute programatically
translateY
ue5 what does specular do
automatic semicolon insertion
cell between quotation marks google spreadsheet
Django url with primary key
object randome movement with navmesh
Radius and Shadow to a UIView with Interface Builder
bootstrap 4 text-uppercase
Amanda uses "Influence consideration" as a marketing objective for her Google Display Ads campaign.
jep1x
how to pluck email from users records
brackeys character controller code
scheduleatfixedrate
google authorization error 403 disallowed_useragent cordova
jae fhi wahfuiaweh
clean build in android studio
how to remove starting padding of checkbox in android
lexicographical permutation nayuki.io
gatsby graphql queries grouping vs aliases
chartcss
how to move a row in word
apache don't list directory contents
berkshire hathaway class b stock
adobe
mdi icon size and color
551a6ca27feab07d53bba655094291891ec3310f40cd455157ae14633be8b0f5
modulo golang
autoplay owl carousel
index in ng
. | https://www.codegrepper.com/code-examples/whatever/interfaces+are+used+as | CC-MAIN-2021-31 | en | refinedweb |
disappeared, and the storyteller has lost his audience. Although there is a substitute chutney stirrer on the premises, she does not suffice. Saleem Sinai is confused and angry. He is also a model of ambivalence: a Muslim who lauds himself for being steeped in Hindu legend, he employs a geometric metaphor to describe the reigning ambivalence he has lost. He is the uppermost point of an isosceles triangle: supported by two deities, the "wild god of memory" and the "lotus-goddess of the present." He has become a hybrid, a third term, an invention out of conflict.
Saleem cites the summer of 1956, when Egyptian president Gamal Nasser (1918–70) "sank ships at Suez" and Jamila Singer, inexplicably, began to set fire to shoes, anyone and everyone's shoes in the Methwold Estates. Her strange habit recalls an earlier occasion in which another monkey had been implicated in the smell of burning leather: the fire at the godown that changed the course of the Sinai family's history.
Saleem takes up the history of his sister's emotions, a girl, as he describes her, hungry for love yet governed by a fear of getting what she wanted: abnormally worried that anyone loving her was playing a trick.
Saleem also has recognized his own particular reaction to his place in the family. Living in a world of exceptionally high expectations, he grew up in terror of not meaning, of not succeeding. And he was aware of how accidents defined him. After light-hearted advice to his critically ill grandmother, he was credited with genius, with saving her life. As a result, he came to define greatness not as intention or wanting or learning how but as a gift from God. He saw as his "only hope" the mantle of greatness miraculously falling on his shoulders.
Saleem used the washing chest in Amina's bathroom as refuge from the pressure of family expectations. Hiding among her soiled linens, he came to know his mother's secret. Amina had been receiving telephone calls to which she listened intently and then declared them wrong numbers. Saleem witnessed Amina's deception, when, hidden in his refuge in the laundry chest, he observed Amina in the privacy of her bathroom. The calls were from her long-lost husband, Nadir Khan. Behind the locked door of the bathroom, Amina wept and caressed herself and called his name. When Saleem was discovered, he was punished with silence, in which he discovered his true purpose in life: he found he could receive messages, voices in his head that he initially imagined were from an archangel. He believed that revealing his gift would bring approval from his parents and reinstate his primacy, or importance, in the family. Instead, he was punished, sentenced to silence.
When Amina found sympathy for her punished child, he was past reveling in his disclosure and never told her about his gift. Amina, on the night in which Saleem is discovered in the washing chest, dreamed of Ramram Seth and the no longer puzzling prophesy of Saleem's birth: "Washing will hide him ... voices will guide him!"
The comic scene that opens Book 2 is as much about writing and storytelling, memory and interpretation, as it is about the Sinai family history and the history of India. The commentaries about language begin at the level of the word. For example, it is difficult to imagine how to add up the parallels of monkey and leather and burning that connect the Brass Monkey's bad habit, her need to burn shoes, to the scene that initiates her father's change of fortune: the allusion to the stone-pitching monkeys of the Old Fort who discard the bribe money and initiate the cause for the warehouse burning.
It is the diction rather than a bringing together of ideas that establishes the parallels. That is, it is difficult to compare the Brass Monkey's motives with those of the terrorists who burn the warehouse, but readers do see repetition in the individual words. It is only the smell of burning leather that joins the two scenes. Thus, readers have a statement about sensory experience that initiates memory and stokes the imagination. This is the formula for the novel: childhood experience, sensation rather than intellect, initiations in which the observer must make up everything he or she doesn't know. This is Saleem's method, a making sense from sensation, from childhood observation in tandem with India's coming-of-age practices and choices. As to the Brass Monkey and her acts of terrorism, it seems a stretch to link her to religious warfare—unless that comes later.
If some writers' talents lie in authenticity and others' in the enchantments of narrative, part of the charm of this text is the way in which it pushes at the reader's imagination. From early on, Midnight's Children was seen as the first postcolonial novel, a work that introduced the Western reader to the effects of colonialism from the point of view of the colonized. Part of the excitement and variety of this text, in the odd constellations of images representing the play of the imagination and the work of time and place on the individual imagination, is that it operates from the point of view of the colonized rather than from positions and voices of power. To the uninitiated Western reader as well as to the citizen of India whose life changed with partition, this text operates where cracks or fissures in logic and ordinary language demonstrate the exciting range of the individual imagination. The reader, like the author and like the not-so-innocent 31-year-old Saleem at age nine, recovers the fully playful imagination of childlike enchantment with the world and the need to make up everything he doesn't know or hasn't yet formally learned.
Embodied sensation is everything, and understanding begins in the interpretation of sensation. This is a text of cracks, fissures, holes, oozing snot noses, aromatic chutneys, soiled sheets, and bodily fluids, the world of the infant making sense through sensation. The smell of leather burning, to return to the first example here, is part of the sensory world of the text, part of memory that demands the interpretation of the novice.
The demand for silence as punishment for Saleem's transgression, then, is entirely appropriate. As Saleem well knows, one justifies one's existence, one means something so long as one keeps talking, inventing and reinventing the self. He has, however, witnessed the unspeakable, his mother's nakedness and her sexuality. And not surprisingly, what he has seen has initiated his sexuality. The scene of his departure from the laundry chest, given the diction of the scene, is a moment of birth. The pajama cord has been lodged in his nose and separated as he tumbles out, complete with cord and caul. He has left the security of the washing chest, what he called a "hole in the world," a place where the cares of the world are absent. That he cannot return (to the womb) makes a boisterous joke about the Freudian diagnosis of an unresolved Oedipal complex in which an attraction to the maternal body would explain his impotence. It is the sensory level of the text, expressed word by word, the sensational reading in this chapter examining the birth of sensation.
The continuing story of the rebirth, actually the coming of age of the Indian nation, coincides with Saleem's rebirth: allusions to the strains and achievements of the Indian nation's coming of age in this chapter are noted in a few words: "Nasser sank ships at Suez" and the "country's Five Year Plan." The former was the instance in which Jawaharlal Nehru stepped onto the world stage when his leadership played an important role in the Suez crisis. In calling on a collaboration of nations, Nehru birthed a new political entity, a coalition of unaffiliated nations. In the latter case, Nehru's attempt in generating two Five Year Plans to codify and invent a material future in practical concerns for India was an important beginning: in the first primarily an agricultural plan, and in the second a matter of schools and industrial development. Word by word, he issued the proposals, the building blocks of a new nation. | https://www.coursehero.com/lit/Midnights-Children/book-2-accident-in-a-washing-chest-summary/ | CC-MAIN-2021-31 | en | refinedweb |
Frequent readers of this blog know that I do not like printf (see “Why I don’t like printf()“), because the standard printf() adds a lot of overhead and only causes troubles. But like small kids, engineers somehow get attracted by troubles ;-). Yes, printf() and especially sprintf() are handy for quick and dirty coding. The good news is that I have added a lightweight printf() and sprintf() implementation to my set of components: the XFormat component. And best of all: it supports floating point formatting :-).
XFormat Processor Expert Component
The sources of XFormat have been provided to me by Mario Viara, who contacted me by email:
“Hello Erich,
My name is Mario Viara and I’m and “old” embedded software engineer, Il write for two different things the first is very simple, i know you do not like printf in your project but during debug and to log information it is very usefull we have a very tiny implementation that you can find attached to this mail and if you want you can add it to your wonderful processor expert library.”
This morning I finally turned this into a Processor Expert component. And I have to say: that’s indeed a simple and cool printf() and sprintf() implementation. All I did was some re-formatting, added the necessary interfaces and voilà: a new component is born 🙂
Component Methods and Properties
The component is very simple, and exposes three methods:
- xvformat(): this is the heart of the module, doing all the formatting. This one uses a pointer to the open argument list.
- xformat(): high level interface to xvformat() with an open argument list.
- xsprintf(): sprintf() implementation which uses xformat() and uses a buffer for the result string
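For orientation only, the interfaces look roughly like the sketch below. This is not copied from the generated sources: it is inferred from the usage examples in this post and from a reader-contributed version further down in the comments, and the XF1_ prefix simply reflects the default component instance name.

unsigned int XF1_xvformat(void (*outchar)(void *arg, char c), void *arg, const char *fmt, va_list args);
unsigned int XF1_xformat(void (*outchar)(void *arg, char c), void *arg, const char *fmt, ...);
unsigned int XF1_xsprintf(char *buf, const char *fmt, ...);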
There is only one setting: whether floating point has to be supported or not. This setting creates a define which can be checked in the application code.
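For example, code that has to compile with either setting can test the generated define with the preprocessor (XF1_XCFG_FORMAT_FLOAT is the define used by the component in this post; buf is assumed to be a char array declared by the application):

#if XF1_XCFG_FORMAT_FLOAT
  XF1_xsprintf(buf, "Temperature %6.2f C\r\n", 23.5);
#else
  XF1_xsprintf(buf, "Temperature %d C\r\n", 23); /* floating point formatting disabled */
#endif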
sprintf() Usage
Using it to do a printf() into a provided buffer (sprintf()) is very easy, e.g. with

char buf[64];
XF1_xsprintf(buf, "Hex %x %X %lX\r\n",0x1234,0xf0ad,0xf2345678L);
which will store the following string into buf:
Hex 1234 F0AD F2345678
This string then can be printed or used as needed.
printf() Usage
Using printf() to, for example, print the string to a console requires that a ‘printer’ function is provided. This ‘printer’ function (actually a function pointer) needs to be provided by the application.
First, I define my function which shall output one character at a time. It uses as first argument a void parameter (arg) with which I can pass an extra argument. My example implementation below passes the command line Shell standard I/O handler I’m using in my projects (see “A Shell for the Freedom KL25Z Board“). That handler itself is a function pointer:
static void MyPutChar(void *arg, char c) { CLS1_StdIO_OutErr_FctType fct = arg; fct(c); }
Next, I implement my printf() function I can use in my application: it uses the same interface as the normal printf(). It unpacks the variable argument list and passes it to the xvformat() function, together with the MyPutChar() function:

unsigned int MyXprintf(const char *fmt, ...) {
  va_list args;
  unsigned int count;

  va_start(args,fmt);
  count = XF1_xvformat(MyPutChar, CLS1_GetStdio()->stdOut, fmt, args);
  va_end(args);
  return count;
}
Then I can call it in my application like printf():
(void)MyXprintf("Hello %s\r\n","World");
The component also creates a define so the application knows if floating point is enabled or not. Below is a test program which uses some of the formatting strings:
static void MyPutChar(void *arg, char c) {
  CLS1_StdIO_OutErr_FctType fct = arg;
  fct(c);
}

unsigned int MyXprintf(const char *fmt, ...) {
  va_list args;
  unsigned int count;

  va_start(args,fmt);
  count = XF1_xvformat(MyPutChar, CLS1_GetStdio()->stdOut, fmt, args);
  va_end(args);
  return count;
}

static void TestXprintf(void) {
  (void)MyXprintf("Hello world\r\n");
  (void)MyXprintf("Hello %s\r\n","World");
  (void)MyXprintf("Not valid type %q\r\n");
  (void)MyXprintf("integer %05d %d %d\r\n",-7,7,-7);
  (void)MyXprintf("Unsigned %u %lu\r\n",123,123Lu);
  (void)MyXprintf("Octal %o %lo\r\n",123,123456L);
  (void)MyXprintf("Hex %x %X %lX\r\n",0x1234,0xf0ad,0xf2345678L);
#if XF1_XCFG_FORMAT_FLOAT
  (void)MyXprintf("Floating %6.2f\r\n",22.0/7.0);
  (void)MyXprintf("Floating %6.2f\r\n",-22.0/7.0);
  (void)MyXprintf("Floating %6.1f %6.2f\r\n",3.999,-3.999);
  (void)MyXprintf("Floating %6.1f %6.0f\r\n",3.999,-3.999);
#endif
}

static void TestXsprintf(void) {
  char buf[64];

  XF1_xsprintf(buf, "Hello world\r\n");
  CLS1_SendStr((unsigned char*)buf, CLS1_GetStdio()->stdOut);
  XF1_xsprintf(buf, "Hello %s\r\n","World");
  CLS1_SendStr((unsigned char*)buf, CLS1_GetStdio()->stdOut);
  XF1_xsprintf(buf, "Not valid type %q\r\n");
  CLS1_SendStr((unsigned char*)buf, CLS1_GetStdio()->stdOut);
  XF1_xsprintf(buf, "integer %05d %d %d\r\n",-7,7,-7);
  CLS1_SendStr((unsigned char*)buf, CLS1_GetStdio()->stdOut);
  XF1_xsprintf(buf, "Unsigned %u %lu\r\n",123,123Lu);
  CLS1_SendStr((unsigned char*)buf, CLS1_GetStdio()->stdOut);
  XF1_xsprintf(buf, "Octal %o %lo\r\n",123,123456L);
  CLS1_SendStr((unsigned char*)buf, CLS1_GetStdio()->stdOut);
  XF1_xsprintf(buf, "Hex %x %X %lX\r\n",0x1234,0xf0ad,0xf2345678L);
  CLS1_SendStr((unsigned char*)buf, CLS1_GetStdio()->stdOut);
#if XF1_XCFG_FORMAT_FLOAT
  XF1_xsprintf(buf, "Floating %6.2f\r\n",22.0/7.0);
  CLS1_SendStr((unsigned char*)buf, CLS1_GetStdio()->stdOut);
  XF1_xsprintf(buf, "Floating %6.2f\r\n",-22.0/7.0);
  CLS1_SendStr((unsigned char*)buf, CLS1_GetStdio()->stdOut);
  XF1_xsprintf(buf, "Floating %6.1f %6.2f\r\n",3.999,-3.999);
  CLS1_SendStr((unsigned char*)buf, CLS1_GetStdio()->stdOut);
  XF1_xsprintf(buf, "Floating %6.1f %6.0f\r\n",3.999,-3.999);
  CLS1_SendStr((unsigned char*)buf, CLS1_GetStdio()->stdOut);
#endif
}
Summary
With the XFormat Processor Expert component I have a small printf() like implementation which is a good alternative to the bloated (and fully featured) sprintf() and printf() implementation. It uses a flexible callback mechanism so I can write the text to any channel (bluetooth, wireless, disk, …) I want. The new component is now a member of my components on GitHub (see “Processor Expert Component *.PEupd Files on GitHub“).
Thanks Mario!
Happy Printing 🙂
PS: for ‘normal’ string manipulation functions, see the Utility Processor Expert component, available on the same GitHub repository.
Perhaps Mario could contribute here a comparison between XFormat and the printf() subset used by newlib nano, in terms of code size, speed, etc.
Thanks for posting this – looks useful. Formatted printing has always been the bane of embedded software.
Hi, the printf example you have done in another post is not working for me in KDS 1.1.1 and I can’t seem to find the XFormat component. Has it been removed? What would be another alternative?
Great info in your web! Thanks for sharing.
Thank you!
Hi Alex,
which project are you using? I verified that is working for me with KDS v1.1.1
Erich
Hi,
I have tried to do as you explained in another post to use printf in KDS 1.1.1 but the MCU is not sending anything to the OpenSDA MCU. Also, I can’t find that XFormat component in KDS 1.1.1.
Has it been removed?
What can I do in this case?
Great info in your web site.
Thank you.
Hello,
the XFormat component is not part of Kinetis Design Studio, it is one of the McuOnEclipse components. Have you installed them from SourceForge? See
I hope this helps,
Erich
Hi,
Thank you for your quick answer. Sorry I didn’t know that. I’ve downloaded them. Great stuff. I’m planning to use Kinetis as my new working platform since Microchip has code limited compiles and no ARM core, TI has very expensive software tools, Atmel has no multiplatform software, Cypress (Has awesome platform and software which is even better than Processor Expert IMHO) but they are expensive and no multiplatform and the chose was between Freescale and NXP. Both support mbed and has free multiplatform tools which is awesome, but I prefer the Processor Expert option and the bigger selection of MCUs.
By the way, I have a FRDM-KL25Z which when I changed the firmware of the OpenSDA MCU to use it with mbed I couldn’t get KDS to recognize it so I had to change it back. Is there anyway to be able to have support for boths at the same time? I guess the K64 board can do that, right? I have an LPCXpresso V2 board and does it too. Is there any possibility to achieve the same with the KL25Z?
I really appreciate people sharing info like you. As soon as I get confortable with Freescale I will make tutorials for my youtube channel: Twistx77.
Thank you once again. Have a nice weekend.
Alex.
Hi Alex,
The FRDM-K64F has a different firmware than the FRDM-K24F. What mbed needs is a firmware which recognizes .bin (binary) files. In my view mbed would have been much better off if they would have used s19 instead of bin files, then you would not see that problem. The FRDM-KL25Z (and many other boards) accept S19 files. A solution for you would be that you convert the mbed bin files into S19 files, and then you don’t need to switch.
Erich
LikeLiked by 1 person
Hi, I tested your example and it works perfectly. I don’t see any differences from mine. I will do it again from scratch to figure out what is the problem with mine.
I also found all the examples you have in your github for CW which is awesome.
Thank you and sorry for the inconvenience.
Hello Erich,
Great component, and thank you for making it available to the public.
I am not sure if you are aware of it or not but there is a bug on the Xformat component.
if you try to print negative floating point numbers such as:
-0.56, -0.01, -0.85, etc the xformat outputs the number but not the ‘-‘ sign.
if you then try to print -1.56, -2.01, -10.85, etc it works just find.
I have floating point enable in the component.
So any number with -0.XX or -.XX fails to print the ‘-‘ character.
Any ideas?
Thank you.
Hello,
many thanks for reporting this problem, and indeed: I can reproduce it 😦
I have commited a fix for it here (which is very easy):
It will be included in the next component release. Until then, you could do that one line change on your machine too. Let me know if you need any assistance.
Hello,
Oh, that is great!
Thank you for quick Fix.
Awesome!
Erich,
i am seeing this issue:
almost same xsprintf statements, but i swap the last 2 arguments. the last %d always results in ‘0’. i tried %x as well and same result.
XF1_xsprintf(debugMess,”index %d, addr 0x%x : time %d code %d\n”,i,NVM_ERROR_PAGE+i,time,code);
//SerialPuts(Type_00_Console,debugMess);
XF1_xsprintf(debugMess,”index %d, addr 0x%x : time %d code %d\n”,i,NVM_ERROR_PAGE+i,code,time);
//SerialPuts(Type_00_Console,debugMess);
debugMess after 1st XF1_xsprintf;
Name : debugMess
Details:”index 0, addr 0x8f000 : time 19 code 0\n\070942′ ‘0x175902f4’) :\0 CAN ID: 39170942\n\n\0000, com_sp 0.000, com_v -0.011, IDLE, voc_only: algo_init, — 0.000 (0.000), [-1:4] – 0.000 (0.000)\n”, ‘\0′
Default:0x1fffbc1c
Decimal:536853532
Hex:0x1fffbc1c
Binary:11111111111111011110000011100
Octal:03777736034
debugMess after 2nd XF1_xsprintf;
Name : debugMess
Details:”index 0, addr 0x8f000 : time 2147483647 code 0\n\00x175902f4’) :\0 CAN ID: 39170942\n\n\0000, com_sp 0.000, com_v -0.011, IDLE, voc_only: algo_init, — 0.000 (0.000), [-1:4] – 0.000 (0.000)\n”, ‘\0’
Default:0x1fffbc1c
Decimal:536853532
Hex:0x1fffbc1c
Binary:11111111111111011110000011100
Octal:03777736034
Hi David,
Few ideas and questions:
Could it be that you have a possible stack overflow? Can you try with an increased stack size? Because it requires around 100 bytes on the stack.
And did you include the header file “XF1.h” before you are using it? Because it uses an open argument list this is very important.
I hope this helps,
Erich
Pingback: McuOnEclipse Components: 30-Oct-2016 Release | MCU on Eclipse
Hi Erich,
Thank you for sharing XFormat, it’s very useful for me!
I just want to share my suggestion about this component:
(1) Add snprintf():
/*
** ===================================================================
** Method : XF_xsnprintf (component XFormat)
** Description :
** snprintf() like function
** Parameters :
** NAME – DESCRIPTION
** * buf – Pointer to buffer to be written
** max_len – Max output buffer size
** * fmt – Pointer to formatting string
** argList – Open Argument List
** Returns :
** — – number of characters written, negative for
** error case
** ===================================================================
*/
struct StrOutBuffer {
char * s;
unsigned space;
};
static void putCharIntoBufMaxLen(void *arg, char c) {
struct StrOutBuffer * buff = (struct StrOutBuffer *)arg;
if (buff->space > 0) {
buff->space--;
*(buff->s)++ = c;
}
}
static int xsnprintf(char *buf, unsigned max_len, const char *fmt, va_list args) {
int res;
struct StrOutBuffer out = { buf, max_len };
if (max_len 0) res = out.s - buf;
return res;
}
int XF_xsnprintf(char *buf, unsigned max_len, const char *fmt, …)
{
va_list args;
int res;
va_start(args,fmt);
res = xsnprintf(buf, max_len, fmt, args);
va_end(args);
return res;
}
(2) Add GCC extension:
/* GCC have printf type attribute check. */
#ifdef __GNUC__
#define PRINTF_ATTRIBUTE(a,b) __attribute__ ((__format__ (__printf__, a, b)))
#else
#define PRINTF_ATTRIBUTE(a,b)
#endif /* __GNUC__ */
unsigned XF_xformat(void (*outchar)(void *,char), void *arg, const char * fmt, …) PRINTF_ATTRIBUTE(3,4);
int XF_xsprintf(char *buf, const char *fmt, …) PRINTF_ATTRIBUTE(2,3);
int XF_xsnprintf(char *buf, unsigned max_len, const char *fmt, …) PRINTF_ATTRIBUTE(3,4);
Cheers,
-Engin
Hi Engin,
Could you send me that piece of source by email to my email address listed on, as posting sources like this on the web will loose things. For example your code does not include any of the parsing parts.
Thanks!
Erich
Hi Engin,
cool idea about the GNU function attributes ()!
I did not know this, always learning something new 🙂
Erich
It’s due to HTML tagging in the source code, sorry about that!
I have sent the code directly to your e-mail.
Thanks you!
Hi Engin,
thank you so much! I’m adding this to the component.
Erich
Pingback: Using FreeRTOS with newlib and newlib-nano | MCU on Eclipse
Pingback: Using the GNU Linker Script to know the FLASH and RAM Areas in the Application | MCU on Eclipse
So is this implementation thread-safe?
Yes, it is thread-safe.
Thank you, Erich! As always, you are a great help.
I believe that it is, based on the discussion that it uses the stack and quick inspection of the code, but would feel more confident if I could get confirmation.
Hi,
I’ve been having an issue formatting floating point numbers. Specifically, if I use %2.6f or %2.7f, I just get the integer part of the number, whereas using %f, I get the first six digits after the decimal point.
As an example, I tried the following code:
float Lat = 34.123456;
float Lon = 35.567891;
strlLength = XF1_xsprintf((char*) s,”GPS = %2.7f, %2.7f\r\n”,Lat,Lon);
The result is:
s = “GPS = 34, 35”
Any ideas?
Thanks!
LikeLiked by 1 person
Actually this is a bug (or missing feature) in the original XFormat code. I realize that the version on GitHub () has it fixed, so I have now updated (and fixed) the Processor Expert component for it. I applogize for that issue, and I have sent you the new component to your gmail address. Of course this fix will be in the next update from my side.
Thanks for reporting!
Erich
LikeLiked by 1 person
Pingback: McuOnEclipse Components: 1-Apr-2018 Release | MCU on Eclipse
Hi, Erich
Very interesting your alternative to printf. I’m looking for something like that. In my case, I need to print a string through UART into a Terminal. You call XF1_xvformat like that: XF1_xvformat(MyPutChar, CLS1_GetStdio()->stdOut, fmt, args);. I suppose that in MyPutChar function I should load my UART buffer with each character of my string. But what about CLS1_GetStdio()->stdOut function? What does this function do exactly and what function should I pass as the second parameter to send my data to UART transmit buffer instead of a variable?
Sorry for my question, I’m not experienced about Standard I/O functions.
Thanks and best regards!
Marco Coelho
Hi Marco,
xvformat() can be used with two parameters: the output function plus an optional argument with the output function. In my case I pass the stdio output channel (stdOut) as argument. It would be possible to do this without such an argument.
In your case using an UART: simply pass your UART putChar function as the first argument, and NULL if you don’t need the second one.
I hope this helps,
Erich
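As a sketch only: for an UART the wrapper could look like the code below, where UART_SendChar() is just a placeholder for whatever blocking send function your UART driver actually provides.

static void UartPutChar(void *arg, char c) {
  (void)arg; /* no extra argument needed here, so NULL is passed below */
  UART_SendChar(c); /* placeholder: replace with your UART driver's send function */
}

/* print directly to the UART, without an intermediate buffer: */
XF1_xformat(UartPutChar, NULL, "ADC value: %d\r\n", 1234);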
Hi Erich,
Interesting ‘bug’ came up last week. I got a Hard Fault because a xsprintf(buff,” %64s “, pchars);
incremented the pchars pointer until it was not pointing at memory.
pchars was supposed to point to a string (in flash), and this time it was pointing at erased flash memory, which is all xFF. but the xsprintf happily keeps copying pchars til it finds a x00 or generates a Hard Fault. So it also appears like it would have filled memory starting at buff with 0xff, and probably ran out of RAM at buff before it ran out of the FLASH that pchars was pointing to.
I rewrote with a memcpy – something it maybe should have been in the first place since it wasn’t really doing any formatting, but it surprised me that the xsprintf could run off the end of the buffer – I had thought you liked it better because it was safer than printf. This incident just reminded me that printf is not really safe at all, including the x varient.
Brynn
I guess the real problem is that I should always use the xsnprintf or snprintf versions in my embedded code, and never the unbounded versions. In fact, a smart coder should probably never use the unbounded versions ever.
Brynn
Yes, exactly. The ‘n’ version is always preferred and is what I’m using. the XFormat version is better compared to the normal printf()/sprintf() because it is reentrant and has a much smaller footprint.
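A bounded call then looks roughly like this sketch (XF1_xsnprintf() is assumed as the generated name, and the exact return value on truncation depends on the implementation):

char buf[16];
int res;

res = XF1_xsnprintf(buf, sizeof(buf), " %64s ", pchars); /* cannot write past the end of buf */
if (res < 0) {
  /* string did not fit or formatting failed: handle it instead of corrupting memory */
}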
Pingback: Tutorial: How to Optimize Code and RAM Size | MCU on Eclipse | https://mcuoneclipse.com/2014/08/17/xformat-a-lightweight-printf-and-sprintf-alternative/?like_comment=39816&_wpnonce=4a117ffd59 | CC-MAIN-2021-31 | en | refinedweb |
- Open Standards Support and Naming Conventions
- Namespace and Naming Conventions
- Migration and Backward Compatibility
Namespace and Naming Conventions
The Active Directory's naming conventions serve a number of important purposes. First, all directories are based on the concept of a namespace; that is, a name is used to resolve the location of an object. When name resolution occurs, it converts the namespace locator to the specific object, such as a printer on the third floor. Everything hangs off the Active Directory using the namespace to identify and locate persons, places, and things within an enterprise.
Naming Conventions
In keeping with the namespace concept, the Active Directory uses four name types to recognize every object:
The distinguished name (DN) defines the domain and related container(s) in which an object resides. It matches several defined attributes to the descriptionthe basic ones are DomainComponent (DC), OrganizationalUnit (OU), and CommonName (CN). For example, a representation of a printer within the EntCert.com domain and sales organizational unit, physically located on the third floor, would have the distinguished name DC=COM,DC=EntCert, OU=sales,CN=Printer,CN=3rdfloorprinter.
The relative distinguished name (RDN) is really an attribute of the object itself. In the above example, CN=3rdfloorprinter is relative to its parent, CN=printer. The RDN is compared to the DN.
The Globally Unique Identifier (GUID) avoids duplication of objects, and ensures uniqueness. It is a 128-bit number assigned to an object at the time of creation, and stored with it. This permits applicationsfor example, a word processor file with embedded graphical objectsto retrieve an updated version of a drawing by use of the GUID that is stored within the document.
The user principal name (UPN) is considered a "friendly" naming convention. It combines the user account name with the DNS domain name in which the account exists. The name bob@EntCert.com is the UPN for user account bob within the EntCert.com domain tree.
Additional use of Industry Naming Standards
The Active Directory supports a number of industry standard formats to facilitate greater interoperability with other directory services. Microsoft's own Universal Naming Convention (UNC), which takes the form \\EntCert.com\Administration\budget.doc, is the base naming convention. The Active Directory also incorporates RFC 822, which defines the common Internet e-mail-naming structure of bob@EntCert.com. It supports the familiar HTTP Uniform Resource Locator (URL), which takes the form http://EntCert.com/page, as well. Finally, borrowing from the X.500 communication protocol, the Active Directory incorporates the LDAP URL, as defined in the draft of RFC 1779, which breaks the name into very specific subunits that might look like this:
//EntCert.com/CN=myname,OU=mybranch,OU=myproduct,DN=divisionarea.
The initials provide greater object specification. CN is the user's first and last name. The two hierarchical OUs distinguish the branch of the company and the specific product area. Finally, DN defines the work area, such as engineering, accounting, or marketing.
Active Directory Use of LDAP
The Lightweight Directory Access Protocol (LDAP), defined by the Internet Engineering Task Force RFC 2251, is a simplified version of the X.500 DAP. Microsoft's Windows 2000 utilizes LDAP versions 2 and 3. LDAP permits the exchange of directory data between services. For example, because Novell Directory Services data is LDAP-compliant, it can be passed to and from the Active Directory.
LDAP is utilized to access information from compliant directory services such as the Active Directory. LDAP searches the Active Directory for information about stored objects and their attributes. It uses both distinguished and relative distinguished names to locate the object, and works closely with DNS throughout this process. DNS provides resolution to locate the appropriate Active Directory domain controller; LDAP resolves the object itself. The process follows these basic steps:
A client queries DNS for an LDAP server (Active Directory domain controller).
The client queries the domain controller for information about the object.
If the requested object is not in the domain, but the domain controller knows there are child domains or a forest domain, it issues a referral to contact the other domain controllers until object resolution is achieved.
The client is returned the object information.
LDAP's C language API (RFC 1823) permits the development of application enhancements and the building of related applications. Developing interfaces using the LDAP C API is perhaps the easiest way to provide interoperability between LDAP directory-compliant services. Existing LDAP directory service auxiliary applications migrated to communicate with Windows 2000 Active Directory may not need modification. If API code changes are required as well, they are typically for the identification of objects unique to the Active Directory.
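As a rough, hypothetical sketch only (using the OpenLDAP flavor of the C API; the host name dc1.EntCert.com is invented, and a real Active Directory query would normally also require an authenticated bind), a client-side lookup of the printer object described earlier might look like this:

#include <ldap.h>

int find_printer(void) {
  LDAP *ld;
  LDAPMessage *result;
  int version = LDAP_VERSION3;

  /* connect to a domain controller previously located through DNS */
  if (ldap_initialize(&ld, "ldap://dc1.EntCert.com") != LDAP_SUCCESS) {
    return -1;
  }
  ldap_set_option(ld, LDAP_OPT_PROTOCOL_VERSION, &version);

  /* ask the domain naming context for the printer by common name */
  if (ldap_search_ext_s(ld, "DC=EntCert,DC=com", LDAP_SCOPE_SUBTREE,
                        "(cn=3rdfloorprinter)", NULL, 0,
                        NULL, NULL, NULL, 0, &result) == LDAP_SUCCESS) {
    /* ... read the returned attributes here ... */
    ldap_msgfree(result);
  }
  ldap_unbind_ext_s(ld, NULL, NULL);
  return 0;
}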
In addition to the aforementioned conventions introduced in Windows 2000, the Windows 2000 Active Directory implementation adds a number of additional facilities to its LDAP support:
Active Directory now has support for dynamic store entries, as defined by the Internet Engineering Task Force (IETF) standard protocol RFC 2589. Entries in the directory can additionally be assigned Time-To-Live (TTL) values that determine when the entries will be automatically deleted.
Transport Layer Security (TLS) connections to Active Directory over LDAP are now protected using the IETF standard TLS security protocol (RFC 2830).
The Digest Authentication mechanism (DIGEST-MD5 SASL authentication) over LDAP connection Active Directory is supported (RFC 2829).
Virtual List Views (VLV) LDAP extensions, as defined by the Working Group of IETF, are also supported.
Dynamic Auxiliary classes and instances are also supported.
NOTE
An extension of LDAP is its replication technology protocol LDUP (LDAP Duplication/Update Protocol), an open standards specification that is not embraced by the Active Directory. LDUP does a complete real-time replication of directory elements that requires that compliant directory services adhere to very specific rules and data structures. Violation of any of the rules such that a directory entry is not understood can lead to cascading errors throughout the directory. Rather than LDUP, the Active Directory utilizes a synchronization methodology to populate and update directories.
NOTE
Microsoft decided not to include major portions of the X.500 protocol in the Active Directory primarily because of its dependence on an OSI networking layer and a general lack of public interest. The excluded elements are the Directory Access Protocol, the Directory System Protocol, the Directory Information Shadowing Protocol, and the Directory Operational Binding Management Protocol. LDAP is the most significant element of X.500 incorporated in Windows 2000 and Windows 2000. | http://www.informit.com/articles/article.aspx?p=27023&seqNum=3 | CC-MAIN-2019-09 | en | refinedweb |
#include <hallo.h>
* Peter De Wachter [Sat, Aug 24 2002, 01:25:54AM]:

> It has the following license:
> -- UNACE-SOURCE v1.2b (extract-util) --
> the source may be distributed and used,
> but I,Marcel Lemke, retain ownership of
> the copyrights to the source.
> ---------------------------------------
>
> I think this makes it non-free (it doesn't explicitly allow
> modifications), but it may help to figure out the algorithm.

Worse. I considered packaging it (*) but stopped after discussing the license issue. You are only allowed to use the source, nothing is said about the binary created. Looks like the qmail-src story.

(*) while extending the support in the unp

Gruss/Regards,
Eduard.
--
Linux - aus klaren Quellen wird ein starker Strom. | https://lists.debian.org/debian-devel/2002/08/msg01531.html | CC-MAIN-2019-09 | en | refinedweb
JavaScript Basics Before You Learn React
Nathan Sebhastian
Jan 14
Updated on Feb 09, 2019
・9 min read
In an ideal world, you can learn all about JavaScript and web development before you dive into React. Unfortunately, we live in a not-perfect world, so chomping down on ALL of JavaScript before React will just make you bleed hard. If you already have some experience with JavaScript, all you need to learn before React is just the JavaScript features you will actually use to develop React application. Things about JavaScript you should be comfortable with before learning React are:
- ES6 classes
- The new variable declaration let/const
- Arrow functions
- Destructuring assignment
- Map and filter
- ES6 module system
It's the 20% of JavaScript features that you will use 80% of the time, so in this tutorial I will help you learn them all.
Exploring Create React App
The usual case of starting to learn React is to run the create-react-app package, which sets up everything you need to run React. Then after the process is finished, opening src/app.js will present us with the only React class in the whole app:
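The exact markup changes between create-react-app releases, so the listing below is only a representative version with the generated JSX body abbreviated; the import line, the class declaration, and the export line are the parts the rest of this article refers to.

import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';

class App extends Component {
  render() {
    return (
      <div className="App">
        {/* ...generated JSX markup... */}
      </div>
    );
  }
}

export default App;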
If you never learned ES6 before, you'd think that this class statement is a feature of React. It's actually a new feature of ES6, and that's why learning ES6 properly would enable you to understand React code better. We'll start with ES6 classes.
ES6 Classes
ES6 introduced class syntax that is used in similar ways to OO language like Java or Python. A basic class in ES6 would look like this:
class Developer {
  constructor(name){
    this.name = name;
  }

  hello(){
    return 'Hello World! I am ' + this.name + ' and I am a web developer';
  }
}
class syntax is followed by an identifier (or simply name) that can be used to create new objects. The constructor method is always called in object initialization. Any parameters passed into the object will be passed into the new object. For example:
var nathan = new Developer('Nathan'); nathan.hello(); // Hello World! I am Nathan and I am a web developer
A class can define as many methods as needed, and in this case, we have the hello method which returns a string.
Class inheritance
A class can extend the definition of another class, and a new object initialized from that class will have all the methods of both classes.
class ReactDeveloper extends Developer {
  installReact(){
    return 'installing React .. Done.';
  }
}

var nathan = new ReactDeveloper('Nathan');
nathan.hello(); // Hello World! I am Nathan and I am a web developer
nathan.installReact(); // installing React .. Done.
The class that extends another class is usually called child class or sub class, and the class that is being extended is called parent class or super class. A child class can also override the methods defined in parent class, meaning it will replace the method definition with the new method defined. For example, let's override the hello function:
class ReactDeveloper extends Developer {
  installReact(){
    return 'installing React .. Done.';
  }

  hello(){
    return 'Hello World! I am ' + this.name + ' and I am a REACT developer';
  }
}

var nathan = new ReactDeveloper('Nathan');
nathan.hello(); // Hello World! I am Nathan and I am a REACT developer
There you go. The hello method from the Developer class has been overridden.
Use in React
Now that we understand ES6 class and inheritance, we can understand the React class defined in src/app.js. This is a React component, but it's actually just a normal ES6 class which inherits the definition of the React Component class, which is imported from the React package.
import React, { Component } from 'react';

class App extends Component {
  // class content
  render(){
    return (
      <h1>Hello React!</h1>
    )
  }
}
This is what enables us to use the render() method, JSX, this.state, and other methods. All of these definitions are inside the Component class. But as we will see later, class is not the only way to define a React Component. If you don't need state and other lifecycle methods, you can use a function instead.
Declaring variables with ES6 let and const
Because the JavaScript var keyword declares variables with function-wide (or global) scope rather than block scope, two new variable declarations were introduced in ES6 to solve the issue, namely let and const. They are largely the same, in that they are both used to declare variables. The difference is that const cannot change its value after declaration, while let can. Both declarations are local, meaning if you declare let inside a function scope, you can't call it outside of the function.
const name = "David"; let age = 28; var occupation = "Software Engineer";
Which one to use?
The rule of thumb is to declare variables using const by default. Later, when you write the application, you'll realize that the value of a const needs to change. That's the time you should refactor that const into let. Hopefully it will make you get used to the new keywords, and you'll start to recognize the pattern in your application where you need to use const or let.
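For example, a value that you first declared as a constant, and later realize has to change, simply gets its declaration refactored:

// first attempt: assumed constant
// const score = 0;

// after refactoring, because the value has to change:
let score = 0;
score = score + 10;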
When do we use it in React?
Every time we need variables. Consider the following example:
import React, { Component } from 'react';

class App extends Component {
  // class content
  render(){
    const greeting = 'Welcome to React';

    return (
      <h1>{greeting}</h1>
    )
  }
}
Since greeting won't change in the entire application lifecycle, we define it using const here.
The arrow function
Arrow function is a new ES6 feature that's widely used in modern codebases because it keeps the code concise and readable. This feature allows us to write functions using a shorter syntax:
// regular function
const testFunction = function() {
  // content..
}

// arrow function
const testFunction = () => {
  // content..
}
If you're an experienced JS developer, moving from the regular function syntax to arrow syntax might be uncomfortable at first. When I was learning about arrow functions, I used these two simple steps to rewrite my functions:

- remove the function keyword
- add the fat arrow symbol => after ()
the parentheses are still used for passing parameters, and if you only have one parameter, you can omit the parentheses.
const testFunction = (firstName, lastName) => { return firstName+' '+lastName; } const singleParam = firstName => { return firstName; }
Implicit return
If your arrow function is only one line, you can return values without having to use the return keyword and the curly brackets {}:
const testFunction = () => 'hello there.'; testFunction();
Use in React
Another way to create a React component is to use an arrow function. React takes an arrow function:
const HelloWorld = (props) => { return <h1>{props.hello}</h1>; }
as equivalent to an ES6 class component
class HelloWorld extends Component {
  render() {
    return (
      <h1>{this.props.hello}</h1>
    );
  }
}
Using arrow function in your React application makes the code more concise. But it will also remove the use of state from your component. This type of component is known as stateless functional component. You'll find that name in many React tutorials.
Destructuring assignment for arrays and objects
One of the most useful new pieces of syntax introduced in ES6, destructuring assignment is simply copying a part of an object or array and putting it into named variables. A quick example:
const developer = {
  firstName: 'Nathan',
  lastName: 'Sebhastian',
  developer: true,
  age: 25,
}

//destructure developer object
const { firstName, lastName } = developer;

console.log(firstName); // returns 'Nathan'
console.log(lastName); // returns 'Sebhastian'
console.log(developer); // returns the object
As you can see, we assigned firstName and lastName from the developer object into the new variables firstName and lastName. Now what if you want to put firstName into a new variable called name?
const { firstName:name } = developer; console.log(name); // returns 'Nathan'
Destructuring also works on arrays, only it uses index instead of object keys:
const numbers = [1,2,3,4,5]; const [one, two] = numbers; // one = 1, two = 2
You can skip an index when destructuring by leaving its position empty between commas:
const [one, two, , four] = numbers; // one = 1, two = 2, four = 4
Use in React
Mostly used in destructuring state in methods, for example:
reactFunction = () => { const { name, email } = this.state; };
Or in functional stateless component, consider the example from previous chapter:
const HelloWorld = (props) => { return <h1>{props.hello}</h1>; }
We can simply destructure the parameter immediately:
const HelloWorld = ({ hello }) => { return <h1>{hello}</h1>; }
Map and filter
Although this tutorial focuses on ES6, the JavaScript array map and filter methods need to be mentioned, since they are probably among the most used ES5 features when building a React application, particularly for processing data. For example, imagine a fetch from an API returns an array of JSON data:
const users = [ { name: 'Nathan', age: 25 }, { name: 'Jack', age: 30 }, { name: 'Joe', age: 28 }, ];
Then we can render a list of items in React as follows:
import React, { Component } from 'react';

class App extends Component {
  // class content
  render(){
    const users = [
      { name: 'Nathan', age: 25 },
      { name: 'Jack', age: 30 },
      { name: 'Joe', age: 28 },
    ];

    return (
      <ul>
        {users
          .map(user => <li>{user.name}</li>)
        }
      </ul>
    )
  }
}
We can also filter the data in the render.
<ul> {users .filter(user => user.age > 26) .map(user => <li>{user.name}</li>) } </ul>
ES6 module system
The ES6 module system enables JavaScript to import and export files. Let's see the src/app.js code again in order to explain this.
Up at the first line of code we see the import statement:
import React, { Component } from 'react';
and at the last line we see the export default statement:
export default App;
To understand these statements, let's discuss module syntax first.
A module is simply a JavaScript file that exports one or more values (they can be objects, functions or variables) using the export keyword. First, create a new file named util.js in the src directory:
touch util.js
Then write a function inside it. This is a default export
export default function times(x) { return x * x; }
or multiple named exports
export function times(x) { return x * x; } export function plusTwo(number) { return number + 2; }
Then we can import them from src/App.js:
import { times, plusTwo } from './util.js'; console.log(times(2)); console.log(plusTwo(3));
You can have multiple named exports per module but only one default export. A default export can be imported without using the curly braces and corresponding exported function name:
// in util.js
export default function times(x) {
  return x * x;
}

// in app.js
import k from './util.js';
console.log(k(4)); // returns 16
But for named exports, you must import using curly braces and the exact name. Alternatively, imports can use alias to avoid having the same name for two different imports:
// in util.js
export function times(x) {
  return x * x;
}

export function plusTwo(number) {
  return number + 2;
}

// in app.js
import { times as multiplication, plusTwo as plus2 } from './util.js';
Importing from a package name like:
import React from 'react';
will make JavaScript look in node_modules for the corresponding package name. So if you're importing a local file, don't forget to use the right relative path.
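A quick illustration (reusing the files from this chapter): a bare name is resolved from node_modules, while a path starting with ./ or ../ is resolved relative to the current file.

import React from 'react';          // bare name: looked up in node_modules
import App from './App';            // relative path: a local file
import { times } from './util.js';  // also a local file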
Use in React
Obviously we've seen this in the src/App.js file, and then in the index.js file where the exported App component is being rendered. Let's ignore the serviceWorker part for now.
//index.js file
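The index.js listing is missing above, so here is a sketch of what a typical create-react-app index.js looked like around that time; the exact file in your project may differ slightly.

import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';
import * as serviceWorker from './serviceWorker';

// Render the exported App component into the HTML element with id "root"
ReactDOM.render(<App />, document.getElementById('root'));

// The service worker (PWA support) is left disabled by default
serviceWorker.unregister();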
Notice how App is imported from ./App and the .js extension has been omitted. We can leave out the file extension only when importing JavaScript files, but we have to include it for other files, such as .css. We also import another node module, react-dom, which enables us to render the React component into an HTML element.
As for PWA, it's a feature that makes a React application work offline, but since it's disabled by default, there's no need to learn it in the beginning. It's better to learn PWA after you're confident enough building React user interfaces.
Conclusion
The great thing about React is that it doesn't add any foreign abstraction layer on top of JavaScript as other web frameworks do. That's why React has become very popular with JS developers. It simply uses the best of JavaScript to make building user interfaces easier and more maintainable. There really is more JavaScript than React-specific syntax inside a React application, so once you understand JavaScript better — particularly ES6 — you can write React applications with confidence. But that doesn't mean you have to master everything about JavaScript to start writing a React app. Go and write one now, and as opportunities come your way, you will become a better developer.
Arrow functions, apart from their aesthetics, have this property called lexical scoping. This explains lexical scoping better than I ever will :)
In short, arrow functions take their scope from the enclosing code rather than having their own. A function() {} function's this can be changed to whatever by calling .bind() - basically how JS prototype works.
Maybe you deliberately omitted this because in the React world arrow functions are mostly used for shorter declarations. But hey, I figured the post might give the wrong impression that arrow functions are only for cleaner code :P
Short test code:
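(The snippet itself is missing here; the following is a plausible reconstruction of the kind of test meant, with illustrative names. The arrow callback keeps the enclosing this, while the plain function callback does not.)

const obj = {
  value: 42,
  regular: function () {
    setTimeout(function () {
      console.log(this.value); // undefined: `this` is not obj inside a plain callback
    }, 100);
  },
  arrow: function () {
    setTimeout(() => {
      console.log(this.value); // 42: the arrow function keeps the enclosing `this`
    }, 100);
  },
};

obj.regular();
obj.arrow();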
Ah yes, sorry if I skip the part about this. Thanks for adding it here, Jesse :)
'Both declarations are local, meaning if you declare let inside a function scope, you can't call it outside of the function.'
A var declared inside a function is also not global; that is not what is different about let. The difference is that let has block scope. For example:
let x = 1;
if (x === 1) {
let x = 2;
console.log(x);
// expected output: 2
}
console.log(x);
// expected output: 1
Oh sorry, what I actually meant is block scope, not function scope. If you call x outside of the if block it will return 2 with var. I'll fix that as soon as possible. Thanks David :)
Great post Nathan. Easy to understand. Only thing i couldn't get my head around is the Destructuring concept coz i haven't used or heard it before. Hopefully i'll get better understanding after some use.
Thanks.
Thanks Dan, don't worry too much about destructuring; in a simple application we use it only to shorten the syntax for getting state values. For example, if you have this state initialized:
Then when you want to get the value, do:
Instead of:
Now in my example tutorial, I have also included destructuring assignment into a new variable, like:
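(The code snippets for this comment are missing here; roughly, the idea is the following, with made-up field names. The alternatives are not meant to run together.)

// State initialized in the component:
this.state = { name: 'Nathan', email: 'nathan@example.com' };

// Getting the values with destructuring:
const { name, email } = this.state;

// Instead of:
const name = this.state.name;
const email = this.state.email;

// Destructuring into a new variable name:
const { name: fullName } = this.state;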
But to tell you the truth, I never used this kind of assignment, so just consider it an extra knowledge that might be useful sometime 😆
Yep... it's super important to understand JavaScript fundamentals before using any framework. One should learn how JavaScript works, its life cycle, what prototypal inheritance is, the classical vs prototypal model, what a closure is, etc...
Thank you for this post! I am actually working on a new web project, and this time I decided to use React instead of JS vanilla. It helped me a lot :D
You're welcome Rospars. Glad could help you out. Good luck with your project :)
Great post, thanks Nathan.
Great selection of features. Great article! I would add the spread operator as well. It's useful for making shallow copies of arrays and objects.
I've used {...this.props} to pass a copy of props in React without naming them all, and then used destructuring to extract them in the component. See this Stack Overflow question for a good description.
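For readers who haven't seen it, a short hedged sketch of what the commenter is describing (ChildComponent and the sample data are made up):

// Shallow copies with the spread operator
const numbers = [1, 2, 3];
const numbersCopy = [...numbers];        // new array, same elements
const user = { name: 'Nathan', age: 25 };
const olderUser = { ...user, age: 26 };  // copy with one field overridden

// Forwarding every prop to a child component without naming them all
const Wrapper = (props) => <ChildComponent {...props} />;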
Ah certainly Steve, spread and rest operator would be a great addition. I'm just afraid the article would be too long when I wrote this. Thanks for your comment :)
Could I translate your post into Korean? I'm working as a front-end developer but mainly maintaining the old legacy. Next version would use React. So, I'm studying React.
Sure, go ahead
Thanks :)
Thank you Nathan for a great and informative post
Good article for beginners who want to learn ReactJS.
Thanks Nathan.
Hey, great article. I knew most of this, but the part about modules clarified everything I hadn't managed to understand before.
BTW, I think there is an error here, no?
// in util.js
export default function times(x) {
return x * x;
}
// in app.js
export k from './util.js';
console.log(k(4)); // returns 16
That second export should be an import, shouldn't it?
ah yes, thanks for the correction Alex :) glad could help you learn something
Great post!
I'd like to also mention .forEach() and .reduce() which are not quite as commonly used as .map() and .filter() but still worth noting. :)
Companies providing cloud computing
Hi,
Which companies are providing cloud computing?
Thanks
Hi,
Please see the page Companies offering Cloud Computing.
Thanks
struts
Hi,
How does Struts flow from a JSP page to the database, and how is validation used?
Thanks
Kalins Naik
Type: Posts; User: dextor33
It worked :) Thanks :)
Will follow all the instructions... and will tell you after removal... :) Thanks
I am writing this thread on many other sites... just to find a simple remedy... instead of reinstalling my whole Windows :(
I got this source code from a website... and accidentally COMPILED it in my VISUAL STUDIO (my biggest mistake :( ). After compilation... it never ended :(
What it does is put a key into the registry so it runs on startup. It stops regedit, command prompt and task manager from opening and plays an irritating tune... my antivirus doesn't detect any virus... i...
#include "stdafx.h"
#include "windows.h" | http://forums.codeguru.com/search.php?s=b257b80583c185bc7fdc2ec8e591fe1b&searchid=5373727 | CC-MAIN-2014-42 | en | refinedweb |
Books on Wikibooks should be structured into sections (akin to chapters), in a manner at the discretion of the authors of the Wikibook. However, there is no single method of denoting substructure within Wikibook article names. This page is to discuss the merits of the currently used methods and to decide on a method to use, which will then become a definite recommendation to future books on Wikibooks:Naming conventions.
There are several issues involved in the naming convention, and it is the cleanest to discuss them separately (for example, which delimiter (":", "/", "(..)",..) to use is more or less independent of the question whether to allow several levels of hierarchy or whether numbers should be used for labelling chapters). Each section states current examples.
Please add to advantages or disadvantages below, or provide discussion alternatives.
Delimiters
Subpages "/"
One method is to use subpages, that is, for a Wikibook named bookname, and a subpage called subpagename, the article name is bookname/subpagename.
Wikijunior Solar System
Wikijunior Solar System/Mars
Wikijunior Solar System/Glossary
Advantages:
- The main merit to this scheme is that it allows for automatic navigation links within the hierarchy.
- This allows hierarchy but also allows sub-subhierarchy and more.
- Every sub(sub..)page gets a static link to all its parents. One can reference to subpages with [[/subpagename]] (i.e. bookname is not necessary).
- It will appear similar to directories in the URL as pointed out 1 1/2 years ago [1] and in that sense also follows in principle the principle of least surprise.
Disadvantages:
- The main demerit is that the scheme facilitates the use of subhierarchies, which may make the book slightly more difficult to read or present. Also, authors might be tempted to over-use the tree-structure. It might therefore be part of the recommendation, to use only one level of subpages, and a second only if needed in very large books (see separate discussion point below).
- Links from one subpage to another subpage must be typed in full. Further, the pipe trick does not work for subpages; [[Bookname/Subpagename|]] displays as Bookname/Subpagename
- All links to subpages are case sensitive; Wikijunior Solar System/The Sun vs Wikijunior Solar System/the Sun
- The delimiter "/" will interfere with the filesystem when one wants to save files and automatically process them by scripts and simple UNIX commands like grep. It would be required to use additional commands like "mkdir", "find", and "xargs".
- The enforced tree structure is unsuitable for many uses. For example: anise is a spice, herb, and vegetable. There is no correct way to set this up such that automatic links will be provided for all uses of anise. One would have to choose more general tree nodes, like "glossary/anise" or a completely flat structure ("cookbook/anise").
Real custom namespaces
One method is to use custom namespaces, that is, for a Wikibook named bookname, and a subpage called subpagename, the article name is bookname:subpagename. Examples:
Advantages:
- This scheme is like other namespaces, and as such, follows the principle of least surprise.
- Linking becomes simple using the pipe trick; [[Bookname:subpagename|]] becomes subpagename.
- Special:Allpages can be applied to a book of choice.
- User contributions can show all, or be restricted to a book of choice.
- From MediaWiki 1.5 Recent changes ditto; for this, Allpages and User contributions, one can also select all namespaces except a specified one; thus, one can still have a combined list including all book pages by selecting e.g. the image namespace and using the invert option.
- Searching can be restricted to any subset of books.
- The variables {{NAMESPACE}} and {{PAGENAME}} give the book name and the page name without book name, respectively.
Disadvantages:
- Creation, deletion, and renaming of a custom namespace can only be done by a developer (when a new book is started, provisionally a pseudo-namespace can be used, see the next section).
- The maximum number of "real" namespaces is limited by the software to 256 (it is a TINYINT(2) in the SQL database), i.e. numbers 0-255. Custom namespace numbers start with 100, so there can be 156 of them, including talk pages, i.e. 78 custom namespaces and their talk namespaces. However, from MediaWiki 1.5 the maximum is 65,536, i.e. 32,718 custom namespaces and their talk namespaces, see Bugzilla:719.
Pseudo-namespaces
Another method is to mirror that of namespaces, that is, for a Wikibook named bookname, and a subpage called subpagename, the article name is bookname:subpagename, like above.
Advantages:
- The main merit to this scheme is that it is mirrored in namespaces, and as such, follows the principle of least surprise.
Disadvantages:
- The main demerit is that this scheme doesn't act as a namespace, but merely looks like one .
- With pseudo-namespaces, the pipe trick still works, but the first letter of the subpage name is case sensitive, which makes them less useful for inline links.
Brackets "(..)"
One method is to use brackets, that is, for a Wikibook named bookname, and a subpage called subpagename, the article name is subpagename (bookname).
General Biology
Energy and Metabolism (General Biology)
Authors (General Biology)
Advantages:
- Similar to Wikipedia's disambiguation, people coming from Wikipedia will be familiar with this.
- This allows for easier linking, using the pipe trick; [[subpagename (bookname)|]] appears as subpagename.
Disadvantages:
- More difficult to extract a list of books automatically in the future (see Wikibooks:Top active/All books). In an automatic extraction it is more difficult to pick a book title out of the middle of a string, because it might be part of another subpage title. The extraction is only unambiguous if the book name always appears in the front position.
No delimiter, no book title on subpages, all in wikibooks.org domain
One method is to use no delimiter at all, that is, for a Wikibook named bookname, and a subpage called subpagename, the article name is simply subpagename (i.e. the bookname does not appear anywhere in the title).
Advantages:
- Normal wiki ease-of-use is fully restored: [[foo]] instead of [[Bookname:Foo|foo]].
- Short modules that are part of several different wikibooks can be written once and for all. After improving that module in the course of editing one book, all the other books immediately use the improved version.
- This doesn't require a separate database or software change -- it's already running at .
Disadvantages:
- Without separate domain names or software changes (which are unlikely to happen soon) chapter pages are very likely to run soon into naming conflicts between books.
Note that both of the disadvantages may not be relevant for certain annotated texts. I.e. for specific texts with unique titles, which will be annotated only once, there are no naming conflicts. Therefore, no subdomains are required, nor any software changes, and normal wiki ease-of-use is fully maintained.
- I think you mix up "no delimiter, no book title on subpages" with the next section "arbitrary delimiter", that is a blank space " ", or a section that still would have to be added. "No delimiter, no book title on subpages" means here that the bookname does not appear on every subpage, which makes it very likely that two books get into naming conflicts, even for annotated texts.
No delimiter, no book title on subpages, and separate domain names
One method is to use no delimiter at all, that is, for a Wikibook named bookname, and a subpage called subpagename, the article name is simply subpagename (i.e. the bookname does not appear); instead, each book is placed under a separate domain name, so the book title appears in the domain rather than in the page title.
Advantages:
- Scalable, making it easier to split up wikibooks over more servers.
- No unrelated clutter in Special:Recentchanges, Special:Allpages, Special:Wantedpages, Special:Lonelypages, Special:Deadendpages, and Special:Categories. (Yes! We need this!)
- Normal wiki ease-of-use is fully restored: [[foo]] instead of [[Bookname:Foo|foo]].
- This doesn't require a separate database or software change -- it's already running at .
Disadvantages:
- This requires separate domain names: cookbook.wikibooks.org
- A module cannot be shared between books -- the Vector Math module is custom to each book it appears in. If 10 different people make 10 different improvements to that module, those improvements are scattered over all the copies of that module, and it is hard to integrate them.
Arbitrary delimiter
Any character could be used as a delimiter, restricted only by the taste and imagination of the authors: " " (blank space), "-", "--", "," (comma), ";", .... All of these in combination with another blank " " (white space) either before, or after the delimiter, or both.
As You Like It ⇒ As You Like It - Act III
Bicycle Repair ⇒ Bicycle Repair Cleaning parts
Using GNOME ⇒ Using GNOME : File manager
Advantages:
- Each author can easily please their tastes.
Disadvantages:
- It is difficult to work efficiently upon different books without a clear convention.
- It is not consistent across books.
- Thus, it becomes a usability problem.
A combination
Use a combination of the above: For example place books into a designated namespace, but place pages on subpages.
Musictionary
Musictionary:Guitar
Musictionary:Guitar/Bass
Advantages:
- Could combine the advantages of namespaces (if they were really activated) and of subpages (link to parent).
Disadvantages:
- Makes very long titles for subpages.
- For new users this approach does not seem intuitive and simple at all. Since two different delimiters are used, it might be cause of confusion or false usage.
Other
Please place other methods here
Subhierarchy
As flat as possible
The naming of subpages should be done so that each subpage name reflects the hierarchy, but does not introduce a sub-subpage level of hierarchy. This should be done so that the subpage name together with the article name sounds as natural as possible.
US History
US History:Pre-Columbian
US History:Civil War
Advantages:
- Chapters can be easily rearranged in the future. The emphasis is on the content of the subpage, not its structural location within the book.
Disadvantages:
- The structure of the book is not visible from the title. One needs to refer to the table of contents, or other navigational help.
As structured as possible
A book could be structured into several chapters, sections, sub-sections, and so on. Subpages make it easy to build this kind of hierarchy, but one is not limited to the use of subpages: with any kind of delimiter, a structuring is possible. For namespace-like delimiters, it is possible by adding extra :subpagename sections, but this method is not mirrored in namespaces.
Geometry
US History
US History:Pre-Columbian
US History:Pre-Columbian:Aztec Empire
US History:Pre-Columbian:Mayan Empire
Advantages:
- Book structure is easily visible: One "knows" in which part of the book one is.
Disadvantages:
- The main demerit is that subhierarchies may make a book slightly more difficult to read or present.
- Another disadvantage is that it doesn't allow for a page to be in more than one location in the hierarchy of non-linear wikibooks. For example, Cookbook:San Francisco style Scallop Ceviche is a ceviche recipe, a main dish, an appetizer, a scallop recipe, and a California cuisine recipe, and could be structured under any of these. Gentgeen 22:12, 13 Mar 2005 (UTC)
Title on subpages
Use the full book name on subpages
A book naming scheme can be to use the same article name on the contents and the book pages proper, i.e. bookname is the contents page and bookname:subpagename are the subpages.
SA NC Doing Investigations ⇐ bookname
SA NC Doing Investigations:Chapter 1
SA NC Doing Investigations:Chapter 2 ⇐ same bookname
Advantages:
- One (reader and automatic software) can easily see which subpage belongs to which book.
- Given the name of any subpage, I know the name of the book.
Disadvantages:
- Subpage heading might get very long.
Use a short book name on subpages
Another possibility is to use a long, elaborate title for the book name but a shorter title for the subpages, i.e. using a long bookname on the contents and a shorter shortbookname on the subpages. The shorter title should not be reduced so much that it hurts readability - for example, "IP:..." should not be used as a short title for "Introduction to Philosophy"; one reason this is a bad idea is that it can be confused with a title such as "Inside Perl".
Learning the vi editor ⇐ bookname
Learning vi:Basic tasks
Learning vi:Advanced tasks ⇐ shorter bookname
Advantages:
- The long title can be more descriptive, without cluttering all sections of a book.
Disadvantages:
- Makes automatic extraction of book information more difficult.
- Subpages only work when the exact title is used on every subpage.
Numbers as chapters
Use chapter descriptions
Business English/Getting more practice
Business English/Phrasal verbs
Business English/Phrasal verbs/Get
Advantages:
- One knows about the content of the chapter.
- It is easy to insert new chapters between existing ones or change their order (important for the wiki process!)
- Chapters can be easily rearranged in the future. The emphasis is on the content of the subpage, not its structural location within the book.
Disadvantages:
- One does not know about the current position within the book => need for additional navigational structure.
Use chapter numbers
Computer Science:Algorithms:Chapter 1
Computer Science:Algorithms:Chapter 2
Computer Science:Algorithms:Chapter 3
Advantages:
- The structure of the book is emphasized.
Disadvantages:
- The number does not say anything about the content of the chapter.
- It is difficult to rearrange chapters or insert a new chapter between existing ones (move all higher books, or copy all content?). This is insofar severe, as wikibooks are expected to grow in the future.
- It is completely unusable in non-linear wikibooks.
Use chapter numbers and description
Macbeth - Act 5, Scene 1 - Dunsinane. Ante-room in the castle
Macbeth - Act 5, Scene 2 - The country near Dunsinane
Macbeth - Act 5, Scene 3 - Dunsinane. A room in the castle
Advantages:
- One knows in which part of the book one is.
Disadvantages:
- Difficult to insert new chapters.
- Long titles.
Personal taste and discussion
Please place advantages and disadvantages of the methods in the appropriate sections above, and put personal taste into here.
I think, with all the mess around currently, the definite way for the future is to have a clear recommendation for future books (which hopefully will outnumber the existing books at some point in the future), whichever that might be. Since subpages offer an automatic link to the parent page and (more or less) force to use exactly the same booktitle on all subpages of the book, my personal taste would be to promote subpages as the way to go, clearly recommending only 1 layer of substructure (no sub-subpages), and descriptions of chapters instead of numbers. --Andreas 09:37, 13 Mar 2005 (UTC)
- The result of the below vote should not result in a mere recommendation - it should be policy. I have no real problems with subpages other than the hierarchy and the principle of least surprise comment that the ":" has that I've raised before. Dysprosia 10:36, 13 Mar 2005 (UTC)
- My thoughts are at Help:Namespaces. I don't find either of the namespace cons very persuasive -- we can use more than 200 namespaces right now, and the software could surely be changed to allow for more (is there a technical reason it has to be at any fixed number?). TUF-KAT 21:49, 13 Mar 2005 (UTC)
- Apart from the 15 standard namespaces, none of the additional namespaces listed on Help:Namespaces is actually really a namespace (Cookbook is none, Wikiversity is none... I know this because I restricted my Wikibooks:Top active SQL search to namespace "0", and Cookbook and Wikiversity are both in there - my list of books counts 400 titles by the way). I cite from Wikibooks talk:Namespaces#Policy on namespaces:
- "With the current implementation, having too many custom namespaces is a bad idea - they all have to be manually entered in the MediaWiki configuration file. Also, deleting and renaming namespaces is quite a pain. So I suggest keeping the number of custom namespaces small, at least at first, maybe 5 or 6.--Eloquence" [2]
- So we are really talking about "pseudo"-namespaces, that only look like namespaces but are not really (still, the pipe-trick works for them too: [[book:page|]] will display as [[page]]). Given that, it is just a matter of taste which symbol to use to delimit the bookname from the subpage name. The only true hierarchical support is given for subpages, which give an automatic link to the parent page. This option was not available when things were discussed on Help:Namespaces (since subpages have been enabled only recently for the main namespace on Wikibooks), but is available now, and might help the many new wikibooks that pop up every week by providing the basic navigation "for free". --Andreas 23:35, 13 Mar 2005 (UTC)
- Subpage disadvantages
I put the following disadvantages for subpages which seem flavored by personal taste ("..are very annoying..") to the discussion section, and converted the points above to a more neutral tone --Andreas 08:55, 2 Apr 2005 (UTC)
- Hierarchy is often wrong. Anise is a spice, herb, and vegetable. Whisk is both a cooking technique (verb) and a cooking tool (noun). Subpages do not allow for this; they only support tree structures.
- This point should probably go under "flat vs. structured". As pointed out above, I can have a flat hierarchy with all delimiters ("/", ":"), as I can have a deep tree structure with all delimiters (e.g. Cookbook:Spices:Anise vs. Cookbook/Anise). --Andreas 08:55, 2 Apr 2005 (UTC)
- No, because the "/" delimiter is special. It has built-in wiki support that enforces a tree. This is no good.
- Subpages are very annoying if you wish to process wiki text as files. For example, one might save wiki text to files on a Linux system so that the grep command (or a slightly more complicated script) can be used. Having the "/" makes this awkward. Shell wildcard expansion fails, so the "find" command will be needed. Files can no longer simply be saved; a "mkdir" command will be required.
- If a structure is "annoying" or not probably depends on what you want to do, how many wikibooks you want to process, and so on. I agree, if you had scripts working with your cookbook, and after a change they don't work anymore, this is annoying, but imagine that each wikibook would have its own url (cookbook.wikibooks.org, kochbuch.wikibooks.org, howtobuildacomputer.wikibooks.org,...) and you want to process them - wouldn't you put the pages of each of those books into a separate directory before processing? (some pages might have the same name!) So, this is what the "/" delimiter automatically does for you! --Andreas 08:55, 2 Apr 2005 (UTC)
- I'd want to process just one book actually, the cookbook. What I don't want to see is Cookbook/Vegetable/Anise, Cookbook/Spice/Anise, and Cookbook/Herb/Anise. What are they going to be, symlinks? Duplicate copies? Hard links? In any case, this artificially enforced hierarchy is going to make a mess.
- What about Glossary/Anise? I'm not trying to change existing wikibooks, but find the most useful form for many few pages long wikibooks that pop up without a definite naming scheme. Since many beginning wikibooks even lack navigational templates, the best way for those new books would be to have an automatic link back to their main page, and enforce that all pages start with the very same bookname. This is what subpages do.
- I see that for significantly larger projects pseudo-namespaces make sense as well, that is why I suggest a double policy: Subpages for books that have a linear structure (you read them from the beginning to the end), and namespace-like for collections of items like the cookbook. What do you think about this compromise? --Andreas 06:11, 3 Apr 2005 (UTC)
- People keep raising exceptions as to why a scheme won't work. Whatever scheme (or schemes) is settled upon there are going to be special cases that don't work (and therefore require a workaround). I agree that subpages work well for linear books and namespaces for collections, so I think adopting both is a good idea. Mixing the two is obviously less desirable, so a workaround may be required for a linear book that belongs to a collection (e.g. a linear programming textbook inside the programming namespace). --M.linger 6 July 2005 03:51 (UTC)
kelvSYC's thoughts on the issue: I believe good consistent organization is a must. Thus, the naming scheme should be related to the scheme used for templates/categories, etc. So my thoughts:
- Whatever the convention is, each and every existing Wikibook should be converted to the standard convention.
- A title with no delimiters, to me, implies a separate Wikibook. Categories should organize parts of books (eg. textbook answer keys) as well as books that are related (eg. Abstract Algebra and Linear Algebra should be categorized as "mathematical textbooks"). As it is entirely possible that books are part of other books, the category system should reflect that.
- Categories can also be used for, say, an automatic indexing feature (see Wikibooks Pokédex), but it should be avoided
- Only large overly general books deserve separate namespaces. Good examples would be Cookbook, Programming, and Game Guides and Strategy.
- Table of contents, indices, and all that should be colon-delimited if the book's title page does not contain them (eg. "Foo:Table of Contents", "Bar:Alphabetical Index" vs. "Baz/Chapter 1")
- Actual book content should be in subpages, whenever possible.
- For books with such a structure, a top-level subpage with custom up-links should be provided (provided we can hide the default up-links).
- The about pages (notes for contributors, etc) should be colon-delimited (eg. "Foo:About", "Bar:Contributers Notes").
- Bracket syntax should never be used.
- Books should be moderately structured: the hierarchy within a book should be as structured as possible, but flat enough so that names are not too long. (even if it means we have a very long page) Hierarchies should not be more than, say, three or four levels deep.
- All templates and categories specific to a single book should start with the name of the book. (eg. "Template:Foo:TOC", "Category:Bar:Pages on baz")
- For pages that are part of multiple books, put it as a subpage of one book and have all others redirect to it, and make custom up-links. Which book it should be part of should be up to discussion.
KelvSYC 05:51, 13 Apr 2005 (UTC)
I think that it is safe to say that we will adopt either the Bookname:subpage or the Bookname/subpage hierarchy. I tend to go with the former because so many books use their own templates as navigation techniques, so the Bookname/subpage hierarchy would look odd. Also, I think that since the other kinds of namespaces, such as Wikibooks: and Talk: and User:, etc. are done in this manner, it will be intuitive to most users. As for substructure, I believe that should be the decision of the individual book writers - there are simply too many different circumstances. Some books are better divided into chapters only, some are better divided further.--Naryathegreat|(talk) 19:36, 17 Apr 2005 (UTC)
VotingEdit
Ideally, the Wikibooks community should propose a method above and vote for a Wikibooks-wide scheme for naming hierarchies. This will have the advantage of consistency and ease of use within Wikibooks.
Wikibooks should accumulate a number of naming hierarchy schemes and then set a vote, which should happen here.
New wikistats report for Wikibooks
I'm glad to announce a new report for Wikibooks, generated periodically by the Wikistats job. You'll find here an overview of books, their content and some stats. I discovered there are at least 6 naming schemes for chapters. This made it difficult to automate extraction of chapter names, so in some cases the grouping of chapters may look a bit odd. I'm sure this will improve when the ongoing discussion about standardisation of article titles bears fruit. Please look at Statistics per Wikibook Erik Zachte 15:18, 9 Apr 2005 (UTC)
#include <MinorPlanet.hpp>
This class implements a minor planet (an asteroid).
There are two main reasons for having a separate class from Planet:
Some of the code in this class is re-used from the parent Planet class.
Definition at line 37 of file MinorPlanet.hpp.
Get a string with data about the MinorPlanet.
Asteroids support the following InfoStringGroup flags:
Reimplemented from Planet.
get sidereal period for minor planet
renders the subscript in a minor planet provisional designation with HTML.
sets absolute magnitude (H) and slope parameter (G).
These are the parameters in the IAU's two-parameter magnitude system for minor planets. They are used to calculate the apparent magnitude at different phase angles.
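For reference (this relation is not quoted from the Stellarium documentation; it is the standard formula the H-G system is based on), the apparent magnitude is commonly computed as:

V = H + 5\log_{10}(\Delta\, r) - 2.5\log_{10}\big[(1 - G)\,\Phi_1(\alpha) + G\,\Phi_2(\alpha)\big]

where Δ and r are the distances to the observer and to the Sun in AU, α is the phase angle, and Φ1, Φ2 are the empirical phase functions of the H-G system.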
sets a provisional designation.
At the moment, the only role is for it to be displayed in the info field.
set value for semi-major axis in AU
Crystal Space
46 | Crystal Space Development / Support / Mesh culling issues | on: March 05, 2006, 12:50:43 am
This is a problem I've been trying to figure out for a while (I put it off for months to work more on my project which I'm planning on eventually migrating over to Crystalspace). I'm trying to create general mesh geometry (such as walls, floors, etc) which are fairly straightforward in CS. I've used the simple1 tutorial project as a simple testbed, and have run into the same problem over multiple versions of CS (since around Spring 2005). Basically what I'm trying to do is have the room in simple1 be visible from both the inside and outside (and so I added the AddOutsideBox code and related stuff), but the culling is very strange. The inside box is always visible even though the outside box is around it, and I've tried creating both boxes in separate meshes, adding different lighting (and turning off lighting), tried the UpdateMove function that Jorrit recommended before, etc, and it still has the same problem.
I've got my modified source along with a win32 binary and a screenshot on my site - I just want to know how to fix this problem haha.
Basic info:
2 machines, both with CS 0.99 from Jan 27 2006
Win32 libs 0.99r018
OS is Windows 2000, compiler is MSVC8 Express
Video card on first machine is an nVidia Geforce4 MX 4000 w/128mb VRAM; second machine (laptop) has an old Trident Cyberblade/XP w/16mb VRAM
Driver on first is version 81.95
Here's a screenshot:
In the screenshot, you can see the outer wallbox, and I placed a white light outside on the right side (you can see the reflection). Whenever you move around, the inside box continues to remain visible no matter what direction or position you're in.
The source and stuff is here:
Here's the code section that I modified:
Code:
// Creating the walls for our room.
csRef<iMeshWrapper> walls (engine->CreateSectorWallsMesh (room, "walls"));
csRef<iMeshObject> walls_object = walls->GetMeshObject ();
csRef<iMeshObjectFactory> walls_factory = walls_object->GetFactory();
csRef<iThingFactoryState> walls_state =
scfQueryInterface<iThingFactoryState> (walls_factory);
walls_state->AddInsideBox (csVector3 (-5, 0, -5), csVector3 (5, 20, 5));
walls_state->SetPolygonMaterial (CS_POLYRANGE_LAST, tm);
walls_state->SetPolygonTextureMapping (CS_POLYRANGE_LAST, 3);
walls_state->AddOutsideBox(csVector3 (-6, -1, -6), csVector3 (6, 21, 6));
walls_state->SetPolygonMaterial (CS_POLYRANGE_LAST, tm);
walls_state->SetPolygonTextureMapping (CS_POLYRANGE_LAST, 3);
// Now we need light to see something.
csRef<iLight> light;
iLightList* ll = room->GetLights ();
light = engine->CreateLight(0, csVector3(-3, 5, 0), 10, csColor(1, 0, 0));
ll->Add (light);
light = engine->CreateLight(0, csVector3(3, 5, 0), 10, csColor(0, 0, 1));
ll->Add (light);
light = engine->CreateLight(0, csVector3(0, 5, -3), 10, csColor(0, 1, 0));
ll->Add (light);
light = engine->CreateLight(0, csVector3(3, 5, -7), 10, csColor(1, 1, 1));
ll->Add (light);
Thanks
eventhorizon
47 | Crystal Space Development / Support / Re: Problems on VC2005 (aka VC8) | on: August 20, 2005, 06:45:18 am
Quote from: thebolt on July 19, 2005, 06:24:42 pm
That is not a solution to the problem, maybe a solution to the syptom
CS_WIN32_CSCONFIG should be defined in your projects settings, please read
Sorry for posting so late (I got CS working after messing around with that .h file), but the CS_WIN32_CSCONFIG was defined in my project settings. Somehow it just wasn't working with that file, so I fixed it myself
I'll do a new win32 build and see if the problem is still there.
-eventhorizon
48 | Crystal Space Development / Support / Re: Problems on VC2005 (aka VC8) | on: July 14, 2005, 05:37:19 am
The problem might just be the project I'm trying to build. I just extracted the Simple1 demo from the source tree, and made it into an external VC8 project - it builds fine.
-eventhorizon
49 | Crystal Space Development / General Crystal Space Discussion / Re: Migrating from Truevision3D to Crystal Space | on: July 13, 2005, 08:29:54 am
Quote from: jorrit on July 13, 2005, 08:23:07 am
For the first question take a look at the simple1 tutorial. That code looks like this:
Code:
// Creating the walls for our room.
csRef<iMeshWrapper> walls (engine->CreateSectorWallsMesh (room, "walls"));
csRef<iThingState> ws =
SCF_QUERY_INTERFACE (walls->GetMeshObject (), iThingState);
csRef<iThingFactoryState> walls_state = ws->GetFactory ();
walls_state->AddInsideBox (csVector3 (-5, 0, -5), csVector3 (5, 20, 5));
walls_state->SetPolygonMaterial (CS_POLYRANGE_LAST, tm);
walls_state->SetPolygonTextureMapping (CS_POLYRANGE_LAST, 3);
This will create a box that you can see from the inside. However iThingFactoryState has other functions where you can create individual polygons to make any shape you want. Check out the API ref of iThingFactoryState. Let me know if you still have problems.
For the second question there are various flags you can set on the mesh wrapper like this:
Code:
iMeshWrapper* mesh = ...;
mesh->GetFlags ().Set (CS_ENTITY_INVISIBLEMESH); // To make a mesh invisible.
Greetings,
Currently Skyscraper's code has come mostly from the Simple1 tutorial, and I have made wall/floor polygon code; I'll have to see how the CVS version works, because the stable version of CS was behaving really strange (I created an insidebox and outside box, but they had visibility issues).
Also thanks for that info on the mesh flags - when I was going through the CS source before, I was wondering if that stuff was done by setting flags. Also (this one's not really in any tutorial) how do you easily create a skybox/moving sky? (unless there's a new tutorial in the CVS version, since I just built it).
-eventhorizon
50 | Crystal Space Development / General Crystal Space Discussion / Migrating from Truevision3D to Crystal Space | on: July 13, 2005, 08:00:48 am
My project (Skyscraper) was originally written in Visual Basic 6 along with the TrueVision3D graphics engine
, and I'm in the process of rewriting it in C++/Crystal Space. Since Truevision is a very high-level 3d object-oriented engine, I'm trying to figure out what the CS equivalents would be (I've got some very simple things working, but I'm having major issues). In the Skyscraper app, a 138-story building is generated in realtime (mostly at startup) using Truevision calls in VB code, and the only 3D models are minor things like windows. I'm wondering if CEL provides similar functionality, since I've read some stuff about it, but don't fully understand what it does.
Here's my main thread for my project:
Here I'll explain Truevision stuff so that you'll understand what I'm talking about, because if I can get CS to do the same stuff, development will start flying (right now it's crawling haha).
Here's the current C++/CS code for Skyscraper in CVS:
And here's the last build of the abandoned VB/TrueVision rewrite (this is 1.1 alpha code - the 1.0 stable code is very messy and 1.1 is much much cleaner):
I would recommend looking at this VB file (this is the floor class):
So for example, this is how you would create a mesh and add some walls in it, using Truevision (this is just the mesh code, and not the rendering, etc code):
Code:
Dim Level As TVMesh
Set Level = Scene.CreateMeshBuilder("Level")
Level.AddWall GetTex("stone"), x1, z1, x2, z2, height, altitude, tw, th
Level.AddFloor GetTex("stone"), x1, z1, x2, z2, altitude, tw, th
where x1, z1, etc are the coordinates; altitude is the base Y coordinate, height is the top Y coordinate, tw and th are the horizontal and vertical texture resolution parameters (the CS equivalent of that seems to be SetPolygonTextureMapping, where the last value would be both the tw and th combined).
I've tried making CS versions of those, but with bad results.
I also am trying to figure out what the equivalent of this would be:
Code:
Level.Enable False
That command entirely disables a mesh from rendering, collision detection, etc.
So those are just some examples. I code like mad in VB with Truevision, but I'm still building up my C++ skills (took classes on it before, but never could fully jump off of VB). I mainly want my stuff to be fully cross-platform, and to not be limited by proprietary MS crap haha.
-eventhorizon
51 | Crystal Space Development / Support / Problems on VC2005 (aka VC8) | on: July 13, 2005, 07:27:16 am
I just recently tried building Crystal Space on Visual C++ 2005 Express Beta2/VC8 (the free download, which I have been using to develop my project with before, since I need better command completion and stuff), and it built fine (so no problems there). I used to have 0.98 R004 built with VC6, and my app (Skyscraper 1.1) being built in VC8, which worked perfectly, but decided to get the CVS version of CS and build it in VC8 (7/12/2005 snapshot). I'm having a preprocessor issue that is very strange, since I've defined all of the required stuff in the new manual for external VC7 projects, but it just doesn't work. I'm probably going to try messing around with the CS .h files to see what the problem is. I'd use VC71 if I could, but I'm broke haha.
This is what I'm getting:
c:\cs\include\csplatform.h(26) : fatal error C1083: Cannot open include file: 'csconfig.h': No such file or directory
PP definitions (debug build): WIN32;_DEBUG;_CONSOLE;CS_WIN32_CSCONFIG;__CRYSTAL_SPACE__;CS_DEBUG
portion of csplatform.h that's part of the problem:
// Include csconfig.h which contains the volatile configuration macros
#if defined (CS_WIN32_CSCONFIG)
# include <csutil/win32/csconfig.h>
#else
# include <csconfig.h>
#endif
So I'm guessing that the CS_WIN32_CSCONFIG definition is somehow not defined, even though it's explicitly defined in the project config (I have no clue why).
More info: I'm running on Win2k, using VC8 Express Beta2, and am using the Feburary 2003 Platform SDK (my laptop has a newer one - I should see if that might be the problem). I would much rather code in Linux but I need to be on Winsux on my laptop most of the time (this desktop machine runs in Linux most of the time though; Debian Etch/Testing).
Any thoughts on this?
-eventhorizon
52 | Crystal Space Projects / WIP Projects / Re: Skyscraper Project | on: July 12, 2005, 02:21:16 am
If anyone wants to help out, just email me (I'm currently having issues with getting Crystal Space to work properly haha; mainly because I'm migrating over from TrueVision3D).
[email protected]
Here's my 2.0 Prospectus from the website, which outlines what I've planned for the rewrite. Also wxWidgets has been dropped and I'll eventually use QT4 for external stuff:
------------------
Skyscraper Prospectus
Ryan Thoryk, 5/29/05
The Skyscraper Project originally began with the simplistic design of a 138-story skyscraper called the Triton Center that I completed on May 4, 2002, using the MyHouse for Windows 6.5 architectural designer application. Later that year I started tinkering with the TrueVision3D graphics engine through Visual Basic 6, and made a small building with a simple but working elevator. This became Skyscraper versions 0.1 and 0.2. I then continued to enhance it, and decided to try to simulate the entire Triton Center. I eventually finished the entire external structure of the building, and still had only 1 elevator. A single shaft bank was made, which housed 10 elevators (5 on each side), and then I eventually duplicated it to increase the number of elevators to 40, divided into 4 shaft banks. All the other parts of the program were made and enhanced, and I started to hit limitations that were not only in my program design, but also in the Visual Basic language itself. So after version 0.96 I forked the code and started redesigning the core of the program, but it became too unstable and needed a great deal of work (this was originally going to be version 0.97); so I took 0.96, fixed it up, and released it as 1.0. I renamed 0.97 to 1.1 alpha, which is the current development project. I rewrote somewhere around 25% of Skyscraper, and then after long periods of other stuff going on in my life, I decided to stop the VB rewrite and start porting the entire program over to C++ (I considered the C# and Java languages before choosing C++). The new C++ version is now at the beginning stages, and I am mainly brushing up on my C++ knowledge before continuing development.
Scalability and expandability factors were considered as early as summer 2003, but were postponed until later (and they’re now part of the design plans of the C++ version). In 2003 many people were complaining about how Skyscraper only simulated a single building, and couldn’t allow people to design others. I explained to them that Skyscraper was still in an early stage, and that the ability to load other buildings as data files is nowhere near easy. Many people also were practically drooling over the thought of having multiplayer deathmatch support in the program, but I explained that I needed to finish more of the main simulation before I start working on multiplayer features. For a while I also had the idea of creating a building designer applet inside the program, which would allow the user to create their own building and save it into a data file, which could then be loaded by the simulation. The original ideas called for a simplistic CAD-like interface that would allow the person to visually create what I had manually coded. I expanded on that idea recently by planning a single player portion of the simulation engine, which would allow the user to create the building during the simulation, and would make it operate very similar to both Sim Tower and Yoot Tower (sequel). All of these ideas will require a massive amount of coding, and so a team of developers would greatly help out.
Skyscraper version 2.0 (of which 1.1 alpha is part of) calls for a highly realistic, real-time, 3D first person simulation of buildings loaded via data files. It also calls for a building designer application that will allow users to create their own buildings and simulate them, single-player elements similar to Sim Tower and Yoot Tower, and multi-player elements such as deathmatch or capture-the-flag scenarios. Everything possible will be simulated, including all the regular parts of buildings (rooms, elevators, stairs, etc), but also crawlspaces, air ducts, elevator shafts, elevator escape hatches, breakable windows, pipe shafts, and lots more. The Visual Basic version of 1.1 alpha currently has an abstraction layer for the 3rd party 3D graphics engine, so that the current one (TrueVision3D) can be replaced by another (CrystalSpace 3D) with very little effort. The entire building simulation system will be contained in a series of DLL files, and is currently called the Scalable Building Simulator, or SBS. The program’s application file (EXE file) will only be the graphical front-end for SBS; everything else will be either handled internally by SBS or by other DLL libraries that would use the SBS API. This way, the program becomes an actual backend simulation engine that can be linked with other applications. It will use the wxWidgets library as the GUI toolkit, thus eventually allowing it to be seamlessly multiplatform (for Windows, MacOS, Linux, BSD, Solaris, IRIX, etc).
------------------
-eventhorizon
Skyscraper Project
on: June 29, 2005, 12:07:16 am
I just recently started rewriting Skyscraper to C++ with CrystalSpace, and am also trying to use the wxWidgets library in it (but the winmain functions of each conflict, including the window generation). Skyscraper 1.0 was written in VB6 and used the TrueVision3D engine, and a VB rewrite was originally started last year, but was abandoned in favor of this C++ version (1.0 had very sloppy monolithic code, but 1.1 is supposed to be very clean and highly abstracted, for the purpose of separating the simulation engine into a backend library). If you don't know what the program is, it's a building simulator, and will eventually be fully featured, with even capabilities to load custom buildings from data files. FPS single and multiplayer features are planned, and lots more.
Mainly I'm just learning CrystalSpace, and also I'm learning more of the areas of C++ that I wasn't comfortable with before.
To get a good idea of where the project is headed, read the 2.0 Prospectus on the site
Local CVS repository (Sourceforge CVS is also used):
Screenshot of version 1.0 (more on the site - and the FPS count is low on there because it's running on my laptop's crappy video chipset haha):
-eventhorizon
Nov 18, 2010 10:53 PM|JRatcliff|LINK
Hello;
I have been struggling with this for more than a week, and have not been able to get any further ahead. I need to create a web service that can receive an HTTP POST with an XML document as the body or payload. It will be coming from a PHP site, and I need to access it in ASP.NET.
I can create a web service and test it out correctly with no real hitch, but when testing with my third party's test page I am not getting the proper response. The third party company cannot provide me with much help since they do not know ASP.
The following is a sample code piece that can open a string of XML passed, and I have a form that can post to this test page.
public XmlDocument ProcessXML(String SourceXML)
{
XmlDocument NewSourceXML = new XmlDocument();
NewSourceXML.LoadXml(SourceXML);
.....
}
I know that once I can read the posted XML I can break it down and do what I need to do with it, but I am having issues receiving the XML at this point.
My third party provided me with a basic excerpt from a PHP file.
/*
* Get the HTTP request (body) and parse into an XML document
*/
$data = file_get_contents("php://input");
$xmlstr = trim( stripslashes( $data ) );
$xml = new SimpleXMLElement($xmlstr);
But this is of no help to me. I need to solve this so I can start receiving XML transmissions from this company.
Thanks for any help you can give.
Nov 19, 2010 01:32 PM|sachingusain|LINK
JRatcliff wrote: public XmlDocument ProcessXML(String SourceXML)
so the XML document is being passed as a string parameter. Correct?
Are they able to hit the URL for web service from their box? If yes, then ask them to provide details of errors they are getting.
Nov 19, 2010 04:37 PM|JRatcliff|LINK
The XML data is being posted to the web service, so I can only think it is being passed as an XML file structure. For the time being I have set up my procedure to receive a String, and have not had any luck setting it up to receive an XML data transmission. The excerpt of the PHP code is what they use to receive the XML; I have not found anything that is similar in ASP with C# script.
The third party is performing an HTTP POST; the test page is doing it from a form with a couple of JScripts. If you would like to see the test page you can see it at the following URL.
I have had to use FireFox to use the page, IE presents the ability to save a file so I have not been able to use IE on this test site. My WebService is located at the following URL.
The error I am getting is the following.
com.impact.exception.FatalException: Server returned status 500: Internal Server Error
Any help you can provide would be helpful.
Thanks
Nov 19, 2010 08:42 PM|Blast2hell|LINK
Hi, well first, to solve this problem you need to take a more stepped approach.
First, make sure you can get the web service to work for you, completely, and then worry about the third party with the PHP page. You'd be surprised how many PHP developers have no idea how to interact with web services, so you may be up against a technical challenge there even if your web service was the best thing since sliced bread.
Due to technical limitations of your client, you may want to consider building an HttpHandler instead, and then having them hit that, passing request parameters as appropriate... a lot of the time PHP people are a bit better with that sort of basic tech.
Nonetheless, you need to make sure your environment works perfectly fine before you involve your client. In this case you should perhaps build a separate website that consumes the service you built, and pass your own tests to it. If you do this all in one solution you should be able to easily step through the code to see how things are going.
Also, if you expect XML from someone else, you may want to learn how to build an XML validator so that you can validate the XML you're receiving. You can never assume the client is going to pass you something appropriate... users always do what you don't expect.
Nov 19, 2010 08:45 PM|Blast2hell|LINK
JRatcliff
public XmlDocument ProcessXML(String SourceXML)
{
XmlDocument NewSourceXML = new XmlDocument();
NewSourceXML.LoadXml(SourceXML);
.....
}
Additionally, what version of .NET are you using? If you're using 3.5 or higher, you may want to move to the LINQ to XML structure; personally I find it a lot easier to work with.
In this case your code would be
XDocument sourceXml = XDocument.Parse(SourceXML);
Then you can query against it, but you definitely need to validate the XML... who knows what kind of garbled goop they could send you.
Nov 20, 2010 04:11 PM|JRatcliff|LINK
Thanks for your ideas Blast2hell. I already have a web service that validates the XML based on the type of transmission my third party is to send. It even sends an XML transmission back confirming and replying to the transmission.
The problem seems to be with the variable that is meant to receive the XML transmission from the PHP form. When I use a String variable, the testing service works fine when I hit the page directly with IE and invoke the routine, but from the PHP test page using the HTTP POST I get an error reported of "Server returned status 500: Internal Server Error". I am guessing that the post is not passing a string through the post.
A sample of the test XML is as follows:
<>
I just have to be able to receive this XML through a post; once I have it I know I can break it down, store it, and return a validation in XML. There are of course a lot of other alterations to this XML, and it will be possible for it to have multiple deliverRequest cells in the live one. This is just a test XML to be used to confirm the ability to receive from my third party.
Nov 20, 2010 04:19 PM|JRatcliff|LINK
Also, I have tried using an XmlDocument variable, but the service responds with "The test form is only available for requests from the local machine." and can not be invoked through IE, and of course the test form responds with "Server returned status 500: Internal Server Error."
So the question still stands: how do I set up a web service to accept an XML data transmission from a PHP form using an HTTP POST with the XML as the body of the post? This third party is not willing to use SOAP to transmit the XML, which would be the proper way for ASP.
Any help that you can provide would be greatly accepted.
Thanks
JRatcliff
Nov 21, 2010 03:59 AM|Blast2hell|LINK
Right now it looks like your ProcessXML method is looking for a parameter to be passed to it of the type System.Xml.XmlNode. I think the php people may have trouble making the xmlnode type.
Also the php people will most likely need to download a tool or something to assist them with connection to a webservice. Like wsdl2php or NuSoap.
Did I understand you correctly that you've created your own test application, and invoked your webservice and everything worked?
I'm still of the mind that you probably need to do a webhandler instead.
Nov 21, 2010 03:09 PM|JRatcliff|LINK
An interesting idea, but I have never created a web handler before. I need to have something that can accept an XML file through a post, and after parsing the XML I will need to return back an XML response confirming the transmission. I have no problem parsing the XML, and I have no issues returning an XML response. I felt the web service process was the best way to approach it.
This web handler you have mentioned, how would I go about setting it up to receive the HTTP POST from PHP with an XML document in the body of the post?
Thanks
JRatcliff
Nov 21, 2010 07:56 PM|Blast2hell|LINK
Well, keep in mind I'm not saying your web service won't work, I'm just saying it may be technically too much for your PHP people. I could also be a bit biased against PHP people hehe.
Anyways, to answer your question. You would open an existing web application, or create a new one, based on your scenario. Then right click the project and choose "Add new item", and then choose Generic Handler. The file type for these is .ashx. Rename it to something appropriate... like XmlReceiver.ashx.
Handlers can be used to input and output all sorts of things... you can treat it like a normal web page, and access Request parameters... the site can then post the XML string as a URL parameter, for instance passing something like <node><val>stuff</val></node> in the request.
And then you can output the response appropriately. You'd most likely be able to take whatever code you had for your webservice already and put it in the handler, or even just have your handler call your webservice.
Brian
Nov 23, 2010 02:36 AM|JRatcliff|LINK
Thanks Brian.
I did some searching on the help forums and found a lot out about the Handler, and started playing with a test file. It took me a couple of hours, but I managed to get one of my test forms to post an XML file to the handler.
It got me pumped when I got that to happen; it got me a lot closer to having a solution. I will be testing my test handler online from the live site tomorrow. If all goes well then I have managed to get the solution. I will let you know later how it is going.
Thanks
JRatcliff
Nov 23, 2010 02:42 AM|Blast2hell|LINK
cool, let me know if you hit any snags
Nov 24, 2010 07:47 PM|JRatcliff|LINK
OK, now this is getting me really upset. The following is a sample of the Handler I have created, slightly modified.
public class hndTest1 : IHttpHandler {
private Boolean m_success = true;
public void ProcessRequest(HttpContext xmlText)
{
XmlTextReader tstReader = new XmlTextReader(xmlText.Request.InputStream);
XmlValidatingReader reader = new XmlValidatingReader(tstReader);
reader.ValidationEventHandler += new ValidationEventHandler(ValidationCallBack);
reader.Read();
while (reader.Read())
{
//Extra code to parse the elements and check for trigers
}
XmlDocument tstXmlDoc = new XmlDocument();
XmlDeclaration tstDeclaration = tstXmlDoc.CreateXmlDeclaration("1.0", "utf-8", null);
XmlElement rootNode = tstXmlDoc.CreateElement("xiamSMS");
tstXmlDoc.InsertBefore(tstDeclaration, tstXmlDoc.DocumentElement);
tstXmlDoc.AppendChild(rootNode);
XmlElement parentNode = tstXmlDoc.CreateElement("deliverResponse");
parentNode.SetAttribute("id", "123456");
tstXmlDoc.DocumentElement.PrependChild(parentNode);
XmlElement resultNode = tstXmlDoc.CreateElement("result");
resultNode.SetAttribute("status", "OK");
XmlText resultText = tstXmlDoc.CreateTextNode("+1234567890");
parentNode.AppendChild(resultNode);
resultNode.AppendChild(resultText);
xmlText.Response.ContentType = "text/xml";
xmlText.Response.Write(tstXmlDoc);
}
public bool IsReusable {
get {
return false;
}
}
private void ValidationCallBack(object sender, ValidationEventArgs args)
{
m_success = false;
}
}
Now what I have to receive is the following XML or something similar.
<>
The problem is that the validation file xiamSMSMessage.dtd creates a couple of issues. I had the third party company send me a copy of this file, since I did not have it. Now how do I handle the validation for this file in my code so that it will not generate any errors from the PHP posting form, all the while allowing me to parse the XML data passed and return back an XML document confirming receipt of the data?
Any thoughts on how to accomplish this would be greatly appreciated; I am under a time constraint.
Thanks
JRatcliff
Nov 25, 2010 10:09 PM|JRatcliff|LINK
I have made some progress in a test file. I have confirmed that I can receive an XML file, and as long as I send back a string version of my response XML I am fine. The new test file works with no errors.
Now my issue is opening up the XML file I am sent and getting around the validation file line, (<!DOCTYPE xiamSMS SYSTEM "xiamSMSMessage.dtd">).
I am opening the file with an XmlTextReader object; the advantage is that it allows me to read down the XML file and take out the different parts. Based on these parts I can build a response XML and insert parts of the file being read into the response. But I get tripped up on the DOCTYPE line; the minute my reader hits that line it has issues.
Is there a way to step past that line and not even load it into memory, so as to bypass it?
Or what code do I need to add that will allow my system to work with the Doctype and this DTD file?
Help please, if I can solve this I can get the site up and running.
Thanks
JRatcliff
Nov 25, 2010 11:29 PM|Blast2hell|LINK
Sorry, I'll take a look at your stuff in detail tomorrow... being turkey day and all, I'm a bit distracted. Also I have to go back in time to use the old XmlDocument stuff... I'm an XML-to-LINQ guy these days and don't use the old namespaces you're dealing with. I'll take some time to look at it. Out of the gate I think you're probably required to use a proper DTD when using the XML validating stuff... but I could be wrong; like I said, I need to research. I should have some time tomorrow to try to help you figure it out though.
Nov 28, 2010 05:48 PM|JRatcliff|LINK
Thanks Brian, you helped to solve my first problem so I am marking this one solved. I am creating another thread for the XML reading issue I am encountering. I will use the title "Problem reading an XML file with a DTD attached" if you are interested in helping out.
For those looking for the answer to my first problem: when designing a web service to receive an HTTP POST of XML from a PHP page, use a Generic Handler file to accept the HTTP POST, and access the XML information from the HttpContext variable by opening up its Request.InputStream. I found this solution to work better than using a Web Service, and it allowed me to accept the post.
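For reference, a stripped-down sketch of that kind of handler might look like the following (the class name and the echoed response here are just placeholders for illustration, not the code running on the live site):
public class XmlReceiver : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // read the raw XML out of the body of the HTTP POST
        XmlDocument doc = new XmlDocument();
        doc.Load(context.Request.InputStream);
        // ... parse the document and build a confirmation here ...
        context.Response.ContentType = "text/xml";
        context.Response.Write(doc.OuterXml); // or write your own response XML
    }
    public bool IsReusable { get { return false; } }
}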
Ultimately the use of SOAP is better for passing XML but when you have very little choice this could work.
So the last several projects I’ve worked on, I’ve wanted to have a push notification system that I could use to send messages to role instances so that they could take actions. There’s several push notification systems out there, but I was after some simple that would be included as part of my Windows Azure services. I’ve put a version of this concept into several proposals, but this week finally received time to create a practical demo of the idea.
For this demo, I’ve selected to use Windows Azure Service Bus Topics. Topics, unlike Windows Azure Storage queues give me the capability to have multiple subscribers each receive a copy of a message. This was also an opportunity to dig into a feature of Windows Azure I haven’t worked with in over a year. Given how much the API has changed in that time, it was a frustrating, yet rewarding exercise.
The concept is fairly simple. Messages are sent to a centralized topic for distribution. Each role instance then creates its own subscriber with the appropriate filter on it so it receives the messages it cares about. This solution allows for multiple publishers and subscribers and will give me a decent amount of scale. I’ve heard reports/rumors of issues when you get beyond several hundred subscribers, but for this demo, we’ll be just fine.
Now for this demo implementation, I want to keep it simple. It should be a central class that can be used by workers or web roles to create their subscriptions and receive notifications with very little effort. And to keep this simplicity going, give me just as easy a way to send messages back out.NotificationAgent
We’ll start by creating a class library for our centralized class, adding references to it for Microsoft.ServiceBus (so we can do our brokered messaging) and Microsoft.WindowsAzure.ServiceRuntime (for access to the role environment). I’m also going to create my NotificationTopic class.
Note: there are several supporting classes in the solution that I won’t cover in this article. If you want the full code for this solution, you can download it here.
The first method we’ll add to this is a constructor that takes the parameters we’ll need to connect to our service bus namespace as well as the name/path for the topic we’ll be using to broadcast notifications on. The first of these is creating a namespace manager so I can create topics and subscriptions, and a messaging factory that I’ll use to receive messages. I’ve split this out a bit so that my class can support being passed a TokenProvider (I hate demos that only use the service owner). But here are the important lines:
TokenProvider tmpToken = TokenProvider.CreateSharedSecretTokenProvider(issuerName, issuerKey);
Uri namespaceAddress = ServiceBusEnvironment.CreateServiceUri("sb", baseAddress, string.Empty);
this.namespaceManager = new NamespaceManager(namespaceAddress, tokenProvider);
this.messagingFactory = MessagingFactory.Create(namespaceAddress, tokenProvider);
We create a URI and a security token to use for interaction with our service bus namespace. For the sake of simplicity I’m using the issuer name (owner) and the service administration key. I’d never recommend this for a production solution, but it’s fine for demonstration purposes. We use these to create a NamespaceManager and MessagingFactory.
Now we need to create the topic, if it doesn’t already exist.
try
{
    // doesn't always work, so wrap it
    if (!namespaceManager.TopicExists(topicName))
        this.namespaceManager.CreateTopic(topicName);
}
catch (MessagingEntityAlreadyExistsException)
{
    // ignore, timing issues could cause this
}
Notice that I check to see if the topic exists, but I also trap for the exception. That’s because I don’t want to assume the operation is single threaded. With this block of code running in many role instances, it’s possible that another instance creates the topic between our existence check and our create call. So I like to wrap them in a try/catch. You can also just catch the exception, but I’ve long liked to avoid the overhead of unnecessary exceptions.
Finally, I’ll create a TopicClient that I’ll use to send messages to the topic.
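That last step is a single call; something along these lines should do it (assuming topicClient is a field of the class and the older Microsoft.ServiceBus.Messaging API):
this.topicClient = this.messagingFactory.CreateTopicClient(topicName);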
So by creating an instance of this class, I can properly assume that the topic exists, and I have all the items I need to send or receive messages.
Sending Messages
Next up, I create a SendMessage method that accepts a string message payload, the type of message, and a TimeSpan value that indicates how long the message should live. In this method we first create a BrokeredMessage, giving it an object that represents my notification message. We use the lifespan value that is passed in and set the type as a property. Finally, we send the message using the TopicClient we created earlier and do appropriate exception handling and cleanup.
try
{
    bm = new BrokeredMessage(msg);
    bm.TimeToLive = msgLifespan;
    // used for filtering
    bm.Properties[MESSAGEPROPERTY_TYPE] = messageType.ToString();
    topicClient.Send(bm);
    success = true;
}
catch (Exception)
{
    success = false;
    // TODO: do something
}
finally
{
    if (bm != null) // if was created successfully
        bm.Dispose();
}
Now the important piece here is the setting of a BrokeredMessage property. It’s this property that can be used later on to filter the messages we want to receive. So let’s not forget that. And you’ll also notice I have a TODO left to add some intelligent exception handling, like logging the issue.
Start Receiving
This is when things get a little more complicated. Now the experts (meaning the folks I know/trust that responded to my inquiry), recommend that instead of going “old school” and having a thread that’s continually polling for responses, we instead leverage async processing. So we’re going to make use of delegates.
First we need to define a delegate for the callback method:
public delegate bool RecieverCallback(NotificationMessage mesage, NotificationMessageType type);
We then reference the new delegate in the method signature for the message receiving starter:
public void StartReceiving(RecieverCallback callback, NotificationMessageType msgType = NotificationMessageType.All)
Now inside this method we first need to create our subscriber. Since I want to have one subscriber for each role instance, I’ll need to get this from the Role Environment.
// need to parse out deployment ID
string instanceId = Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.CurrentRoleInstance.Id;
subscriptionName = instanceId.Substring(instanceId.IndexOf('.') + 1);

SubscriptionDescription tmpSub = new SubscriptionDescription(topicName, subscriptionName);
Now is the point where we’ll add the in a filter using the Property that we set on the notification when we created it.
{
    Filter tmpFilter = new SqlFilter(string.Format("{0} = '{1}'", MESSAGEPROPERTY_TYPE, msgType));
    subscriptionClient.AddRule(SUBFILTER, tmpFilter);
}
I’m keeping it simple and using a SqlFilter using the property name we assigned when sending. So this subscription will only receive messages that match our filter criteria.
Now that all the setup is done, we’ll delete the subscription if it already exists (this gets rid of any messages and allows us to start clean) and create it new using the NameSpaceManager we instantiated in the class constructor. Then we start our async operation to retrieve messages:
asyncresult = subscriptionClient.BeginReceive(waittime, ReceiveDone, subscriptionClient);
Now in this, ReceiveDone is the callback method for the operation. This method is pretty straightforward. We make sure we’ve gotten a message (in case the operation simply timed out) and that we can get the payload. Then, using the delegate we set up earlier, we hand the message off to the caller’s callback. We end by starting another async call to get another message.
if (result != null)
{
    SubscriptionClient tmpClient = result.AsyncState as SubscriptionClient;
    BrokeredMessage brokeredMessage = tmpClient.EndReceive(result);
    //brokeredMessage.Complete(); // not really needed because your receive mode is ReceiveAndDelete
    if (brokeredMessage != null)
    {
        NotificationMessage tmpMessage = brokeredMessage.GetBody<NotificationMessage>();
        // do some type mapping here
        recieverCallback(tmpMessage, tmpType);
    }
}

// do recieve for next message
asyncresult = subscriptionClient.BeginReceive(ReceiveDone, subscriptionClient);
Now I’ve added two null checks in this method just to help out in case a receive operation fails. Even then, I won’t guarantee this works for all situations. In my tests, when I set the lifespan of a message to less than 5 seconds, I still had some issues (still sorting those out, but I wanted to get this sample out).
Client side implementation
Whew! Lots of setup there. This is where our hard work pays off. We define a callback method we’re going to hand into our notification helper class using the delegate we defined. We’ll keep it super simple:
private bool NotificationRecieved(NotificationMessage message, NotificationMessageType type) { Console.WriteLine(“Recieved Notification”); return true; }
Now we need to instantiate our helper class and start the process of receiving messages. We can do this with a private variable to hold on our object and a couple lines into role’s OnStart.
tmpNotifier = new NotificationTopic(ServiceNamespace, IssuerName, IssuerKey, TopicName);
tmpNotifier.StartReceiving(new NotificationTopic.RecieverCallback(NotificationRecieved), NotificationMessageType.All);
Now if we want to clean things up, we can also add some code to the role’s OnStop.
try
{
    if (tmpNotifier != null)
        tmpNotifier.StopReceiving();
}
catch (Exception e)
{
    Console.WriteLine("Exception during OnStop: " + e.ToString());
}

base.OnStop();
And that’s all we need.
In Closing
So that’s it for our basic implementation. I’ve uploaded the demo for you to use at your own risk. You’ll need to update the WebRole, WorkerRole, and NotifierSample project with the information about your Service Bus namespace. To run the demo, you will want to set the cloud service project as the startup project, and launch it. Then right click on the NotifierSample project and start debugging on it as well.
While this demo may work fine for certain applications, there is definitely room for enhancement. We can tweak our message lifespan, wait timeouts, and even how many messages we retrieve at one time. And it’s also not the only way to accomplish this. But I think it’s a solid starting point if you need this kind of simple, self-contained notification service.
PS – As configured, this solution will require the ability to send outbound traffic on port 9354. | http://java.dzone.com/articles/pushing-notifications-azure | CC-MAIN-2014-42 | en | refinedweb |
As per this post I am currently going through the process of creating some simple game creation tools using HTML5, more specifically using the YUI 3 library as well as the EaselJS canvas library.
This post illustrates the very skeleton upon which we are going to create our app. YUI3 provides a full MVC framework which you can use to create your application so I decided to make use of it. The end result of this code is remarkably minimal, it just creates a single page web application with different views representing different portions of the UI. Specifically, we will create a top zone where the menu will go, a left hand area where the level editing will occur, then a right hand panel which will change contextually. I also created a very simple data class, to illustrate how data works within the YUI MVC environment.
First off, if you have never heard of MVC, it is the acronym of Model View Controller. MVC is a popular design practice for separating your application in to logically consistent pieces. This allows you to separate your UI from your logic and your logic from your data ( the last two get a little gray in the end ). It adds a bit of upfront complexity, but makes it easier to develop, maintain and test non-trivial applications… or at least, that’s the sales pitch.
The simplest two minute description of MVC is as follows. The Model is your application's data. The View is the part of your application that is responsible for displaying to the end user. The Controller part is easily the most confusing part, and this is the bit that handles communications between the model and view, and is where your actual "logic" resides. For now just realize, if it ain't a view and it ain't a model, it's probably a controller.
It is also worth clarifying that MVC isn’t the only option. There is also MVVM ( Model-View-ViewModel ) and MVP ( Model-View-Presenter ), and semantics aside, they are all remarkably similar and accomplish pretty much the same thing. MVC is simply the most common/popular of the three.
Put simply, it will look initially more complex ( and it is more complex ), but this upfront work makes life easier down the road, making it generally a fair trade off.
Alright, enough chatter, now some code! The code is going to be split over a number of files. A lot of the following code is simply the style I chose to use, and is completely optional. It is generally considered good practice though.
At the top level of our hierarchy we have a pair of files, index.html and server.js. server.js is fairly optional for now, I am using it because I will (might?) be hosting this application using NodeJS. If you are running your own web server, you don’t need this guy, and won’t unless we add some server-side complexity down the road.
index.html is pretty much the heart of our application, but most of the actual logic has been parted out to other parts of the code, so it isn’t particularly complex. We will be looking at it last, as all of our other pieces need to be in place first.
Now within our scripts folder, you will notice two sub-folders models and views. These predictable enough are where our models and views reside. In addition, inside the views directory is a folder named templates. This is where our moustache templates are. Think of templates like simple HTML snippets that support very simple additional mark-up, allowing for things like dynamically populating a form with data, etc. If you’ve ever used PHP, ASP or JSP, this concept should be immediately familiar to you. If you haven’t, don’t worry, our templates are remarkably simple, and for now can just be thought of as HTML snippets. The .Template naming convention is simply something I chose, inside they are basically just HTML.
If you are basing your own product on any of this code, please be sure to check out here, where I refactored a great deal of this code, removing gross hacks and cleaning things up substantially!
Let’s start off with our only model person.js, which is the datatype for a person entry. Let’s look at the code now:
person.js
YUI.add('personModel', function(Y){
    Y.Person = Y.Base.create('person', Y.Model, [], {
        getName: function(){
            return this.get('name');
        }
    }, {
        ATTRS: {
            name: { value: 'Mike' },
            height: { value: 6 },
            age: { value: 35 }
        }
    });
}, '0.0.1', { requires: ['model'] });
The starting syntax may be a bit jarring and you will see it a lot going forward. The YUI.add() call is registering ‘personModel’ as a re-usable module, allowing us to use it in other code files. You will see this in action shortly, and this solves one of the biggest shortcomings of JavaScript, organizing code.
The line Y.Person = Y.Base.create() is creating a new object type in the Y namespace, named 'person' and inheriting all of the properties of Y.Model. This is YUI's way of providing OOP to a relatively un-OOP language. We then define a member function getName and 3 member variables name, height and age, giving each of the three default values… just cause. Of course, they aren't really member variables, they are entries in the object ATTRS, but you can effectively think of them as member variables if you are from a traditional OOP background. Next we pass in a version stamp ( 0.0.1 ), chosen pretty much at random by me. Next is a very important array named requires, which is a list of all the modules ( YUI, or user defined ) that this module depends on. We only need the model module. YUI is very modular and only includes the code bits you explicitly request, meaning you only get the JavaScript code of the classes you use.
So that is the basic form your code objects are going to take. Don’t worry, it’s nowhere near as scary as it looks. Now let’s take a look at a view that consumes a person model. That of course would be person.View.js. Again, the .View. part of that file name was just something I chose to do and is completely optional.
person.View.js
YUI.add('personView', function(Y){
    Y.PersonView = Y.Base.create('personView', Y.View, [], {
        initializer: function(){
            var that = this,
                request = Y.io('/scripts/views/templates/person.Template', {
                    on: {
                        complete: function(id, response){
                            var template = Y.Handlebars.compile(response.responseText);
                            that.get('container').setHTML(template(that.get('model').getAttrs(['name','age','height'])));
                        }
                    }
                });
        },
        render: function(){
            return this;
        }
    });
}, '0.0.1', { requires: ['view','io-base','personModel','handlebars'] });
Just like with our person model, we are going to make a custom module using YUI.add(), this one named ‘personView’. Within that module we have a single class, Y.PersonView, which is to say a class PersonView in the Y namespace. PersonView inherits from Y.View and we are defining a pair of methods, initializer() which is called when the object is created and render() which is called when the View needs to be displayed.
In initializer, we perform an AJAX callback to retrieve the template person.Template from the server. When the download is complete, the complete event will fire, with the contents of our file in the response.responseText field ( or an error, which we wrongly do not handle ). Once we have our template text downloaded, we “compile” it, which turns it into a JavaScript object. The next line looks obscenely complicated:
that.get('container').setHTML(template(that.get('model').getAttrs(['name','age','height'])));
A couple things are happening here. First we are using that because this is contextual in JavaScript. Within the callback function, it has a completely different value, so we cached the value going in. Next we get the property container that every Y.View object will have, and set it’s HTML using setHTML(). This is essentially how you render a view to the screen. The parameter to setHTML is also a bit tricky to digest at first. Essentially the method template() is what compiles a moustache template into actual HTML. A template, as we will see in the moment, may be expecting some data to be bound, in this case name, age and height which all come from our Person model. Don’t worry, this will make sense in a minute.
Our render method doesn’t particularly do anything, just returns itself. Again we specify our modules dependency in the requires array, this time we depend on the modules view, io-base, personModel and handlebars. As you can see, we are consuming our custom defined personModel module as if it was no different than any of the built-in YUI modules. It is a pretty powerful way of handling code dependencies.
Now let’s take a look at our first template.
person.Template
<div style="width:20%;float:right">
    <div align=right>
        <img src=
    </div>
    <p><hr /></p>
    <div>
        <h2>About {{name}}:</h2>
        <ul>
            <li>{{name}} is {{height}} feet tall and {{age}} years of age.</li>
        </ul>
    </div>
</div>
As you can see, a template is pretty much just HTML, with a few small exceptions. Remember a second ago when we passed data in to the template() call, this is where it is consumed. The values surrounded by {{ }} ( thus the name moustache! ) are going to be substituted when the HTML is generated. Basically it looks for a value by the name within the {{ }} marks and substitutes it into the HTML. For example, {{name}}, looks for a value named name, which it finds and substitutes it’s value mike in the results. Using templates allows you to completely decouple your HTML from the rest of your application. This allows you to source out the graphic work to a designer, perhaps using a tool like DreamWeaver, then simply add moustache markup for the bits that are data-driven.
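To make the substitution concrete, here is a tiny standalone illustration of what the compiled template() function does (the string and data here are made up for the example, not part of the project code):
var template = Y.Handlebars.compile("<h2>About {{name}}:</h2>");
var html = template({ name: "Mike" });  // produces "<h2>About Mike:</h2>"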
What you may be asking yourself is, how the hell did the PersonView get its model populated in the first place? That's a very good question.
In our application, our view is actually going to be composed of a number of sub-views. There is a view for the area the map is going to be edited in, a view for the context sensitive editing will occur ( currently our person view ), then finally a view where our menu will be rendered. However, we also have a parent view that holds all of these child views, sometimes referred to as a composite view. This is ours:
editor.View.js
YUI.add('editorView', function(Y){
    Y.EditorView = Y.Base.create('editorView', Y.View, [], {
        initializer: function(){
            var person = new Y.Person();
            this.pv = new Y.PersonView({model: person});
            this.menu = new Y.MainMenuView();
            this.map = new Y.MapView();
        },
        render: function(){
            var content = Y.one(Y.config.doc.createDocumentFragment());
            content.append(this.menu.render().get('container'));
            var newDiv = Y.Node.create("<div style='width:100%'/>");
            newDiv.append(this.map.render().get('container'));
            newDiv.append(this.pv.render().get('container'));
            content.append(newDiv);
            this.get('container').setHTML(content);
            return this;
        }
    });
}, '0.0.1', { requires: ['view','io-base','personView','mainMenuView','mapView','handlebars'] });
The start should all be pretty familiar by now. We again are declaring a custom module editorView. This one also inherits from Y.View, the major difference is in our initializer() method, we create a Y.Person model, as well as our 3 custom sub-views, a PersonView, a MainMenuView and a MapView ( the last two we haven’t seen yet, and are basically empty at this point ). As you can see in the constructor for PersonView, we pass in the Y.Person person we just created. This is how a view gets it’s model, or at least, one way.
Our render() method is a bit more complicated, because it is responsible for creating each of it’s child views. First we create a documentFragment, which is a chunk of HTML that isn’t yet part of the DOM, so it wont fire events or cause a redraw or anything else. Basically think of it as a raw piece of HTML for us to write to, which is exactly what we do. First we render our MainMenuView, which will ultimately draw the menu across the screen. Then we create a new full width DIV to hold our other two views. We then render the MapView to this newly created div, then render the PersonView to the div. Finally we append our new div to our documentFragment. Finally we set our view’s HTML to our newly created fragment, causing all the views to be rendered to the screen.
Once again, we set a version stamp, and declare our dependencies. You may notice that we never had to include personModel, this is because personView will resolve this dependency for us.
Lets quickly look at each of those other classes ( mainMenuView and mapView ) and their templates, although all of them are mostly placeholders for now.
mainMenu.View.js
YUI.add('mainMenuView', function(Y){
    Y.MainMenuView = Y.Base.create('mainMenuView', Y.View, [], {
        initializer: function(){
            var that = this,
                request = Y.io('/scripts/views/templates/mainMenu.Template', {
                    on: {
                        complete: function(id, response){
                            var template = Y.Handlebars.compile(response.responseText);
                            //that.get('container').setHTML(template(that.get('model').getAttrs(['name','age','height'])));
                            that.get('container').setHTML(template());
                        }
                    }
                });
        },
        render: function(){
            return this;
        }
    });
}, '0.0.1', { requires: ['view','io-base','handlebars'] });
mainMenu.Template
<div style="width:100%">This is the area where the menu goes. It should be across the entire screen</div>
map.View.js
YUI.add('mapView', function(Y){
    Y.MapView = Y.Base.create('mapView', Y.View, [], {
        initializer: function(){
            var that = this,
                request = Y.io('/scripts/views/templates/map.Template', {
                    on: {
                        complete: function(id, response){
                            var template = Y.Handlebars.compile(response.responseText);
                            that.get('container').setHTML(template());
                            //that.get('container').setHTML(template(that.get('model').getAttrs(['name','age','height'])));
                        }
                    }
                });
        },
        render: function(){
            return this;
        }
    });
}, '0.0.1', { requires: ['view','io-base','handlebars'] });
map.Template
<div style="width:80%;float:left"> This is where the canvas will go </div>
Now, we let’s take a quickly look at server.js. As mentioned earlier, this script simply provides a basic NODEJS based HTTP server capable of serving our app.
server.js
var express = require('express'),
    server = express();

server.use('/scripts', express.static(__dirname + '/scripts'));

server.get('/', function (req, res) {
    res.sendfile('index.html');
});

server.listen(process.env.PORT || 3000);
I wont really bother explaining what’s going on here. If you are going to use Node, there is a ton of content on this site already about setting up a Node server. Just click on the Node tag for more articles.
Finally, we have index.html which is the heart of our application and what ties everything together and this is the file that is first served to the users web browser, kicking everything off.
index.html
<!DOCTYPE html>
<html>
  <head>
    <title>GameFromScratch example YUI Framework/NodeJS application</title>
  </head>
  <body>
    <script src=""></script>
    <script src="/scripts/models/person.js"></script>
    <script src="/scripts/views/person.View.js"></script>
    <script src="/scripts/views/map.View.js"></script>
    <script src="/scripts/views/mainMenu.View.js"></script>
    <script src="/scripts/views/editor.View.js"></script>
    <script>
      YUI().use('app', 'editorView', function (Y) {
        var app = new Y.App({
          views: {
            editorView: {type: 'EditorView'}
          }
        });
        app.route('/', function () {
          this.showView('editorView'); //,{model:person});
        });
        app.render().dispatch();
      });
    </script>
  </body>
</html>
This sequence of <script> tags is very important, as it is what causes each of our custom modules to be evaluated in the first place. There are cleaner ways of handling this, but this way is certainly easiest. Basically for each module you add, include it here to cause that code to be evaluated.
Next we create our actual Y function/namespace. You know how we kept adding our classes to Y., well this is where Y is defined. YUI uses an app loader to create the script file that is served to your clients browser, which is exactly what YUI.use() is doing. Just like the requires array we passed at the bottom of each module definition, you pass use() all of the modules you require, in this case we need the app module from YUI, as well as our custom defined editorView module.
Next we create a Y.App object. This is the C part of MVC. The App object is what creates individual views in response to different URL requests. So far we only handle one request “/”, which causes the editorView to be created and shown. Finally we call app.render().dispatch() to get the ball rolling, so our editorView will have it’s render() method called, which will in turn call the render method of each of it’s child views, which in turn will render their templates…
Don’t worry if that seemed scary as hell, that’s about it for infrastructure stuff, and it is a solid foundation to build a much more sophisticated application on top of.
Of course, there is nothing to say I haven’t made some brutal mistakes and need to rethink everything!
Now, if you open it up in a browser ( localhost:3000/ if you used Node ), you will see:
Nothing too exciting as of yet, but as you can see, the menu template is rendered across the top of the screen, the map view is rendered to the left and the Person view is rendered on the right. As you can see from the text, the data from our Person model is compiled and rendered in the resulting HTML.
You can download the complete project archive right here.
In this part we are going to look at how you handle mouse and keyboard input in LibGDX. There are two ways to go about handling input, by polling for it ( as in… “Has anything happened yet? No, ok… What about now? No, ok… Now? Yes! Handle it” ) or by handling events ( “Hey, you, I’v got this event for you!” ). Which you go with generally depends on the way you structure your code. Polling tends to be a bit more resource intensive but at the end of the day that is mostly a non-factor.
Polling the keyboard for input
Let’s jump right in and look at how you poll the keyboard for input. Here is the code:
package input.gamefromscratch.com;

import com.badlogic.gdx.ApplicationListener;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.Input;
import com.badlogic.gdx.graphics.GL10;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;

public class InputDemo implements ApplicationListener {
    private SpriteBatch batch;
    private Texture texture;
    private Sprite sprite;

    @Override
    public void create() {
        float w = Gdx.graphics.getWidth();
        float h = Gdx.graphics.getHeight();
        batch = new SpriteBatch();
        texture = new Texture(Gdx.files.internal("data/0001.png"));
        sprite = new Sprite(texture);
        sprite.setPosition(w/2 - sprite.getWidth()/2, h/2 - sprite.getHeight()/2);
    }

    @Override
    public void dispose() {
        batch.dispose();
        texture.dispose();
    }

    @Override
    public void render() {
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);

        if(Gdx.input.isKeyPressed(Input.Keys.LEFT)){
            if(Gdx.input.isKeyPressed(Input.Keys.CONTROL_LEFT))
                sprite.translateX(-1f);
            else
                sprite.translateX(-10.0f);
        }
        if(Gdx.input.isKeyPressed(Input.Keys.RIGHT)){
            if(Gdx.input.isKeyPressed(Input.Keys.CONTROL_LEFT))
                sprite.translateX(1f);
            else
                sprite.translateX(10.0f);
        }

        batch.begin();
        sprite.draw(batch);
        batch.end();
    }

    @Override
    public void resize(int width, int height) { }

    @Override
    public void pause() { }

    @Override
    public void resume() { }
}
Other than the highlighted bit and the translateX method, everything here we have seen before. Basically we draw a simple sprite centered to the screen and each frame we check to see if the user has pressed the LEFT or RIGHT arrow. If they have, we check if they also have the left control key held. If so, we move slowly to the left or right. If they don’t have Control pressed, we move instead by 10 pixels.
Here is the app, you need to click it first to give it keyboard focus:
If it doesn’t work in an frame, click here.
Just use the left and right arrows to move the jet. Hold down control to move slowly. There is no clipping so the sprite can fly way off screen.
In terms of what the new code is doing, the Sprite.translateX method is pretty self explanatory. It moves the sprite by a certain amount of pixels along the X axis. There is a translateY method as well, as well as a more general translate method. The key method in this example is isKeyPressed() member function of the input instance of the global Gdx object. We used a similar instance member when we accessed Gdx.files. These are public static references to the various sub-systems GDX depends on, you can read more here. isKeyPressed is passed a Key value defined in the Keys object and returns true if that key is currently pressed. As you can see when we later tested if the Control key is also pressed, multiple keys can be pressed at the same time.
Polling the Mouse for input
Now let’s take a look at how you poll the mouse for input. To save space, this code is identical to the last example, with only the render() method replaced.
public void render() { Gdx.gl.glClearColor(1, 1, 1, 1); Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT); if(Gdx.input.isButtonPressed(Input.Buttons.LEFT)){ sprite.setPosition(Gdx.input.getX() - sprite.getWidth()/2, Gdx.graphics.getHeight() - Gdx.input.getY() - sprite.getHeight()/2); } if(Gdx.input.isButtonPressed(Input.Buttons.RIGHT)){ sprite.setPosition(Gdx.graphics.getWidth()/2 -sprite.getWidth()/2, Gdx.graphics.getHeight()/2 - sprite.getHeight()/2); } batch.begin(); sprite.draw(batch); batch.end(); }
Here we instead are checking if a mouse button has been pressed using isButtonPressed, passing in a button value defined in the Buttons object. If the left button is pressed, we then poll the mouse position using Gdx.input.getX() and Gdx.input.getY() and set the sprite's location to that position. The math may look a bit overly complicated; why didn't we simply set the location to the values returned by getX/Y? There are two reasons. First, our sprite's coordinate is relative to its bottom left corner, so if we want to center the sprite, we need to take half the sprite's width and height into consideration. The next complication comes from the fact that LibGDX sets the origin at the bottom left corner, but mouse positions are relative to the top left corner. Simply subtracting the position from the screen height gives you the location of the mouse in screen coordinates. We also check to see if the user has hit the right mouse button, and if they have we reposition the jet sprite at the center of the window.
If it doesn't work in an iframe, click here.
Once again, you need to click within above before it will start receiving mouse events ( depending on your browser ). Left click and the sprite should move to the location you clicked. Right click to return to default ( in theory… ), right click behaviour is a bit random in web browsers.
Event driven keyboard and mouse handling
Now we will look at handling the functionality of both of the above examples ( as a single example ), but this time using an event driven approach.
package input.gamefromscratch.com;

import com.badlogic.gdx.ApplicationListener;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.Input.Buttons;
import com.badlogic.gdx.Input.Keys;
import com.badlogic.gdx.InputProcessor;
import com.badlogic.gdx.graphics.GL10;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;

public class InputDemo implements ApplicationListener, InputProcessor {
    private SpriteBatch batch;
    private Texture texture;
    private Sprite sprite;
    private float posX, posY;

    @Override
    public void create() {
        float w = Gdx.graphics.getWidth();
        float h = Gdx.graphics.getHeight();
        batch = new SpriteBatch();
        texture = new Texture(Gdx.files.internal("data/0001.png"));
        sprite = new Sprite(texture);
        posX = w/2 - sprite.getWidth()/2;
        posY = h/2 - sprite.getHeight()/2;
        sprite.setPosition(posX, posY);
        Gdx.input.setInputProcessor(this);
    }

    @Override
    public void dispose() {
        batch.dispose();
        texture.dispose();
    }

    @Override
    public void render() {
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
        sprite.setPosition(posX, posY);
        batch.begin();
        sprite.draw(batch);
        batch.end();
    }

    @Override
    public void resize(int width, int height) { }

    @Override
    public void pause() { }

    @Override
    public void resume() { }

    @Override
    public boolean keyDown(int keycode) {
        float moveAmount = 1.0f;
        if(Gdx.input.isKeyPressed(Keys.CONTROL_LEFT))
            moveAmount = 10.0f;
        if(keycode == Keys.LEFT)
            posX -= moveAmount;
        if(keycode == Keys.RIGHT)
            posX += moveAmount;
        return true;
    }

    @Override
    public boolean keyUp(int keycode) {
        return false;
    }

    @Override
    public boolean keyTyped(char character) {
        return false;
    }

    @Override
    public boolean touchDown(int screenX, int screenY, int pointer, int button) {
        if(button == Buttons.LEFT){
            posX = screenX - sprite.getWidth()/2;
            posY = Gdx.graphics.getHeight() - screenY - sprite.getHeight()/2;
        }
        if(button == Buttons.RIGHT){
            posX = Gdx.graphics.getWidth()/2 - sprite.getWidth()/2;
            posY = Gdx.graphics.getHeight()/2 - sprite.getHeight()/2;
        }
        return true;
    }

    // remaining InputProcessor members, unused in this example
    @Override
    public boolean touchUp(int screenX, int screenY, int pointer, int button) { return false; }

    @Override
    public boolean touchDragged(int screenX, int screenY, int pointer) { return false; }

    @Override
    public boolean mouseMoved(int screenX, int screenY) { return false; }

    @Override
    public boolean scrolled(int amount) { return false; }
}
And here it is running:
If it doesn't work in an iframe, click here.
public class InputDemo implements ApplicationListener, InputProcessor {
We are implementing another interface, InputProcessor, which as you can see adds a number of overrides to our code. The most important ones we are dealing with here are keyDown and touchDown. Touch you say? Yeah, LibGDX treats the mouse and touch input as the same thing. We will look at this in a bit more detail later on. In addition to implementing the various methods of our interface, we also need to register our InputProcessor with the global input instance, this is done here:
Gdx.input.setInputProcessor(this);
At this point, our various event handlers will now be called whenever an event occurs. keyDown will be fired when a key is pressed ( while keyUp is fired when it is released, and keyTyped is fired after it has been pressed and released ). The parameter is the value of the key pressed. Once again, these values are available in the Keys object. One thing you may have noticed is that we still poll to see if the Control key is pressed. The alternative would be to set a flag when the control key is pressed and clear it when it is released. It is important to realize that a keyDown event will be fired for each individual key pressed, so if you want to handle multiple simultaneous key presses, this may not be the best way to approach the subject. Another thing you might notice is that you have to hit the key multiple times to move. This is because a key press generates only a single event ( as does its release ). If you want to have the old behavior where holding down the key moves the character continuously, you will need to implement the logic yourself. Once again, this can simply be done by setting a flag in your code on keyDown and toggling it when the keyUp event is called.
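One possible way to get continuous movement back with the event-driven approach (a sketch based on the example above, not part of the original listing) is to track the key state in a couple of boolean flags and apply the movement every frame:

private boolean movingLeft, movingRight;

@Override
public boolean keyDown(int keycode) {
    if (keycode == Keys.LEFT)  movingLeft = true;
    if (keycode == Keys.RIGHT) movingRight = true;
    return true;
}

@Override
public boolean keyUp(int keycode) {
    if (keycode == Keys.LEFT)  movingLeft = false;
    if (keycode == Keys.RIGHT) movingRight = false;
    return true;
}

// then, in render():
if (movingLeft)  posX -= 10.0f;
if (movingRight) posX += 10.0f;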
The touchDown event on the other hand is much more straightforward. It can be a bit confusing handling "mouse" events called "touches", but it makes sense. Generally the logic you handle for both would be exactly the same, so no sense treating them differently. The parameters passed in to touchDown are the x and y coordinates of the touch/click location, the pointer and button clicked. On a mobile device the Button value will always be Buttons.LEFT. Once again, screen coordinates and image coordinates aren't the same, so we need to deal with that in our positioning. Notice how I glossed over just what exactly pointer is? Well, pointer is a bit oddly named in my opinion. TouchIndex would probably have made more sense, especially with pointer having a pair of very well defined meanings already. The pointer value is a value between 0 and n ( defined as 20 in LibGDX, in reality much lower ) that represents the ORDER in which the touch event occurred in the event of multiple simultaneous touches. Therefore if you have multiple fingers touching, a pointer value of 0 would indicate that this touch event represents the first finger to touch the screen, while a value of 3 would be the fourth finger to touch the screen. Don't worry, we will talk about this later when we deal specifically with touch.
Stopwords are words that we remove during preprocessing when we don’t care about sentence structure. They are usually the most common words in a language and don’t provide any information about the tone of a statement. They include words such as “a”, “an”, and “the”.
NLTK provides a built-in library with these words. You can import them using the following statement:
from nltk.corpus import stopwords stop_words = set(stopwords.words('english'))
We create a set with the stop words so we can check if the words are in a list below.
Now that we have the words saved to stop_words, we can use tokenization and a list comprehension to remove them from a sentence:
nbc_statement = "NBC was founded in 1926 making it the oldest major broadcast network in the USA" word_tokens = word_tokenize(nbc_statement) # tokenize nbc_statement statement_no_stop = [word for word in word_tokens if word not in stop_words] print(statement_no_stop) # ['NBC', 'founded', '1926', 'making', 'oldest', 'major', 'broadcast', 'network', 'USA']
In this code, we first tokenized our string, nbc_statement, then used a list comprehension to return a list with all of the stopwords removed.
Instructions
At the top of your script, import stopwords from NLTK. Save all English stopwords, as a set, to a variable called stop_words.
Tokenize the text in survey_text and save the result to tokenized_survey.
Remove stop words from tokenized_survey and save the result to text_no_stops. | https://production.codecademy.com/courses/natural-language-processing/lessons/text-preprocessing/exercises/stopword-removal | CC-MAIN-2021-04 | en | refinedweb
In my setup I have two monitors, the second being 120Hz and the first being 60Hz. I want to create a Panda window on the second monitor and lock the refresh rate to 120Hz. While I see in the manual how to enable vsync and lock the frame rate, I couldn't find info on how to specify which display to open the Panda window on.
Hi, welcome to the forums!
Perhaps just using the window origin to place it past the end of the first monitor on the virtual desktop will work?
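For example, something like this before creating ShowBase should do it (a sketch; the 1920 offset assumes the primary display is 1920 pixels wide, adjust to your actual resolution):

from panda3d.core import load_prc_file_data

# place the window's origin just past the right edge of the first monitor
load_prc_file_data("", "win-origin 1920 0")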
Which operating system are you using?
Hi rdb,
Sorry, I’m running on 64bit Windows 10.
My concern with creating the window on the first monitor and then moving it is that the max refresh rate may be locked to vsync of the first monitor rather than the second one.
Note that the resolutions of the two monitors also do not match, so we don’t end up with an extended display with total of X*2 x Y resolution.
How should I test this?
Thanks.
Update: The framerate meter that comes with Panda seems to indicate that the vsync framerate of the Panda window is updated as the window is moved to a new display. However, there seems to be an issue with the Panda3D Task timing when I do this.
Here’s what I do and my code to show the problem:
I’m running tests on human perception of pixel shifting at 120Hz (there are different rows displayed each frame, persistence of vision is meant to combine them into one frame).
My code seems to run fine on the 60Hz primary monitor, flickery because the pixel shifting is done at 60Hz, but when I move the window to the second monitor, the pixel shifting stops. It doesn't stop in the sense that it is no longer perceivable, but in the sense that only one of the pixel-row images is visible, meaning that either the task has been frozen, which a simple debug line shows isn't the case, or that the timing has been messed up, which seems to be the case given that a high-speed camera shows that indeed only one row of pixels is shown all the time.
from panda3d.core import *
from direct.showbase.ShowBase import ShowBase
from direct.task import Task

load_prc_file_data("", "vsync true")
load_prc_file_data("", "win-size 1280 720")
load_prc_file_data("", "textures-auto-power-2 1")
load_prc_file_data("", "textures-power-2 up ")

base = ShowBase()
base.set_frame_rate_meter(True)

#
def load_image_as_plane(filepath, yresolution = 720):
    tex = loader.load_texture(filepath)
    tex.set_border_color(Vec4(0,0,0,0))
    tex.set_wrap_u(Texture.WMBorderColor)
    tex.set_wrap_v(Texture.WMBorderColor)
    # disable texture filtering, optional
    tex.set_magfilter(SamplerState.FT_nearest)
    tex.set_minfilter(SamplerState.FT_nearest)
    cm = CardMaker(filepath + " card")
    cm.set_frame(-tex.get_orig_file_x_size(), tex.get_orig_file_x_size(),
                 -tex.get_orig_file_y_size(), tex.get_orig_file_y_size())
    card = NodePath(cm.generate())
    card.set_texture(tex)
    card.set_scale(card.get_scale() / yresolution)
    card.flatten_light()  # apply scale
    return card

image1 = load_image_as_plane("1.png")
image1.reparentTo(aspect2d)
image2 = load_image_as_plane("2.png")
image2.reparentTo(aspect2d)
image2.hide()

def pixel_shift(task):
    if image1.is_hidden() == True:
        image1.show(); image2.hide()
    else:
        image2.show(); image1.hide()
    return task.cont

base.task_mgr.add(pixel_shift, "pixel_shift")
base.run()
Hmm… This is a bit of a guess, but what happens if you set an explicit clock-rate of 120Hz, either together with or instead of enabling v-sync?
Something like this:
load_prc_file_data("", "clock-mode limited") load_prc_file_data("", "clock-frame-rate 120")
Hi,
In both these cases the two images displaying the different rows seem to be shifted randomly and slower.
Please note that the config.prc setting for vsync is sync-video, not vsync.
Thanks, looks like Panda was using the setting from Config.prc instead.
However, this doesn’t fix the issue.
Turning off the main monitor and then running the script seems to make it run correctly. I think we have a more general bug or issue here relating to multi-monitor setups and the task manager. | https://discourse.panda3d.org/t/how-to-use-the-secondary-monitor/26880 | CC-MAIN-2021-04 | en | refinedweb
- 13 Dec, 2011 1 commit
shared_ptr is a candidate for the next STL version and Windows has already adopted it. This change removes ambiguity between the Boost and Windows versions of the class.
- 12 Dec, 2011 1 commit
Some compilers warn if "this" is referred to in a constructor initialization list. This change replaces such initializations with an assignment within the constructor body.
- 20 Oct, 2011 1 commit
- Jeremy C. Reed authored
- 19 Jul, 2011 1 commit
it's always NULL because it cannot be configured with TSIG keys. it should be done in a separate ticket.
- 06 Jul, 2011 1 commit
type matching for the shared_ptr template parameters in "?:". Note that this fix doesn't loosen the constness, so it doesn't compromise anything.
- 01 Jul, 2011 2 commits
I also noticed some of the comments in the tests are stale or not really correct, so I fixed them, too.
typedef. RequestACL is itself a shortcut, so it doesn't make sense to define another one.
- 30 Jun, 2011 1 commit
framework. it now accepts "action only" rule, so the test test in that case was updated accordingly.
- 29 Jun, 2011 1 commit
definition of uint16_t in the boost namespace. So avoid doing 'using namespace boost'. Instead, import a specific name used in this file. additional cleanups are made: be sure to include stdint.h just in case, and remove unnecessary boost header file.
- 27 Jun, 2011 1 commit
- 24 Jun, 2011 3 commits
the main purpose of this change is for query ACL, but it also eliminates the need for hardcoding the default listen_on setting.
- 22 Jun, 2011 3 commits
so that the methods have reasonable length.
- 21 Jun, 2011 2 commits
- 06 Jun, 2011 1 commit
- 03 Jun, 2011 1 commit
- 25 May, 2011 2 commits
constructor. this fixes a regression report from cppcheck.
- 16 May, 2011 1 commit
- 13 May, 2011 1 commit
- 22 Apr, 2011 1 commit
- zhanglikun authored
[trac598_new] Reimplement the ticket based on new branch. Implement the simplest forwarder by refactoring the code
- 14 Apr, 2011 1 commit
- 12 Apr, 2011 1 commit
- 11 Apr, 2011 1 commit
- 08 Apr, 2011 1 commit
- Ocean Wang authored
- 17 Mar, 2011 1 commit
-)
- 02 2 commits
Taken out from resolver and put into the server_common library.
Spaces around comma | https://gitlab.isc.org/isc-projects/kea/-/commits/f1f4ce3e3014366d4916f924655c27761327c681/src/bin/resolver/resolver.cc | CC-MAIN-2021-04 | en | refinedweb |
December 8, 2020
Bartek Iwańczuk, Luca Casonato, Ryan Dahl
Today we are releasing Deno 1.6.0. This release contains some major features, and many bug fixes. Here are some highlights:
deno compile can build your Deno projects into completely standalone executables
If you already have Deno installed you can upgrade to 1.6 by running deno upgrade. To build from source using cargo: cargo install deno
deno compile: self-contained, standalone binaries
We aim to provide a useful toolchain of utilities in the Deno CLI. Examples of this are deno fmt and deno lint. Today we are pleased to add another developer tool to the Deno toolchain: deno compile.
deno compile does for Deno what nexe or pkg do for Node: create a standalone, self-contained binary from your JavaScript or TypeScript source code. This has been the single most upvoted issue on the Deno issue tracker.
It works like this:
$ deno compile --unstable file_server
$ ./file_server
HTTP server listening on
As with all new features in Deno, deno compile requires the --unstable flag to communicate that there may be breaking changes to the interface in the short term. If you have feedback, please comment in the Deno discord, or create an issue with feature requests on the Deno issue tracker.

For implementation details, see #8539.
For now there are several limitations you may encounter when using deno compile. If you have a use case for one of these, please respond in the corresponding tracking issues.
You might have noticed that unlike other tools that create standalone, self-contained binaries for JS (like pkg), deno compile does not have a virtual file system that can be used to bundle assets. We are hoping that with future TC39 proposals like import assertions and asset references, the need for a virtual file system will disappear, because assets can then be expressed right in the JS module graph.
Currently the deno compile subcommand does not support cross platform compilation. Compilation for a specific platform has to happen on that platform. If there is demand, we would like to add the ability to cross compile for a different architecture using a --target flag when compiling. The tracking issue for this is #8567.
Due to how the packaging of the binary works currently, a lot of unnecessary code is included in the binary. From preliminary tests we have determined that we could reduce the final binary size by around 60% (to around 20MB) when stripping out this unnecessary code. Work on this front is happening at the moment (e.g. in #8640).
Deno 1.6 ships with a new deno lsp subcommand that provides a language server implementing the Language Server Protocol. LSP allows editors to communicate with Deno to provide all sorts of advanced features like code completion, linting, and on-hover documentation.
The new deno lsp subcommand is not yet feature-complete, but it implements many of the main LSP functionalities:

- deno fmt integration
- deno lint integration
The Deno VSCode extension does not yet support deno lsp. It is still more feature rich than the nascent deno lsp can provide. However, we expect this to change in the coming weeks as the LSP becomes more mature. For now, if you want to try deno lsp with VSCode, you must install VSCode Deno Canary.

Make sure that you have installed Deno 1.6 before trying this new extension. And make sure to disable the old version of the extension, otherwise diagnostics might be duplicated.
To track the progress of the development follow issue #8643. We will release a new version of vscode-deno that uses deno lsp when #8643 is complete.
In Deno 1.4 we introduced some stricter TypeScript type checks in --unstable that enabled us to move a bunch of code from JS into Rust (enabling huge performance increases in TypeScript transpilation and bundling). In Deno 1.5 these stricter type checks were enabled for everyone by default, with an opt-out in the form of the "isolatedModules": false TypeScript compiler option.

In this release this override has been removed. All TypeScript code is now run with "isolatedModules": true.

For more details on this, see the Deno 1.5 blog post.
Deno 1.6 ships with the latest stable version of TypeScript.
For more information on new features in TypeScript 4.1, see Announcing TypeScript 4.1.
For advanced users that would like to test out bug fixes and features before they land in the next stable Deno release, we now provide a canary update channel. Canary releases are made multiple times a day, once per commit on the master branch of the Deno repository.

You can identify these releases by the 7 character commit hash at the end of the version, and the canary string in the deno --version output.
Starting with Deno 1.6, you can switch to the canary channel, and download the latest canary by running deno upgrade --canary. You can jump to a specific commit hash using deno upgrade --canary --version 5eedcb6b8d471e487179ac66d7da9038279884df.

Warning: jumping between canary versions, or downgrading to stable, may corrupt your DENO_DIR.
The zip files of the canary releases can be downloaded from.
aarch64-apple-darwin builds are not supported in canary yet.
Users of the new Apple computers with M1 processors will be able to run Deno natively. We refer to this target by the LLVM target triple aarch64-apple-darwin in our release zip files.
This target is still considered experimental because it has been built using Rust nightly (we normally use Rust stable), and because we do not yet have automated CI processes to build and test this target. That said, Deno on M1 fully passes the test suite, so we're relatively confident it will be a smooth experience.
Binaries of rusty_v8 v0.14.0 targeting M1 are also provided with the same caveats.
std/bytes
As a part of the efforts of the Standard Library Working Group, the std/bytes module has seen a major overhaul. This is a first step towards stabilizing the Deno standard library.
Most of the APIs were renamed to better align with the APIs available on Array:
copyBytes->
copy
equal->
equals
findIndex->
indexOf
findLastIndex->
lastIndexOf
hasPrefix->
startsWith
hasSuffix->
endsWith
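For example, code written against the old names would now look roughly like this (a sketch; the std version pinned in the import URL is illustrative):

import { copy, equals, startsWith } from "https://deno.land/std@0.79.0/bytes/mod.ts";

const src = new TextEncoder().encode("hi");
const dst = new Uint8Array(2);

copy(src, dst);                                   // was copyBytes()
console.log(equals(src, dst));                    // was equal()     -> true
console.log(startsWith(src, Uint8Array.of(104))); // was hasPrefix() -> true ("h")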
The full release notes, including bug fixes, can be found at. | https://deno.land/posts/v1.6 | CC-MAIN-2021-04 | en | refinedweb |
Image Deduplicator (imagededup)
imagededup is a python package that simplifies the task of finding exact and near duplicates in an image collection.
This package provides functionality to make use of hashing algorithms that are particularly good at finding exact
duplicates as well as convolutional neural networks which are also adept at finding near duplicates. An evaluation
framework is also provided to judge the quality of deduplication for a given dataset.
Following details the functionality provided by the package:
- Finding duplicates in a directory using one of the following algorithms:
- Convolutional Neural Network (CNN)
- Perceptual hashing (PHash)
- Difference hashing (DHash)
- Wavelet hashing (WHash)
- Average hashing (AHash)
- Generation of encodings for images using one of the above stated algorithms.
- Framework to evaluate effectiveness of deduplication given a ground truth mapping.
- Plotting duplicates found for a given image file.
Detailed documentation for the package can be found at:
imagededup is compatible with Python 3.6 and is distributed under the Apache 2.0 license.
Installation
There are two ways to install imagededup:
- Install imagededup from PyPI (recommended):
pip install imagededup
⚠️ Note: imagededup comes with TensorFlow CPU-only support by default. If you have GPUs, you should rather
install the TensorFlow version with GPU support especially when you use CNN to find duplicates. It's way faster. See the
TensorFlow guide for more details on how to install it.
- Install imagededup from the GitHub source:
git clone
cd imagededup
python setup.py install
Quick start
In order to find duplicates in an image directory using perceptual hashing, the following workflow can be used:

- Import the perceptual hashing method, generate encodings, find duplicates, and then plot the duplicates found for a given file (eg: 'ukbench00120.jpg') using the duplicates dictionary:

from imagededup.utils import plot_duplicates
plot_duplicates(image_dir='path/to/image/directory', duplicate_map=duplicates, filename='ukbench00120.jpg')
The output looks as below:
The complete code for the workflow is:

# plot duplicates obtained for a given file using the duplicates dictionary
from imagededup.utils import plot_duplicates
plot_duplicates(image_dir='path/to/image/directory', duplicate_map=duplicates, filename='ukbench00120.jpg')
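For context, the duplicates dictionary used above is produced by the earlier steps of the workflow. A minimal sketch of those steps (method names follow the imagededup documentation; the directory path is a placeholder):

from imagededup.methods import PHash

phasher = PHash()

# Generate encodings for all images in an image directory
encodings = phasher.encode_images(image_dir='path/to/image/directory')

# Find duplicates using the generated encodings
duplicates = phasher.find_duplicates(encoding_map=encodings)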
For more examples, refer to this part of the repository.
For more detailed usage of the package functionality, refer:
Citation
Please cite Imagededup in your publications if this is useful for your research. Here is an example BibTeX entry:
@misc{idealods2019imagededup,
  title={Imagededup},
  author={Tanuj Jain and Christopher Lennan and Zubin John and Dat Tran},
  year={2019},
  howpublished={\url{}},
} | https://pythonawesome.com/python-package-that-simplifies-the-task-of-finding-exact-and-near-duplicates/ | CC-MAIN-2021-04 | en | refinedweb
BSONSerialization
BSON Serialization in native Swift 4
Installation & Compatibility
The recommended (and only tested) way to install and use BSONSerialization is via SwiftPM, using at least Swift 4.
The content of your Package.swift should be something resembling:
import PackageDescription

let package = Package(
    name: "toto",
    dependencies: [.package(url: "", from: "2.0.0")]
)
Usage
BSONSerialization has the same basic interface as JSONSerialization.
Example of use:
let myFirstBSONDoc = ["key": "value"]
let serializedBSONDoc = try BSONSerialization.data(withBSONObject: myFirstBSONDoc, options: [])
let unserializedBSONDoc = try BSONSerialization.bsonObject(with: serializedBSONDoc, options: [])
try areBSONDocEqual(myFirstBSONDoc, unserializedBSONDoc) /* Returns true */
Serializing/deserializing to/from a stream is also supported. (Note: Due to the specifications of the BSON format, serializing to a Data object is faster than serializing to a stream.)
Finally, a method lets you know if a given dictionary can be serialized as a BSON document.
Alternatives
First, a word about the philosophy of this project. Most BSON frameworks create a whole new type to represent the BSON document. This is useful to add new elements to the documents and have the document serialization ready at all times.
This project takes a different approach. It is merely a converter from the
serialized BSON data to actual Foundation or native objects, or the reverse.
It is actually just like Foundation’s
JSONSerialization which is a converter
from Foundation objects to the serialized JSON data.
If you prefer having an actual BSON object to which you can add elements instead of standard Foundation object which are serialized later, I would recommend this project:
Reference
I used the BSON specification version 1.1 from
All types in this specification, including deprecated ones are supported.
To Do
- ☐ Allow direct serialization of a Data object instead of having to use the MongoBinary type
- ☐ Allow serializing Int64 or Int32 to Int directly, depending on the platform
- ☐ Xcode target for an iOS Framework
I’ll work seriously on the project if it gains enough attention. Feel free to open issues, I’ll do my best to answer.
Pull requests are welcome 😉
License
MIT (see License.txt file) | https://swiftpack.co/package/Frizlab/BSONSerialization | CC-MAIN-2021-04 | en | refinedweb |
This is the third post in the Little Pitfalls series where I explore these small pitfalls; the previous Little Pitfall post can be found here.
This week we’re going to look at operator overloading. Yes, I bolded that because it is very important to note we overload operators, we don’t override them. Yet, many times I’ve seen or heard people talk about operator overriding by mistake.
So, is this just a semantics argument and I should just relax? No, not really, because the two terms imply entirely different things, and if you are expecting an override behavior when you are instead getting an overload behavior, you can fall headlong into this pitfall.
When you overload a method, you are taking a more “horizontal” approach on increasing the functionality of a type. We typically do this by providing a parallel definition of the method with a different set of parameters (note that you cannot define an overload based on differences in return type alone).
For example, let’s say we define the following class for managing fractions (yes, a real class like this would have many more members, but keeping it simple).
public class Fraction
{
    public int Numerator { get; set; }
    public int Denominator { get; set; }

    // set the fraction to a given value
    public void Set(int numerator, int denominator)
    {
        Numerator = numerator;
        Denominator = denominator;
    }
}
Now, perhaps we want to be able to set the value of the fraction in other ways as well. For example, maybe we want to set a fraction from a whole number (e.g. 7 = 7/1) or a mixed number (e.g. 3 1/2 = 7/2).
If we want to provide a series of alternative definitions for a member that the type (Fraction in our case) provides, we are “expanding” the options available by providing additional options. This is in effect increasing the functionality “horizontally” by giving us more members to choose from:
public class Fraction
{
    // ... the other stuff ...

    // initialize fraction with just a whole number. For example 7 = 7/1
    public void Set(int wholeNumber)
    {
        Numerator = wholeNumber;
        Denominator = 1;
    }

    // set the fraction from a mixed number. For example 3 1/2 = 7/2
    public void Set(int whole, int numerator, int denominator)
    {
        Numerator = whole * denominator + numerator;
        Denominator = denominator;
    }
}
Now instead of one Set() method exposed, the class has three Set() methods exposed, each with a different parameter signature. This is how overloading works, you are adding additional methods (or constructors) with different parameter signatures from the class (or a base class).
To that last point, you can overload across a hierarchy, but this is still overloading, not overriding. You are simply overloading the inherited member from the base class in your derived class.
public class Base
{
    public void DoSomething(int x) { }
}

public class Derived : Base
{
    // this does not replace Base.DoSomething(int x), it lives beside it in Derived
    public void DoSomething(int x, bool y) { }
}
So in the example above, Base has one DoSomething() method that takes an int x. And Derived has two DoSomething() methods available, the one from Base that takes the int x and the overload that takes int x, bool y.
The thing to note here is that overloads do not hide or override other method definitions in the class hierarchy, they simply are added beside them in the class in which they are defined.
This is important because given the type of the reference you are using to access the type, you may see different methods available. This is because non-virtual members are resolved at compile-time, not run-time.
Derived asDerived = new Derived();

// valid since Derived inherits from Base
Base asBase = asDerived;

// valid because Derived inherits Base's methods
asDerived.DoSomething(13);

// valid, this is the new overload for derived
asDerived.DoSomething(7, true);

// valid, this is the original method in base
asBase.DoSomething(7);

// Syntax Error: even though asBase refers to Derived, Derived's overloads
// are not available to a Base reference at compile time.
asBase.DoSomething(7, true);
Hiding is another Little Pitfall for another day, for now we’ll just concentrate on overloading and overriding…
So what, then, is overriding? Overriding is when you are "replacing" a base class method definition with a derived class method definition. This is a much more "vertical" approach because you're changing functionality going down the inheritance chain. Note that we're not completely "replacing" the original definition – the base class definition still exists and can be invoked from the overriding member, if desired. The main point is that it is no longer available to be called directly from outside the type, and as such has been "replaced" in favor of the new definition.
To override a member you must mark the definition in the base class as abstract (if no definition exists and concrete sub-classes must override it) or virtual (if a base definition exists and sub-classes may optionally override it).
public class Base
{
    // base method must be virtual (has a body) or abstract (has no body)
    public virtual void DoSomething(int x)
    {
        Console.WriteLine("Base: " + x);
    }
}

public class Derived : Base
{
    // derived method must use override keyword (or it hides, which is another pitfall for another day)
    public override void DoSomething(int x)
    {
        Console.WriteLine("Derived: " + x);
    }
}
So in this example, Derived.DoSomething() replaces (in a sense) Base.DoSomething(). Thus no matter if you call a DoSomething() method from a Base reference or a Derived reference, you will still get the overridden behavior of the Derived class. This magic happens because virtual methods are resolved at run-time, not compile-time. The CLR will examine the actual type behind the reference (not the type of the reference itself) and get the nearest (up the class hierarchy) override available for that member of the type.
Derived asDerived = new Derived();
Base asBase = asDerived;

// output is Derived: 13
asDerived.DoSomething(13);

// output is also Derived: 13 since overridden methods are checked at run-time
// for the actual type of the object they are invoked upon.
asBase.DoSomething(13);
As a side note, constructors are not truly overridden in a pure sense. That is, whenever you create a new derived class it does not actually inherit the constructors from the base class, per se. If you want to have the same constructor signatures that the base class has in your derived class you must re-implement them in the derived class and call through to the base class constructors (otherwise assumes base class default constructor).
Let’s now look at operators, but first let’s summarize the key differences between overload and override behaviors:
Now here’s where it gets tricky. People think when they add an operator definition to their class, they are overriding that operator’s behavior, but this is incorrect. Let’s look at an example to allow us to add to Fraction instances using the + operator:
public class Fraction
{
    // ... Other stuff ...

    // all operator overloads must be public and static
    public static Fraction operator +(Fraction first, Fraction second)
    {
        if (first == null) throw new ArgumentNullException("first");
        if (second == null) throw new ArgumentNullException("second");

        // doing very simple fraction addition, not bothering with reducing or LCD
        return new Fraction
        {
            Numerator = first.Numerator * second.Denominator +
                second.Numerator * first.Denominator,
            Denominator = first.Denominator * second.Denominator
        };
    }
}
Notice a few things here. The operator is static (it must be public and static or C# gives you a compiler error) and is not marked as an override (it can't be, or C# will give you an error again), nor is there a definition of operator + in the base class (object). These three points alone should signify to the reader that defining an operator is an overload and not an override.
This throws people off because they think of the + operator as being overridden for the class and that they are somehow overriding a base behavior of the operator from a base type. Strictly speaking this is incorrect. The operators do not belong to a base class at all (not even == and != if you look inside the definition of object).
I find it more meaningful to think of this as the operators being independent of all types, and that we are overloading the definition of the operator (as if it was a global method in the world of C++) to support our type.
For example, if all you see is the code below, you have no context to know where the correct operator + is defined:
// what operator + are we invoking? Depends on the args!
z = x + y;
There is nothing in this code directly that tells us what operator + to invoke. The only thing we know is that it is an operator + between x and y, so whatever types x and y are dictate what overload will get called.
So why is this really a problem? You may still look at this and say that’s all semantics and who cares it works, right? Well, the main place where this comes to be a problem is when you overload an operator, but expect it to override a base behavior.
Where do we typically see this? Anytime where we are overloading an operator that already has meaning – either because it has an inherent definition for all types, or was otherwise already defined for a base type (like == and != for object).
So let’s illustrate. Let’s say we want to be really full featured in our Fraction class and implement all the comparison operators. Well, the first two we’d probably implement would be == and != (these two must always be overloaded in pairs):
public class Fraction
{
    // ... other stuff ...

    public static bool operator ==(Fraction first, Fraction second)
    {
        // if both non-null, check numerator and denominator (ignore reductions)
        // use ReferenceEquals() to get underlying object == so don't recursively call self
        if (!ReferenceEquals(first, null) && !ReferenceEquals(second, null))
        {
            return first.Numerator == second.Numerator &&
                first.Denominator == second.Denominator;
        }

        // otherwise, equal if both null, not equal otherwise
        return ReferenceEquals(first, null) && ReferenceEquals(second, null);
    }

    public static bool operator !=(Fraction first, Fraction second)
    {
        // cross call to == and negate
        return !(first == second);
    }
}
So that overload seems logical, right? We can now do this:
public static void Main()
{
    Fraction oneHalf = new Fraction { Numerator = 1, Denominator = 2 };
    Fraction anotherOneHalf = new Fraction { Numerator = 1, Denominator = 2 };

    // invoke the equals operator from Fraction, gives us True as expected
    Console.WriteLine(oneHalf == anotherOneHalf);
}
Which gives us a true result, as we'd expect!
So what if I made the simplest of changes in this example, and instead of having Fraction references I have object references?
public static class Program
{
    public static void Main()
    {
        object oneHalf = new Fraction { Numerator = 1, Denominator = 2 };
        object anotherOneHalf = new Fraction { Numerator = 1, Denominator = 2 };

        // What does this do? Uh oh... This yields False
        Console.WriteLine(oneHalf == anotherOneHalf);
    }
}
Perfectly syntactically legal, because == works between two references of type object, and Fraction inherits from object (as all types do), so we can assign a Fraction to an object reference. But now we get false instead! Why?
Because, the == operator is an overload, not an override. We have not replaced the behavior of == dynamically at run-time, but are binding to the definition of == at compile-time. And the two types it is appearing between are object which means we’d get a reference equality check. And since oneHalf and anotherOneHalf are obviously referring to separate (though “identical”) objects, the comparison yields false!
This would also fail if one of the types was object and one was Fraction, because when it looks for a working overload it looks for the nearest common ancestor for both classes. For Fraction and object, that’s object itself.
This is why it’s operator overloading and not overriding. With overriding, we’d expect it to dynamically look up the definition of == at run-time, but because it’s an overload, it doesn’t, it’s bound at compile-time as we mentioned before.
To protect yourself against this pitfall, make sure you aren’t treating an operator overload as an override, especially when dealing with == and != since these are defined already between all types. In addition, when comparing objects consider using Equals() instead of == and != if you think the definition of equality may be overridden in a subclass, that way you will get the correct result since Equals() can be overridden.
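For completeness, an Equals() override on Fraction might look something like this (a sketch, not from the original article; it mirrors the same notion of equality used by the == overload):

public class Fraction
{
    public int Numerator { get; set; }
    public int Denominator { get; set; }

    // ... operators and other members from above ...

    public override bool Equals(object obj)
    {
        var other = obj as Fraction;
        if (ReferenceEquals(other, null)) return false;

        // same definition of equality as the == overload (ignoring reductions)
        return Numerator == other.Numerator &&
               Denominator == other.Denominator;
    }

    public override int GetHashCode()
    {
        // keep GetHashCode() consistent with Equals()
        return Numerator.GetHashCode() ^ Denominator.GetHashCode();
    }
}

Because Equals() is a virtual instance method, it is resolved at run-time against the actual object, so the object-typed comparison above would yield true if it called Equals() instead of ==.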
Operators, like static methods, cannot be overridden directly. But the work-around for both of these is the same. Have the overloaded operator call a virtual instance method that does the operation for you and can be overridden in a sub-class. This gives you the flexibility of both using the operator and redefining the behavior in a sub-class if desired in a polymorphic way:
public class Fraction
{
    // ... other stuff ...

    public virtual Fraction Add(Fraction second)
    {
        if (second == null) throw new ArgumentNullException("second");

        // doing very simple fraction addition, not bothering with reducing or LCD
        return new Fraction
        {
            Numerator = Numerator * second.Denominator +
                second.Numerator * Denominator,
            Denominator = Denominator * second.Denominator
        };
    }

    // REWORKED to use the overridable Add() method...
    public static Fraction operator +(Fraction first, Fraction second)
    {
        if (first == null) throw new ArgumentNullException("first");
        if (second == null) throw new ArgumentNullException("second");

        return first.Add(second);
    }
}
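To see why this helps, a derived class can now change what addition means without touching the operator at all (a hypothetical example, not from the original article):

// hypothetical subclass that overrides the virtual Add() method
public class ReducingFraction : Fraction
{
    public override Fraction Add(Fraction second)
    {
        Fraction result = base.Add(second);
        // (hypothetically) reduce the result to lowest terms here...
        return result;
    }
}

// elsewhere:
Fraction first = new ReducingFraction { Numerator = 1, Denominator = 2 };
Fraction second = new Fraction { Numerator = 1, Denominator = 4 };

// operator + still binds to Fraction's overload at compile-time, but the
// first.Add(second) call inside it dispatches to ReducingFraction.Add at run-time.
Fraction sum = first + second;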
So, we’ve seen that overloading and overriding are sometimes confused, especially when it comes to operator definitions. The key thing to remember is that operators are overloaded not overridden, which means that if you use operators on base class references, you will get the base class definition (if one exists) and not any derived class operator overload definition that may exist.
This can be particularly painful if you are calling the == or != operator between two object references and are expecting it to call an override in the actual objects being held. Stay tuned for more Little Pitfalls to follow…
posted on Thursday, July 7, 2011 | http://blackrabbitcoder.net/archive/2011/07/07/c.net-little-pitfalls-operators-are-overloaded-not-overridden.aspx | CC-MAIN-2018-17 | en | refinedweb
How to make <restrict> work without an action?Bob Thule Apr 3, 2009 8:22 PM
The restrict element in pages.xml is not causing a security exception when users navigate to a page (i.e. click a link to it). The exception is only being thrown when a user calls an action. To work around the problem, I added a default no-op page action. This works, but it doesn't seem like it should be necessary.
Is this expected behavior, or does Seam have a bug, or is there some setting that I have wrong?
My work-around code is below. I don't think I should have to have the action element in the pages.xml.
pages.xml
<page view- <restrict>#{s:hasRole('Admin')}</restrict> <action execute="#{app.noOp}" on- </page>
App.java
Name("app") public class App{ public void noOp() { } }
1. Re: How to make <restrict> work without an action?Arbi Sookazian Apr 4, 2009 1:00 AM (in response to Bob Thule)
I am using the following successfully in my app:
<page view- <restrict>#{s:hasRole('manager') || s:hasRole('admin')}</restrict> </page>
So that means you don't need to specify an action for Seam to execute...
Have you tried this? Page-level security in Seam using s:hasRole was one of the easier and nicer things I liked about this fwk when I started using Seam 1.2.x...
You need to make sure that the Seam identity instance is populated with the current session's user's role(s) during authentication/authorization process...
ex:
if (theUser.isMemberOf(sid)) {
    log.debug("(TokenGroup) Found match for: " + secRole.getSecurityRoleName());
    identity.addRole(secRole.getSecurityRoleName());
    continue;
}
2. Re: How to make <restrict> work without an action?Bob Thule Apr 8, 2009 10:53 PM (in response to Bob Thule)Hi Arbi,
If you login to your app with a user who does not have a 'manager' or 'admin' role and then try to go directly to, do you get a security exception?
In my app, if I were to do the same thing, it would render the page just fine! The only time it fails is if I try to call an action from that page. For example, if I click a button with an action "#{whateverAction.doThis}". Stranger yet, when it fails, it still doesn't cause an exception-- it simply does not run the action before re-rendering the page. If I login with a user who passes the restriction, it renders the page (as expected) and the action runs and the page re-renders when the button is clicked (as expected). So the settings in the restrict element are being used, just not fully correctly.
If I add in the no-op page action, the exception occurs as expected-- but I don't think I am supposed to have to add that no-op page action, so I am wondering what is going on!
I am using Seam 2.1.1.GA with Facelets 1.1.15.B1 and JSF 1.2_12. Maybe it has something to do with the newer Facelets or JSF. I can't remember why our team had to move to these newer libs, but it was because of some issues we were having.
3. Re: How to make <restrict> work without an action?Arbi Sookazian Apr 9, 2009 5:41 AM (in response to Bob Thule)
We are using NTLM (silent) authentication with IE browser via JCIFS library in our Authenticator component.
So basically anybody already logged into our network will be authenticated for any of our Seam apps.
Then the authorization routine will not add any roles for that user to the Seam identity instance if they are not a member of any security groups in Active Directory (although the roles/groups can be stored in a DB with Seam 2.1 Identity Management API, specifically JpaIdentityStore).
So the answer to your question (without trying this myself :) is that the user will be forwarded to the error.xhtml with verbiage like
you do not have permission to view this page. You need to add something like the following to your pages.xml:
<exception class="org.jboss.seam.security.AuthorizationException"> <redirect view- <message>You don't have permission to do this</message> </redirect> </exception>
4. Re: How to make <restrict> work without an action?Bob Thule Apr 14, 2009 9:21 PM (in response to Bob Thule)
Thanks Arbi, it's fixed based on your exception section. I knew I could add that section, but I never bothered because I wasn't seeing any exceptions in the log and I wasn't being directed to the debug page.
But, apparently, Seam must be swallowing the AuthorizationExceptions if pages.xml is not set up to catch and redirect on them specifically. | https://developer.jboss.org/thread/187135 | CC-MAIN-2018-17 | en | refinedweb
Why TDD Isn't Crap
After my recent vitriol about unit testing, a couple of people sent me Why TDD is Crap as a thorough debunking of TDD and unit testing. As someone very interested in software correctness, I ended up writing a debunking of his debunking. Transcriptions will be in quotes, my responses below. Some important notes:
- From what can tell, neither of us is using TDD in the “purest” possible sense of “write the bare minimum that makes the smallest unit test pass”. I’m definitely thinking about TDD as it’s commonly practiced and I believe Smith is, too.
- I really half-assed these transcriptions.
- While Eric Smith is attacking TDD and I’m defending it, keep in mind that he does TDD Consulting and I give talks on TLA+. I don’t think this changes the validity of what either of us say, but it does mean that both of us care very deeply about writing good software, and this isn’t an argument so much as both of us trying to improve engineering. Civility is lacking in online debates and that’s a problem with our community.1
[Mac and] Linux doesn’t use TDD, Linux doesn’t have unit tests either. So I guess you can all switch to Windows, because Windows has TDD built into Visual Studio. … TDD means being a professional as long as you use an unprofessional operating system and an unprofessional programming language.
There’s two ways to read this, and I’m guessing Smith means a little of each. The first is that “Some stellar programmers don’t use TDD, so you don’t have to use TDD to be a professional.” One of the toxic bits of programming culture is to mock anybody who doesn’t believe everything you do. Robert Martin does this with TDD, while Amanda Laucher and Paul Snively do this with static typing. On the whole, we’d be a lot better off if we stopped this nonsense.
The less charitable way to read this is “These people didn’t need TDD, so you don’t either.” And this is a common argument many programmers make. But examples aren’t data. Just because Linus Torvalds didn’t need TDD doesn’t mean that you and I, who aren’t anywhere as good, don’t need it either. I mean, I could point at the J codebase and say “you don’t need whitespace.” Fact is, we’re all mediocre at best, and we should be choosing our techniques on what we need, not what programming legends need.
Have you seen any studies comparing unit testing to other methods? There aren’t any. … We have no evidence that TDD produces fewer bugs, we just have people that think it does.
A lack of hard data on TDD is more a comment on our industry than it is on TDD. One frustrating thing about software engineering is that it’s really, really hard to study. For example, we still don’t know whether static typing reduces bugs!2 The best we have is a collection of pilots, case studies, and controlled experiments on students. I can’t give you anything that conclusively affirms or debunks the value of TDD, just like I can’t do that for anything. The best I have is intriguing papers.
So does TDD work? The best study I’ve seen on it is the Dr. Nagappan case study, where he compared TDD and non-TDD projects in Microsoft and IBM.3 Each pair of teams worked on different features in the same large project to avoid comparing “computer games and nuclear reactors”. He found the TDD projects had roughly a 60% decrease in defect density and took about 25% longer. That cautiously suggests that yes, TDD might be a useful correctness technique.
On the other hand, TDDers might not need to stick to a strict “red-green-refactor” cycle: Fucci et. al found that there was no difference between writing the tests before a chunk of code and writing them after. On the other-other hand, George and Williams found that when people don’t have to write tests before the code, they often forget to write them at all…
Look, studying software is hard and we’re not very good at it. But if you put a gun to my head and asked if TDD worked, I’d probably say “yes”. I think most software engineering researchers are on the same page there.
Most bugs are in the interaction level, and we know this.
Agreed. Nonetheless, the studies (tentatively) show that TDD helps. Most bugs are at the interaction level, but having a shaky foundation certaintly doesn’t make things better.
People who don’t write tests have fewer bugs because they have less code to debug. We say code is liability, but we continue to create liability with code nobody actually needed.
This is a real problem with testing and general software correctness. One systems saying I’ve heard is “When your system gets large enough, most critical failures are caused by your failsafes”. However, that’s not an argument against failsafes; it’s an argument to be just as careful with our failsafes as we are with our production code. Everything you build is, by definition, part of the problem.
In the specific case of unit testing, we can “ask less” of tests than we do production code. It’s a common guideline that your tests should be as simple as possible, which reduces the defect surface for bugs. Additionally, the failure mode of unit tests is less dangerous than production code: either the test is a false positive, which guides us to fix it, or it’s a false negative, which reduces our coverage but doesn’t exactly make things worse. Of course it can give us false confidence, but every verification techniques does that. That’s why we need defense-in-depth.
Failsafes can cause critical failures in large systems, but the failsafes are the reason your system can afford to grow large in the first place.
How are you going to be faster when you’re writing all these unit tests? When you go home, you write a spike because we know it’s faster. But that’s fine, what about production code? TDD evangelists ignore maintenance! Ever spend a day or a week trying to debug CI?
It’s a common argument that “TDD takes less time to write”, which doesn’t seem to be true. The Nagappan study suggests it reduces defects, but it does take longer. TDD takes less time overall”, though, is slightly different, because it includes post-release maintenance. The study explicitly does not factor this time in. Using a conservative estimate that the amount of time it takes to fix a bug is equal to the amount of time it took to code the bug, a 50% reduction for 25% longer time probably saves net time in the post-release maintenance. I haven’t been able to find any studies that give solid numbers.4
As for the “you have to debug CI” argument, that’s a common discussion mistake we make: comparing “something” to “nothing” when we really should be comparing it to “something else”. TDD has maintenance overhead, but so does every other correctness technique! You’ll need a server if you want to compile a statically-typed language. You’ll need a few servers if you want to run a staging environment. You’ll need a bunch of servers and a hug if you want to validate behavior across microservices. And you’ll need an Aphyr if you want to test a distributed system. If you’re doing any of that, adding unit tests isn’t going to be much of a marginal cost.
Testing is about design! Good thing we put ‘testing’ in a title! Test driven development makes your designs better. Why? Because they’re more testable. That’s a circle. We don’t know what a good design is, we have some principles, principles we made up just like we made up TDD.
I’m honestly a little “meh” on the “TDD is about design.” Beck and Cunningham intended it that way, but in practice it’s better as a testing and scaffolding technique than a design technique. TDD does help mildly by forcing you to constantly be calling your functions, so you realize if they’re awkward sooner. But as much as we’d like to turn design into a coding project, design is much more fundamental than whatever makes up the implementation. Testing does not substitute for thinking.
I’ve read a few articles that suggest that testing does, in fact, lead to better designs, but for the life of me I can’t dig them up. If you have any, feel free to send them my way.
Testing is more fun! … You know what’s not fun? Debugging tests. Paying for tons of machines so we can run tests.
This is a case of comparing something to nothing. Most existing correctness techniques are miserable to use.5 If you don't hate it, you haven't pushed it hard enough. My friends have stories of struggling to fit a program they knew was correct into the language's type system. I once spent three hours trying to debug a broken TLA+ spec, eventually finding that I mixed up => and =>. Debugging tests ain't got shit on that.
At this point Smith talks about alternative techniques to TDD and unit testing to ensure software correctness. All of these approaches are very good and catch bugs TDD will miss, but all also have their tradeoffs. I’d like to go into them in detail.
Design-by-contract has done some empirical studies and does really well for itself. With design-by-contract, you can prove it works.
Smith doesn’t explain what design-by-contract is, so here’s an example from the Babel Contracts javascript library:
function withdraw (fromAccount, amount) {
  pre: {
    typeof amount === 'number';
    amount > 0;
    fromAccount.balance - amount > -fromAccount.overdraftLimit;
  }

  post: {
    fromAccount.balance - amount > -fromAccount.overdraftLimit;
  }

  fromAccount.balance -= amount;
}
Whenever you call withdraw, it checks every statement in the precondition. If any are false, the program errors. The same thing happens in the postcondition, which is called when the function ends. This makes it much easier to find bugs in development and testing because errors don't "propagate" from where they originate. Additionally, since everything has contracts and calls other things with contracts, you get assurance on the integration level. EiffelStudio (the Eiffel IDE) can even generate tests that check your contracts.
There are two main problems with contracts, though. First, it provides safety that your program won’t do bad things, but it doesn’t confirm that it actually does what you want. It’s telling that EiffelStudio, in addition to providing world-class contract support, also comes with a unit testing library. You combine both unit tests and contracts for better confidence.
The other problem is that contracts require first-class language support, while you can do simple unit testing pretty much anywhere. Babel Contracts got lucky with Javascript: there’s an unused feature called “labels” that they were easily able to hack into pre/postconditions. But even their system is crude compared to Eiffel. A toy example:
class ACCOUNT

feature
  balance: INTEGER

  -- Bunch of methods

invariant
  no_overdrafts: balance >= 0

end
no_overdrafts is a class invariant. It's checked whenever any method on an account is called or any kind of mutation happens, and EiffelStudio can compare it to all internal methods and all users of the object to generate extremely intricate tests.
no_overdrafts can also be inherited, composed, or overridden like any other class property.6 In Eiffel, you can do incredible things with contracts. In Javascript, you just have pre/postconditions. In Ruby, you have a glorified type checker. But all three languages have solid unit testing frameworks.
Haskell has property based testing. It throws tons of tests at your code, well more than you’ll ever think of.
PBT (also called generative testing) is where you give a generator some rules and ask it to make tests for you. While the first PBT library was Haskell Quickcheck, arguably the most sophisticated is the Hypothesis Python library. Here’s what a property-based test in Hypothesis looks like:
from hypothesis import given
from hypothesis.strategies import text

from lexer import lex  # str -> List[Lexeme]

@given(text("+-*/()123456789", max_size=10))
def test_lexes_properly(maths):
    lexeme_strings = map(str, lex(maths))
    assert "".join(lexeme_strings) == maths
Hypothesis grabs a random string, such as 10**2+3. lex takes that and turns it into a list of lexeme objects, like [NUM(10), POWER, NUM(2), PLUS, NUM(3)]. We assert that it stringifies back into 10**2+3. Hypothesis will keep throwing random and pathological strings into the test until it either finds an error or is satisfied that my lex function passes the test. This single property test replaced the original ten unit tests and had better coverage, too: it found a lexing error I hadn't tested for.
PBT vs TDD, though, is a false dichotomy. I don’t see TDD as meaning only unit tests. Sometimes, before writing code, I’ll write a few unit tests. Other times I’ll write a few property tests. The main benefit property tests have is they test a wider space. The main drawback they have is that they’re not very specific. With unit tests, you know exactly what input you’re giving in and exactly what output you want out. With property tests, you only know what kinds of inputs are going in and can’t provide the exact output you want. Instead, you have to be clever and look for patterns.
test_lexes_properly is an example of an encode/decode invariant, where you test that some transformation is perfectly reversible. Another technique is using an oracle, where you find some trick to start out with the answer. Compare these to the simplicity of writing assert foo(bar) == baz as a unit test.
Unit tests and PBTs complement each other. You use the former to check a couple of cases work right, and then use PBTs to draw conclusions about the wider input space. There is no conflict with TDD here.8
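As a small (hypothetical) illustration of that pairing, a plain unit test for the same lexer might pin down one exact case while the property test above sweeps the wider space (this assumes the NUM and PLUS lexeme constructors from the example output above are importable from the same lexer module):

from lexer import lex, NUM, PLUS

def test_lex_simple_sum():
    assert lex("2+3") == [NUM(2), PLUS, NUM(3)]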
Hammock-driven development. We reject big design upfront … it’s important. It needs to be done sometimes.
I disagree very strongly with Eric Smith here. Upfront design does not "need to be done sometimes". Agile was a response to how miserable BDUF is… and went too far the other way. We should not be thinking of careful design and planning as a niche thing. Design is fundamental and necessary to software correctness.
This does not mean going back to BDUF and 1,000 page requirements documents. But it’s vastly harder to fix a bug in development than it is to fix it in design. Before writing code, I try to draw a directed graph in graphviz or a sequence diagram in mermaid. The amount of errors I catch in the diagrams is a little embarrassing. If your only takeaway from this essay is “learn mermaid”, I’ve done my job.
If you want to go further than that, I’d recommend exploring formal specification, in particular the two “flyweight methods” languages. The first, Alloy, is used to verify data structures. I’ve not used it in production but have heard good things from people I trust. I have used TLA+ to find concurrency bugs in my designs, and it’s absolutely incredible. I genuinely believe it could revolutionize software, and have written a comprehensive beginner’s guide to help that process along.
But does this replace TDD? No. Good design is critical, but then you need to code and test your design. And TDD is quite often a good technique to do that.9
Every organization has a QA department. If TDD was a silver bullet, we wouldn’t need it.
I agree. We also have a bad habit of seeing software test engineers as being somehow "lesser" than the product engineers. Rigorous testing is as much a specialist skill, with specialist programming knowledge, as any other part of software. In fact, you shouldn't have the QA department write unit tests any more than you should have the product department do pentests. TDD is a technique for developers. Testers should be busy writing more complex, more terrifying tests.
Here are my main takeaways:
- We don’t actually know that much about what good software engineering looks like.
- TDD is likely a good correctness technique and is probably useful in many projects.
- There are other correctness techniques that have different strengths and weaknesses relative to TDD. You should probably be using a mix, with the optimum ratio being dependent on the project, external constraints, and size of your correctness budget.
- Regardless of how you approach correctness, it’s definitely worthwhile to do some design in advance.
- We shouldn’t call people unprofessional just because they disagree with us.
- QA don’t get enough respect.
Thanks to Richard Whaling for their feedback.
- I’m not blameless here either. [return]
- I’d be remiss without mentioning a recent paper that looks more rigorous than its peers. There’s a couple of threats to validity I want to look into, though. [return]
- Surprisingly, Microsoft is probably the biggest investor in software engineering research in the world. I wouldn’t be surprised if they spent more on it than the rest of the Big Five combined. In terms of formal verification research, the only group that’s comparable is the country of France. [return]
- Yes, I know about the “IBM Systems Science Institute” graph. It’s probably not real. [return]
- I think this is 20% “testing is intrinsically hard” problem and 80% “Nobody invests in software correctness UX”. [return]
- I wonder if the reason OO languages are considered so buggy is because none of the popular ones went all-in on class contracts. It seems like a killer feature of classes that nobody’s heard of. [return]
- Specifically, if the string ended with a multicharacter token followed by a multidigit number, it would leave off the last digit. For example, 2**10 would lex as [NUM(2), POWER, NUM(1)]. [return]
- PBT also synergizes really well with contracts. [return]
- There are some formal verification languages, like ACL2 and Coq, where you can formally prove your code matches the design. In practice, though, they are much too difficult and expensive to use for 99% of projects. But I’ve heard Idris is showing promise. [return] | https://hillelwayne.com/post/why-tdd-isnt-crap/ | CC-MAIN-2018-17 | en | refinedweb |
jndi lookup fails in user-threadGuenther Bodlak Oct 4, 2011 9:33 AM
Hi,
I have a ServletContextListener which creates and starts a new Thread when the servlet context gets initialized.
In that Thread I'm trying to lookup an ejb. The jndi-lookup fails with a NameNotFoundException.
This code works in Glassfish 3.1 but not in jboss 7.0.2?
The code is attached.
Is there a problem with my code, or is this a bug in JBoss 7.0.2?
Thanks for your help
Günther
- TestEjb.java.zip 332 bytes
- TestThread.java.zip 628 bytes
- StartupListener.java.zip 614 bytes
1. Re: jndi lookup fails in user-threadRiccardo Pasquini Oct 4, 2011 9:05 AM (in response to Guenther Bodlak)1 of 1 people found this helpful
Quite strange that it works in glassfish, as far as i know, the jndi name is "modular" (same module...but war is a different module or do you deploy the ejb in the war?)... you should use "application" if they (jar and war) are packaged in the same ear or "global" if not...
startup logs shows the jndi names available for that ejb...
anyway, i would ilke to suggest you the timer service instead of running a thread in the context listener, in the listener lookup for the timed object and use the jee service... let the application server to be the owner of threading
bye
2. Re: jndi lookup fails in user-threadGuenther Bodlak Oct 4, 2011 9:14 AM (in response to Riccardo Pasquini)
Hi Riccardo,
all 3 classes are deployed in a war. So the JNDI-Name should be o.k.
In the application I am working on we need to have the control over some threads - it is not possible to use the timer service ...
Günther
3. Re: jndi lookup fails in user-thread - Riccardo Pasquini, Oct 4, 2011 9:21 AM (in response to Guenther Bodlak)
can you try the lookup in a managed bean? just for test purposes...
4. Re: jndi lookup fails in user-thread - Guenther Bodlak, Oct 4, 2011 9:43 AM (in response to Riccardo Pasquini)
Hi,
I updated the code of StartupListener. It now calls StartupListener.doLookupTest() where I perform the same JNDI-lookup as in TestThread.callTestEjb().
It works in StartupListener, but it doesn't work in TestThread.
Here are parts of the server.log:
the jndi bindings:
15:28:21,342 INFO [org.jboss.as.ejb3.deployment.processors.EjbJndiBindingsDeploymentUnitProcessor] (MSC service thread 1-3) JNDI bindings for session bean named TestEjb in deployment unit deployment "webtest1.war" are as follows:
java:global/webtest1/TestEjb!gbo.ejb.TestEjb
java:app/webtest1/TestEjb!gbo.ejb.TestEjb
java:module/TestEjb!gbo.ejb.TestEjb
java:global/webtest1/TestEjb
java:app/webtest1/TestEjb
java:module/TestEjb
...
the lookup performed in StartupListener which works:
15:28:25,420 INFO [gbo.servlet.StartupListener] (MSC service thread 1-1) before lookup of TestEjb
15:28:25,435 INFO [gbo.servlet.StartupListener] (MSC service thread 1-1) after lookup of TestEjb
15:28:25,435 INFO [gbo.servlet.TestThread] (Thread-20) TestThread started
15:28:25,435 INFO [gbo.servlet.TestThread] (Thread-20) before lookup of testEjb
the lookup in the new Thread which doesn't work:
15:28:25,435 WARNUNG [gbo.servlet.TestThread] (Thread-20) naming-Exception: javax.naming.NameNotFoundException: java:module/TestEjb
at org.jboss.as.naming.InitialContext.lookup(InitialContext.java:55)
at org.jboss.as.naming.NamingContext.lookup(NamingContext.java:209)
at javax.naming.InitialContext.lookup(InitialContext.java:392) [:1.6.0_27]
at gbo.servlet.TestThread.callTestEjb(TestThread.java:37) [classes:]
at gbo.servlet.TestThread.run(TestThread.java:24) [classes:]
Thanks for your help
Günther
5. Re: jndi lookup fails in user-thread - Riccardo Pasquini, Oct 4, 2011 11:05 AM (in response to Guenther Bodlak)
Good. You could pass the InitialContext instance to a concrete implementation of the Runnable interface... build the Thread from that Runnable and... ?
6. Re: jndi lookup fails in user-thread - Guenther Bodlak, Oct 4, 2011 11:40 AM (in response to Riccardo Pasquini)
Hi,
passing the InitialContext instance into the thread does not work.
It ends up in a NameNotFoundException
Günther
7. Re: jndi lookup fails in user-thread - Riccardo Pasquini, Oct 4, 2011 11:41 AM (in response to Guenther Bodlak)
i surrender, i'm sorry
8. Re: jndi lookup fails in user-thread - jaikiran pai, Oct 6, 2011 7:56 AM (in response to Guenther Bodlak)
Creating threads from within the servlet or any EE component isn't recommended by the spec. The JNDI lookup of "app", "module" and "comp" won't work in those threads. See this for details
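For illustration only (this is not the attached TestThread source, and the business method name here is made up), a lookup that works from an unmanaged thread uses the portable global name:

import javax.naming.InitialContext;
import javax.naming.NamingException;

public class TestThread extends Thread {
    @Override
    public void run() {
        try {
            InitialContext ctx = new InitialContext();
            // java:module, java:app and java:comp are tied to the component's naming
            // context, so from a plain user thread only the java:global name is reliable
            TestEjb testEjb = (TestEjb) ctx.lookup("java:global/webtest1/TestEjb");
            testEjb.sayHello();   // hypothetical business method
        } catch (NamingException e) {
            e.printStackTrace();
        }
    }
}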
9. Re: jndi lookup fails in user-thread - Guenther Bodlak, Oct 7, 2011 7:29 AM (in response to jaikiran pai)
Hi,
It works with the global namespace: "java:global/webtest1/TestEjb"
I apologize for not trying that earlier. Sorry!
I'm new to JEE and do not really understand the different namespaces.
Do you know a good documentation about that topic?
Do you think I can get problems using that global namespace in my user-thread?
In the user-thread I'd like to delegate all work to an ejb. That ejb depends on other ejbs (using hibernate or jdbc for db-access).
Many thanks for your time!
Günther
XML Extension - Stuart Douglas, Jan 24, 2010 4:35 AM
I have started on XML configuration for CDI beans; it can be checked out from svn here. As yet there is no real documentation, so I thought I would post a few notes here in case anyone would like to try it out. There are also plenty of examples in the src/test/java directory. If you wish to build it you should run it against the trunk of Weld, as there are several bugfixes to the SPI in trunk that are not in 1.0.0.
Let's start with a simple example:
<?xml version="1.0" encoding="UTF-8"?>
<Beans xmlns="urn:seam:core"
       xmlns:test="urn:java:org.jboss.seam.xml.test.injection">

   <test:ProducerQualifier>
      <Qualifier/>
   </test:ProducerQualifier>

   <test:ProducerBean>
      <test:value>
         <Produces/>
         <test:ProducerQualifier/>
         <value>hello world</value>
      </test:value>
   </test:ProducerBean>

   <test:RecieverBean>
      <test:value>
         <test:ProducerQualifier/>
         <Inject/>
      </test:value>
   </test:RecieverBean>

</Beans>
From the top:
The root element of the file has to be <Beans/>, and the root namespace should be 'urn:seam:core'. You will also notice that there is a second namespace defined, 'urn:java:org.jboss.seam.xml.test.injection'; this namespace is used to resolve classes in the java package 'org.jboss.seam.xml.test.injection'.
<test:ProducerQualifier>
   <Qualifier/>
</test:ProducerQualifier>
The first entry in the file defines a new qualifier. ProducerQualifier is an annotation in the package 'org.jboss.seam.xml.test.injection'.
<test:ProducerBean>
   <test:value>
      <Produces/>
      <test:ProducerQualifier/>
      <value>hello world</value>
   </test:value>
</test:ProducerBean>
The next entry in the file is a bean declaration. The bean class is org.jboss.seam.xml.test.injection.ProducerBean. It is important to note that this declaration does not change the existing declaration of ProducerBean, instead it installs a new bean. In this instance there will be two ProducerBean CDI beans.
This bean has a field called 'value', this field is configured to be a producer field using XML (it is also possible to configure producer methods, more on this later). The <test:value/> declaration has several child elements. The <Produces/> element tells the container that this is a producer field. <test:ProducerQualifier/> element defines a qualifier for the producer field. The <value> element defines an initial value for the field.
Child elements of fields, methods and classes that resolve to Annotation types are considered to be annotations on the corresponding element, so the corresponding java declaration for the XML above would be:
public class ProducerBean {

   @Produces
   @ProducerQualifier
   public String value = "hello world";

}
<test:RecieverBean>
   <test:value>
      <test:ProducerQualifier/>
      <Inject/>
   </test:value>
</test:RecieverBean>
The XML above declares a new Bean that injects the value that was produced above. In this case the @Inject annotation is applied instead of @Produces and no initial value is set.
Replacing existing bean declarations
It is possible to prevent an existing bean being installed using the <veto/> tag:
<veto>
   <test:ProducerBean/>
   <test:RecieverBean/>
</veto>
The code above would prevent the ProducerBean and RecieverBean that was discovered in the bean discovery phase from being installed. Using the veto tag it is possible to replace beans instead of just installing new ones. I am planning on adding a way of adding additional annotations to existing Beans at some point in the future.
The root namespace
The root namespace can contain the following elements:
- Beans
- veto
- value
- key
- entry
- array
- e (alias for entry)
- v (alias for value)
- k (alias for key)
as well as classes from the following packages:
java.lang
java.util
javax.annotation
javax.inject
javax.enterprise.inject
javax.enterprise.context
javax.enterprise.event
javax.decorator
javax.interceptor
javax.persistence
javax.xml.ws
javax.jms
javax.sql
So the <Produces> element above actually resolved to javax.enterprise.inject.Produces and the @Inject annotation resolved to javax.inject.Inject.
Initial field values
Initial field values can be set in two different ways; in addition to the <value> element shown above they can be set as follows:
<test:someField>hello world</test:someField>
using this method prevents you from adding any annotations to the field.
It is possible to set Map,Array and Collection field values. Some examples:
<test:ArrayFieldValue>
   <test:iarray>
      <value>1</value>
      <value>2</value>
   </test:iarray>
   <test:carray>
      <value>java.lang.Integer</value>
      <value>java.lang.Long</value>
   </test:carray>
   <test:sarray>
      <value>hello</value>
      <value>world</value>
   </test:sarray>
</test:ArrayFieldValue>

<test:MapFieldValue>
   <test:map1>
      <entry><key>1</key><value>hello</value></entry>
      <entry><key>2</key><value>world</value></entry>
   </test:map1>
   <test:map2>
      <e><k>1</k><v>java.lang.Integer</v></e>
      <e><k>2</k><v>java.lang.Long</v></e>
   </test:map2>
</test:MapFieldValue>
Type conversion is done automatically for all primitives and primitive wrappers, Date, Calendar,Enum and Class fields. In this instance ArrayFieldValue.carray is actually an array of classes, not an array of Strings.
Configuring methods
It is also possible to configure methods in a similar way to configuring fields:
<?xml version="1.0" encoding="UTF-8"?>
<Beans xmlns="urn:seam:core"
       xmlns:test="urn:java:org.jboss.seam.xml.test.injection">

   <test:MethodBean>
      <test:method>
         <Produces/>
      </test:method>
      <test:method>
         <Produces/>
         <test:Qualifier1/>
         <test:MethodValueBean>
            <test:Qualifier2/>
         </test:MethodValueBean>
      </test:method>
   </test:MethodBean>

</Beans>

public class MethodBean {

   public int method() {
      return 1;
   }

   public int method(MethodValueBean bean) {
      return bean.value + 1;
   }

}
In this instance MethodBean has two methods, both of them rather imaginatively named 'method'. The first <test:method> entry in the XML file configures the method that takes no arguments. The <Produces/> element makes it into a producer method. The next entry in the file configures the method that takes a MethodValueBean as a parameter. When configuring methods, non-annotation classes are considered to represent method parameters. If these parameters have annotation children they are taken to be annotations on the parameter. In this instance the corresponding java declaration would be:
@Produces
@Qualifier1
public int method(@Qualifier2 MethodValueBean param) {
   // method body
}
Array parameters can be represented using the <array> element, with a child element to represent the type of the array. E.g.
int method(MethodValueBean[] param);
could be configured via xml using the following:
<test:method>
   <array>
      <test:MethodValueBean/>
   </array>
</test:method>
Annotation members
It is also possible to set the value of annotation members, either through XML attributes or through the element's inner text. For example:
public @interface OtherQualifier {
   String value1();
   int value2();
   QualifierEnum value();
}

<test:QualifiedBean1>
   <test:OtherQualifier value1="..." value2="...">A</test:OtherQualifier>
</test:QualifiedBean1>

<test:QualifiedBean2>
   <test:OtherQualifier value="A" value1="..." value2="..."/>
</test:QualifiedBean2>
The value member can be set using the inner text of the node, as seen in the first example.
Still to come
There is still a lot of work to be done, some of the things that are still to come include
- Configuration of remote and local EJB's
- Configuration of JMS resource
- Ability to redefine beans, instead of just adding and removing them
- Ability to redefine beans based on type or annotation (e.g. add an interceptor to all beans that implement a certain interface, prevent all beans with a particular annotation from being installed etc)
- Ability to configure primitive types as beans
And more. If you want more information have a look at the tests, also the JSR-299 public review draft section on XML Configuration was the base for this extension, so it may also be worthwhile reading.
1. Re: XML Extension - Arbi Sookazian, Jan 26, 2010 12:21 AM (in response to Stuart Douglas)
Bravo on your efforts, mate. But I thought all along that GKing and the gang are gung ho about annotations and against non-type-safe XML configs??? Is this an attempt to compete with the traditional Spring 2.0-style of config?
2. Re: XML Extension - Nicklas Karlsson, Jan 26, 2010 7:52 AM (in response to Stuart Douglas)
The subject says it all, XML extension. Choice is good and there is a big difference between can and must.
3. Re: XML Extension - Pete Muir, Jan 26, 2010 2:46 PM (in response to Stuart Douglas)
Excellent work Stuart!
#include "UT_Assert.h"
#include "UT_IteratorRange.h"
#include "UT_NonCopyable.h"
#include <SYS/SYS_Types.h>
#include <SYS/SYS_TypeTraits.h>
#include <functional>
#include <list>
#include <type_traits>
#include <unordered_map>
A default helper function used by UT_LRUCache to determine whether an object is currently in use and so should not be deleted when the cache gets pruned.
Definition at line 59 of file UT_LRUCache.h.
A default helper function used by UT_LRUCache to determine the size of the objects it stores to help prune the storage so that it doesn't exceed the maximum given in the constructor.
Definition at line 46 of file UT_LRUCache.h.
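The page does not show the cache's full interface, but conceptually such default helpers are trivial callbacks. The sketch below is only a generic illustration of the idea, not the actual HDK declarations:

#include <SYS/SYS_Types.h>

// Generic illustration: by default nothing is considered "in use",
// so any entry may be pruned when the cache exceeds its maximum...
template <typename T>
static bool defaultIsInUse(const T &) { return false; }

// ...and every entry is charged a unit size, so pruning is driven purely by entry count.
template <typename T>
static exint defaultSize(const T &) { return 1; }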
Urho3D::FileSelector Class Reference
File selector dialog.
#include <Urho3D/UI/FileSelector.h>
Inheritance diagram for Urho3D::FileSelector:
Collaboration diagram for Urho3D::FileSelector:
Detailed Description
File selector dialog.
The documentation for this class was generated from the following files:
- Source/Urho3D/UI/FileSelector.h
- Source/Urho3D/UI/FileSelector.cpp
I have a couple of questions about calling values from methods. I'm not sure if I'm even wording this right but I hope someone will understand what I mean. The code is below.
using System;

namespace CarpetCalculatorExample
{
    class CarpetCostCalculatorUsingMethods
    {
        static void Main()
        {
            double roomWidth;
            double roomLength;
            double pricePerSqMetre;
            double noOfSqMetres;
            double carpetCost;

            DisplayInstructions();
            GetDimensions();
            GetPrice();
            DetermineSquareMetres(0, 0);

            Console.Write("\nPress any key to terminate program");
            Console.ReadKey();
        }

        public static void DisplayInstructions()
        {
            Console.WriteLine("This program will determine how much carpet to purchase.");
            Console.WriteLine();
            Console.WriteLine("You will be asked to enter the size of the room," + "length and breadth in metres. ");
            Console.WriteLine("and the price of the carpet, as price per square metres.");
            Console.WriteLine();
        }

        public static void GetDimensions()
        {
            Console.Write("Enter the length in metres: ");
            double length = double.Parse(Console.ReadLine());
            Console.Write("Enter the width in metres: ");
            double width = double.Parse(Console.ReadLine());
        }

        public static void GetPrice()
        {
            Console.Write("Enter the price per square metre: $");
            double pricePerSqMetres = double.Parse(Console.ReadLine());
        }

        public static void DetermineSquareMetres(double length, double width)
        {
            double noOfSqMetres;
            noOfSqMetres = length * width;
            Console.WriteLine("Square Metres needed: " + noOfSqMetres);
        }
    }
}
I am struggling with DetermineSqMetres as what I'm really wanting to do is multiply the length and width which the user inputs in GetDimensions. I'm not quite sure how to call that result from GetDimensions.
I've also noticed, when I call DetermineSquareMetres in Main, I've had to put a value in the parameters otherwise I keep getting "No overload for method 'DetermineSquareMetres' takes 0 arguments". So at this moment, I've put 0 and 0 in there because it only seems to want a number value. I realise this doesn't work, but I'm not quite sure how else to fix it.
Any help would be appreciated!
Thanks
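For what it's worth, here is a rough sketch of one way to wire the values through return values. The helper names GetLength and GetWidth are made up for illustration and are not in the original program:

using System;

class Sketch
{
    static void Main()
    {
        double length = GetLength();
        double width = GetWidth();
        double sqMetres = DetermineSquareMetres(length, width);   // now returns a value
        Console.WriteLine("Square Metres needed: " + sqMetres);
    }

    static double GetLength()
    {
        Console.Write("Enter the length in metres: ");
        return double.Parse(Console.ReadLine());   // hand the value back instead of discarding it
    }

    static double GetWidth()
    {
        Console.Write("Enter the width in metres: ");
        return double.Parse(Console.ReadLine());
    }

    static double DetermineSquareMetres(double length, double width)
    {
        return length * width;
    }
}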
Observations on packaging
By sch on Jul 19, 2007
Over the past few months, a bunch of us have been exploring various options for packaging. (Actually, I suppose I've been pursuing this on a part-time basis for a year now, but it's only recently that it's made it to the top of the stack.) I've looked at a bunch of packaging systems, ported a few to OpenSolaris, run a bunch on their native platforms, and read a slew of manual pages, FAQs, blogs, and the like. In parallel with those efforts, Bart, Sanjay, and others have been analyzing the complexity around patching in the Solaris 10 patch stream, and improving the toolset to address the forms of risk that the larger features in the update releases presented.
In the course of those investigations, we've come up with a number of different approaches to understanding requirements and possibilities; I'll probably write those out more fully in a proper design document, but I thought it would be helpful to outline some of those constraints here. For instance, one way to look at how we might improve packaging is to separate the list of "design inputs" for a packaging system into "old" and "new".
When I make a list of this kind, I know it's bound to offend. (I already know it's more scattershot than the argument we'll present in a design document.) Feel free to send me the important inputs I've omitted, as I have a few follow-up posts on requirements—lower level, more specific intentions—and architectural thoughts where I can cover any issues not mentioned here.
"Old" inputs
- As long or longer, we've supported the notion of diskless systems, where the installed image is shared out, in portions, between multiple running environments. Zones was a direct evolutionary successor of the diskless environments, and shows that this approach can lead to some very economical deployment models. Lower-level virtualization methods can also benefit from the administrative precision that comes out of sharing known portions of the installation image.
- Availability and liveness. With Live Upgrade, it's been possible for a while now to update a second image on a running system while the first image is active—a command to switch images and a reboot is all that's required to be running the new software. This approach requires too much feature-specific knowledge at present, but provides a very safe approach to installing an upgraded kernel or larger software stack, as reverting to the previous version is just a matter of switching back to the previous image. So, a package installation option that doesn't ruin a working system is a must.
- Change control. In principle, with the current set of package and patch operations, it is possible to create a very specific configuration—that may never have been tested at any other site, mind you—from the issued stream of updates. From a service perspective, the variety of possible configuration combinations is too broad, but the ability to precisely know your configuration and make it a fixed point remains valuable: you know where everything is. That said, the current components need to be assembled into some locally relevant architecture by each site to be useful; any replacement needs to make this assembly unnecessary, potentially via a number of mechanisms, but definitely via a list of known (trusted?) public package repositories.
"New" inputs
But, as you might expect, the efforts around Approachability, Modernization, and Indiana have brought to light some new qualities a packaging system must possess.
- Safety. One of the real complications from virtualized systems, at least in current packaging, is that a developer has to understand each of the image types his or her package might reach, and make sure that the set of pre- and post-install scripts, class actions scripts, and the like are correct for each of these cases. When that doesn't happen, the package is at a minimum incorrectly installed; in certain real cases, this class of failures compromises other data on the system. Restrictions in this space, particularly during package operations on an inert image, seem like a promising trade-off to achieve greater safety.
- Developer burden. Current packaging requires the developer to provide a description of the package, its dependencies, and contents across a minimum of three files, in addition to knowing a collection of rules and guidelines around files, directories, and package boundaries. Most of these conventions should be encoded and enforced by the package creation and installation tools themselves, and it should be possible to construct a package with only a single file—or from an existing package or archive.
- Image construction. For a while, the current system has had some very coarse package "blocks", from which one could construct a system—the various metaclusters. These, with the exception of the minimally required metacluster and the entire metacluster, split the system along boundaries of interest only to workstation installs. Any suitable packaging system must provide query and image management tools to make image construction more flexible and less error-prone (and eliminate the need for things like a "developer" metacluster, for that matter).
It's also pretty clear that the package boundaries aren't optimized in any fashion, as evidenced by the differing rates of change of the binaries they currently enclose—in the form of the issued patches against Solaris 10.
- Multiple streams of change, of a single kind. Although we noted the continued need to control change above, it's also important to be able to subscribe to a stream of change consistent with one's expectations around system stability. The package system needs to allow one to bind to one or more streams of change, and limit how the interfaces the aggregate binaries from those streams evolve. That is, it should be possible to subscribe to a stream of only the most important updates, like security and data corruption fixes, or to the development stream, so that one's system changes in a way consistent with one's expectations.
Conversely, the tradeoff between complexity and space optimization in current patches—which introduce a separate and difficult-to-calculate dependency graph and distinct namespace entries for each platform, among other issues—has slid much too far, given the increase in system richness and the increases in disk space and bandwidth. There seems to be little long-term benefit in preserving the current patch mechanism, particularly since Sun never offered it in a form useful outside of Sun's own software.
- A further, repository-side requirement is that the set of available packages be known to be self-consistent and correct at all times, to spare the propagation of incomplete configurations. In a packaging system, this input expands on our intent to reduce the developer burden: to assist the developer in writing as correct a package as possible, and to enable the repository operator to block incomplete packages from being made available to clients. There's a rich set of data here, as we noted for smf(5) service descriptions above.
- Friendly user deployment. Direct from Indiana, but sensible and apparent to all, is that packaging systems have advanced to a point where the usability of the client is expected, and not an innovation. I haven't got the complete timeline of packaging developments—the literature survey continues—but it's clear that Debian's tools, among others, set that expectation.
We've taken these inputs on board (all of them, I hope) and have been quietly prototyping for a little while now, to see if we had a handle on this collection of inputs. I'd like to give a bit more background, in the form of requirements and assertions, in an upcoming post or two. Then we're hoping to start a project on opensolaris.org to take the idea all the way from notional fancy to functioning code.
Adobe Flex is one of the most widely used client technologies for building rich applications and Spring 3 is one of the most popular Java application frameworks. These two technologies make a great combination for building enterprise applications with a modern looking and rich user interface.
There are various options to integrate them, that each have their pros and cons, such as Web/REST services and the Spring-Flex project promoted by Adobe and SpringSource. There are lots of articles and resources about them, here I will focus on an alternative approach using the open source project GraniteDS.
GraniteDS is based on a cleanroom implementation of the AMF3 remoting and messaging protocols, and has been historically the first open source implementation of the AMF3 protocol in Java. It has been providing out-of-the-box integration with Spring very early and this integration has continually been improved with each new version of GraniteDS, following a few core principles :
- Provide a fully integrated Spring/Flex/GraniteDS RIA platform that makes configuration and integration code mostly inexistant. The platform includes in particular the tools and Flex client libraries necessary to easily use all features of Spring and its associated technologies (persistence, security...).
- Promote type-safety as much as possible in both Java and AS3 applications, ensuring that most integration issues can be detected early at compile/build time.
These core design choices make GraniteDS very different from for example Adobe BlazeDS that has only a server-side part. In this article I will show this concept of RIA platform at work by building a simple application using the following features :
- Flex AMF remoting to Spring services.
- Support for Hibernate/JPA detached entities directly in the Flex application. Bye bye DTOs and lazy initialization exceptions.
- Support for the Bean Validation API (JSR-303) with the corresponding Flex validators.
- Support for Spring Security 3 and Flex components that integrate with server-side authorization.
- Support for 'real-time' data push.
As a side note, GraniteDS still supports the classic Flex RemoteObject API and is thus a close drop-in replacement for BlazeDS with some useful enhancements, but provides an alternative Flex API called Tide that is easier to use and brings the full power of the platform.
Project setup
We have to start somewhere, and the first step is to create the Spring application. This is no big deal, we could just start by a standard Spring MVC application, and just add a few GraniteDS elements in the Spring application context. To make things easier, I'm going to use a Maven archetype (Maven 3 recommended) :
mvn archetype:generate -DarchetypeGroupId=org.graniteds.archetypes -DarchetypeArtifactId=graniteds-tide-spring-jpa-hibernate -DgroupId=org.example -DartifactId=gdsspringflex -Dversion=1.0-SNAPSHOT
This creates a basic project skeleton that includes persistence, security and real-time data push features. As a starting point you can simply build the project and run it with the Maven jetty plugin :
cd gdsspringflex
mvn install
cd webapp
mvn jetty:run-war
Then browse to the deployed application and log in with admin/admin or user/user.
The structure of the project is a classic multi-module Maven project with a Flex module, a Java module and a Web application module. It uses the very nice flexmojos plugin to build the Flex application with Maven.
gdsspringflex
|- pom.xml
|- flex
|  |- pom.xml
|  |- src/main/flex
|     |- Main.mxml
|     |- Login.mxml
|     |- Home.mxml
|- java
|  |- pom.xml
|  |- src/main/java
|     |- org/example/entities
|     |  |- AbstractEntity.java
|     |  |- Welcome.java
|     |- org/example/services
|        |- ObserveAllPublishAll.java
|        |- WelcomeService.java
|        |- WelcomeServiceImpl.java
|- webapp
   |- pom.xml
   |- src/main/webapp
      |- WEB-INF
         |- web.xml
         |- dispatcher-servlet.xml
         |- spring
            |- app-config.xml
            |- app-jpa-config.xml
            |- app-security-config.xml
If we forget about the default generated application sources in the Flex and Java modules, and focus only on configuration, the most interesting files are web.xml and the app*config.xml Spring configuration files.
web.xml basically includes Spring 3 listeners, a Spring MVC dispatcher servlet mapped on /graniteamf/* that will handle AMF requests, and a Gravity servlet for Jetty mapped on /gravityamf/* (Gravity is the name of the GraniteDS Comet-based messaging implementation).
<web-app>

    <display-name>GraniteDS Tide/Spring</display-name>
    <description>GraniteDS Tide/Spring Archetype Application</description>

    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>
            /WEB-INF/spring/app-config.xml,
            /WEB-INF/spring/app-*-config.xml
        </param-value>
    </context-param>

    <!-- Spring listeners -->
    <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>
    <listener>
        <listener-class>org.springframework.web.context.request.RequestContextListener</listener-class>
    </listener>

    <!-- Spring MVC dispatcher servlet that handles incoming AMF requests on the /graniteamf endpoint -->
    <servlet>
        <servlet-name>dispatcher</servlet-name>
        <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>dispatcher</servlet-name>
        <url-pattern>/graniteamf/*</url-pattern>
    </servlet-mapping>

    <!-- Gravity servlet that handles AMF asynchronous messaging requests on the /gravityamf endpoint -->
    <servlet>
        <servlet-name>GravityServlet</servlet-name>
        <servlet-class>org.granite.gravity.jetty.GravityJettyServlet</servlet-class>
        <!--servlet-class>org.granite.gravity.tomcat.GravityTomcatServlet</servlet-class-->
        <!--servlet-class>org.granite.gravity.jbossweb.GravityJBossWebServlet</servlet-class-->
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>GravityServlet</servlet-name>
        <url-pattern>/gravityamf/*</url-pattern>
    </servlet-mapping>

    <welcome-file-list>
        <welcome-file>index.html</welcome-file>
    </welcome-file-list>

</web-app>
The reason why there is a specific servlet for Gravity is that it is optimized to use the specific asynchronous capabilities of the underlying servlet container to get better scalability, and this can not be achieved with the default Spring MVC dispatcher servlet. That's also why this is necessary to configure different servlet implementations depending on the target container (Tomcat, JBossWeb...).
Next is the main Spring 3 configuration that is mostly basic Spring MVC stuff :
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:graniteds="http://www.graniteds.org/config">

    <!-- Annotation scan -->
    <context:component-scan

    <!-- Spring MVC configuration -->
    <bean class="org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping"/>
    <bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter"/>

    <!-- Configuration of GraniteDS -->
    <graniteds:flex-filter

    <!-- Simple messaging destination for data push -->
    <graniteds:messaging-destination

</beans>
The main thing concerning GraniteDS is the flex-filter declaration. There is also an example messaging topic that is used by the default Hello World application. app-jpa-config.xml contains the JPA configuration and does not include anything about GraniteDS. Lastly Spring Security :
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:security="http://www.springframework.org/schema/security"
       xmlns:graniteds="http://www.graniteds.org/config">

    <security:authentication-manager>
        <security:authentication-provider>
            <security:user-service>
                <security:user name="admin" password="admin" authorities="ROLE_USER, ROLE_ADMIN"/>
                <security:user name="user" password="user" authorities="ROLE_USER"/>
            </security:user-service>
        </security:authentication-provider>
    </security:authentication-manager>

    <security:global-method-security secured-

    <!-- Configuration for Tide/Spring authorization -->
    <graniteds:tide-identity/>

    <!-- Uncomment when there are more than one authentication-manager :
    <graniteds:security-service
    -->

</beans>
Once again mostly Spring stuff, we just find here the tide-identity bean that is used to integrate Spring Security with the Tide Identity Flex component.
We're done for the server-side setup. GraniteDS detects automatically most of the Spring configuration at startup and configures itself accordingly, so these 10 lines of XML are generally enough for most projects. If you have a look at the various Maven POMs, you will find the dependencies on the server-side GraniteDS jars and client-side GraniteDS swcs. You can also have a look at the Flex mxml code of the example application generated by the archetype, but for now I will start from scratch.
Remoting to Spring services
First the traditional Hello World and its incarnation as a Spring 3 service :
@RemoteDestination
public interface HelloService {

    public String hello(String name);
}

@Service("helloService")
public class HelloServiceImpl implements HelloService {

    public String hello(String name) {
        return "Hello " + name;
    }
}
You have probably noticed the @RemoteDestination annotation on the interface, meaning that the service is allowed to be called remotely from Flex. Now the Flex application :
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               preinitialize="init()">

    <fx:Script>
        <![CDATA[
            import org.granite.tide.Component;
            import org.granite.tide.spring.Spring;
            import org.granite.tide.events.TideResultEvent;
            import org.granite.tide.service.DefaultServiceInitializer;

            private function init():void {
                Spring.getInstance().initApplication();
                Spring.getInstance().getSpringContext().serviceInitializer =
                    new DefaultServiceInitializer('/gdsspringflex');
            }

            [In]
            public var helloService:Component;

            private function hello(name:String):void {
                helloService.hello(name,
                    function(event:TideResultEvent):void {
                        message.text = "Message: " + (event.result as String);
                    }
                );
            }
        ]]>
    </fx:Script>

    <s:VGroup>
        <s:Label text="Enter a name:"/>
        <s:TextInput id="name"/>
        <s:Button label="Hello" click="hello(name.text)"/>
        <s:Label id="message"/>
    </s:VGroup>

</s:Application>
You can rebuild and restart the project with :
mvn clean install
cd webapp
mvn jetty:run-war
Well, that's not exactly the shortest Hello World application, but let's see the interesting bits :
- The init() method is called in the preinitialize handler. It does two things: initialize the Tide framework with Spring support, and declare a service initializer with the context root of our application. The task of the service initializer is to setup all remoting/messaging stuff, such as server endpoint uris, channels, etc... Basically it replaces the traditional Flex static services-config.xml file. Other implementations can easily be built for example to retrieve the channels configuration dynamically from a remote file (useful with an AIR application for example).
- A client proxy for the helloService Spring bean is injected in the mxml by using the annotation [In]. By default the variable name should match the Spring service name, otherwise we would have to specify the service name in the [In("helloService")] annotation. This may seem like a 'magic' injection but as we asked for an instance of Component, the framework knows for sure that you want a client proxy for a remote bean.
- The hello function demonstrates the basic Tide remoting API. It simply calls a remote method on the client proxy with the required arguments and provide callbacks for result and fault events, much like jQuery, so you don't have to deal manually with event listeners, asynchronous tokens, responders and all the joys of RemoteObject.
Now that we got the basics working, we can improve this a little. The Flex project POM is configured to automatically generate (with the GraniteDS Gas3 generator embedded in flexmojos) typesafe AS3 proxies for all Java interfaces named *Service and annotated with @RemoteDestination. That means we could also simply write this :
import org.example.services.HelloService;

[In]
public var helloService:HelloService;
This looks like a minor cosmetic change, but now you benefit from code completion in your IDE and from better error checking by the Flex compiler. Going even further, the whole injection can be made completely typesafe and not rely on the service name any more by using the annotation [Inject] instead of [In] (note that we can now give our injected variable any name) :
[Inject]
public var myService:HelloService;

private function hello(name:String):void {
    myService.hello(name,
        function(event:TideResultEvent):void {
            message.text = "Message: " + (event.result as String);
        }
    );
}
And as the injection in the client now uses the interface name, we don't have to give a name to the Spring service any more :
@Service
public class HelloServiceImpl implements HelloService {

    public String hello(String name) {
        return "Hello " + name;
    }
}
Now you can do any refactoring you want on the Java side, like changing method signatures; Gas3 will then regenerate the AS3 proxies and the Flex compiler will immediately tell you what's wrong. Another interesting thing is that the Flex mxml now looks like a Spring bean, making it very easy for Spring developers to get started with Flex.
Persistence and integration with Hibernate/JPA
Let's go a bit further and see how to do simple CRUD with a couple of JPA entities :
@Entity
public class Author extends AbstractEntity {

    @Basic
    private String name;

    @OneToMany(cascade=CascadeType.ALL, fetch=FetchType.LAZY, mappedBy="author", orphanRemoval=true)
    private Set<Book> books = new HashSet<Book>();

    // Getters/setters
    ...
}

@Entity
public class Book extends AbstractEntity {

    @Basic
    private String title;

    @ManyToOne(optional=false)
    private Author author;

    // Getters/setters
    ...
}
Both entities extend AbstractEntity, but that is not mandatory at all; it's just a helper class provided by the Maven archetype.
Now we create a simple Spring service to handle basic CRUD for these entities (obviously a politically correct Spring service should use DAOs, but DAOs are awful and won't change anything here) :
@RemoteDestination
public interface AuthorService {

    public List<Author> findAllAuthors();

    public Author createAuthor(Author author);

    public Author updateAuthor(Author author);

    public void deleteAuthor(Long id);
}

@Service
public class AuthorServiceImpl implements AuthorService {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional(readOnly=true)
    public List<Author> findAllAuthors() {
        return entityManager.createQuery("select a from Author a order by a.name").getResultList();
    }

    @Transactional
    public Author createAuthor(Author author) {
        entityManager.persist(author);
        entityManager.refresh(author);
        return author;
    }

    @Transactional
    public Author updateAuthor(Author author) {
        return entityManager.merge(author);
    }

    @Transactional
    public void deleteAuthor(Long id) {
        Author author = entityManager.find(Author.class, id);
        entityManager.remove(author);
    }
}
And the Flex application :
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               xmlns:mx="library://ns.adobe.com/flex/mx"
               preinitialize="init()">

    <fx:Script>
        <![CDATA[
            import mx.controls.Alert;
            import mx.collections.ArrayCollection;
            import mx.data.utils.Managed;

            import org.granite.tide.spring.Spring;
            import org.granite.tide.service.DefaultServiceInitializer;
            import org.granite.tide.events.TideResultEvent;
            import org.granite.tide.events.TideFaultEvent;
            import org.granite.tide.TideResponder;

            import org.example.entities.Author;
            import org.example.services.AuthorService;

            private function init():void {
                Spring.getInstance().initApplication();
                Spring.getInstance().getSpringContext().serviceInitializer =
                    new DefaultServiceInitializer('/gdsspringflex');
            }

            [Inject]
            public var authorService:AuthorService;

            [Bindable]
            public var authors:ArrayCollection;

            private function findAllAuthors():void {
                authorService.findAllAuthors(
                    function(event:TideResultEvent):void {
                        authors = ArrayCollection(event.result);
                    }
                );
            }

            [Bindable]
            private var author:Author = new Author();

            private function createAuthor():void {
                authorService.createAuthor(author,
                    function(event:TideResultEvent):void {
                        authors.addItem(author);
                        author = new Author();
                    },
                    function(event:TideFaultEvent):void {
                        Alert.show(event.fault.toString());
                    }
                );
            }

            private function editAuthor():void {
                currentState = 'edit';
                author = Author(lAuthors.selectedItem);
            }

            private function updateAuthor():void {
                authorService.updateAuthor(lAuthors.selectedItem,
                    function(event:TideResultEvent):void {
                        lAuthors.selectedItem = null;
                        author = new Author();
                        currentState = 'create';
                    },
                    function(event:TideFaultEvent):void {
                        Alert.show(event.fault.toString());
                    }
                );
            }

            private function cancelAuthor():void {
                lAuthors.selectedItem = null;
                author = new Author();
                currentState = 'create';
            }

            private function deleteAuthor():void {
                authorService.deleteAuthor(lAuthors.selectedItem.id,
                    function(event:TideResultEvent):void {
                        var idx:int = authors.getItemIndex(lAuthors.selectedItem);
                        authors.removeItemAt(idx);
                        lAuthors.selectedItem = null;
                        author = new Author();
                        currentState = 'create';
                    },
                    function(event:TideFaultEvent):void {
                        Alert.show(event.fault.toString());
                    }
                );
            }
        ]]>
    </fx:Script>

    <s:states>
        <s:State name="create"/>
        <s:State name="edit"/>
    </s:states>

    <s:Group>
        <s:layout>
            <s:VerticalLayout/>
        </s:layout>

        <mx:Form>
            <mx:FormHeading label.
            <mx:FormItem label="Name">
                <s:TextInput id="iName" text="@{author.name}"/>
            </mx:FormItem>
            <mx:FormItem>
                <s:HGroup>
                    <s:Button label.create="Create" label.edit="Save"
                              click.create="createAuthor()" click.edit="updateAuthor()"/>
                    <s:Button label="Cancel" includeIn="edit" click="cancelAuthor()"/>
                    <s:Button label="Delete" includeIn="edit" click="deleteAuthor()"/>
                </s:HGroup>
            </mx:FormItem>
        </mx:Form>

        <s:Label text="Authors"/>
        <s:List id="lAuthors" dataProvider="{authors}" labelField="name" change="editAuthor()"/>
        <s:Button label="Refresh" click="findAllAuthors()"/>
    </s:Group>

</s:Application>
This is a simple CRUD application using some very convenient Flex 4 features such as states and bidirectional data binding. Note how all boilerplate code to connect the client and the server has literally disappeared while still keeping a clean separation between the two layers. In the real world we would probably want to apply some MVC pattern instead of a monolithic mxml but this would not change much. The important thing is that the Flex and Java parts do not contain useless or redundant code and are thus a lot easier to maintain. Even using model-driven code generators to build the Flex application automatically would be easier because there is basically much less code to generate.
Now rebuild and run the application on Jetty, and check that you can create, update and delete authors. There are two minor issues with this example that have not much interest in themselves but that I will use to show two interesting features of Tide.
First issue : when you start updating the name of an author, the change is propagated to the list by bidirectional binding but the previous value is not restored when you click on 'Cancel', leading to an inconsistent display. This is mainly a problem of Flex 4 bidirectional binding because it propagates all changes immediately but it is not able to rollback these changes. We would have three options to fix it : save the entity state somewhere before editing and restore it upon cancel, copy the original data and bind the input fields on the copy (but then the list would not be updated by binding), or avoid using bidirectional binding. None of these options is really appealing, fortunately Tide provides a very simple feature to deal with this and makes bidirectional binding really usable :
private function cancelAuthor():void {
    Managed.resetEntity(author);
    lAuthors.selectedItem = null;
    author = new Author();
    currentState = 'create';
}
Managed.resetEntity() simply rolls back all changes done on the Flex side and restores the last stable state received from the server.
Second issue : clicking on refresh loses the current selected item. This is because we replace the dataProvider of the list each time we receive a new collection. This can easily be fixed by using the data merge functionality of Tide :
public var authors:ArrayCollection = new ArrayCollection();

private function findAllAuthors():void {
    authorService.findAllAuthors(new TideResponder(null, null, null, authors));
}
This relatively ugly remote call using a TideResponder indicates to Tide that it should merge the call result with the provided variable instead of completely replacing the collection with 'authors = event.result' in the result handler. Note that we don't even need a result handler any more, once again saving a few lines of codes.
This issue with item selection illustrates why keeping the same collection and entity instances accross remote calls is very important if you want to setup data-driven visual effects or animations (remember, the R of RIA). All visual features of Flex highly depend on the object instance that drives the effect and the events that it dispatches. This is where the Tide entity cache and merge helps a lot by ensuring that each entity instance will exist only once and dispatch only the necessary events when it is updated from the server.
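As a rough illustration of that point (the callback wiring here is simplified and hypothetical, it is not from the original article), the merge keeps object identity stable across refreshes:

var first:Author = Author(authors.getItemAt(0));
authorService.findAllAuthors(new TideResponder(
    function(event:TideResultEvent):void {
        // the refreshed data was merged into the existing instances, so the item
        // at index 0 is still the very same object and the List selection and
        // any bindings on it survive the refresh
        trace(first === authors.getItemAt(0));   // true
    },
    null, null, authors));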
I'll finish this part by showing how to display and update the collection of books. For now you have maybe noticed that the existence of this collection did not cause any problem at all, though it is marked lazy on the JPA entity. No LazyInitializationException, and no particular issue when merging the entity modified in Flex in the JPA persistence context. GraniteDS has transparently serialized and deserialized the Hibernate internal state of the collection back and forth, thus making the data coming from Flex appear exactly as if it came from a Java client.
So let's try to implement the editing of the list of books in the update form. We don't have to change anything in the service, Hibernate will take care of persisting the collection because of the cascading option we selected. On the Flex side, we can use an editable List (note that there is no built-in editable Spark List in Flex 4, so we use a custom ItemRenderer inspired from this blog post, see the full sources attached), and add this :
<mx:FormItem label="Books">
    <s:HGroup>
        <s:List id="lBooks" dataProvider="{author.books}"/>
        <s:VGroup>
            <s:Button label="Add" click="addBook()"/>
            <s:Button label="Remove" click="removeBook()"/>
        </s:VGroup>
    </s:HGroup>
</mx:FormItem>
And the corresponding script actions :
private function addBook():void {
    var book:Book = new Book();
    book.author = author;
    author.books.addItem(book);
    lBooks.selectedIndex = author.books.length-1;
}

private function removeBook():void {
    author.books.removeItemAt(lBooks.selectedIndex);
}
The Tide framework automatically takes care of initializing the collection when needed, you just have to bind it to any Flex data component such as List. Once again all the usual boilerplate code necessary to deal with data and collections has completely disappeared.
Integration with Bean Validation
Our application is still missing a critical piece : data validation. As a first step, we can leverage the Hibernate integration with Bean Validation and simply annotate our entities to let the server handle validation :
@Entity
public class Author extends AbstractEntity {

    @Basic
    @Size(min=2, max=25)
    private String name;

    @OneToMany(cascade=CascadeType.ALL, fetch=FetchType.LAZY, mappedBy="author", orphanRemoval=true)
    @Valid
    private Set<Book> books = new HashSet<Book>();

    // Getters/setters
    ...
}

@Entity
public class Book extends AbstractEntity {

    @Basic
    @Size(min=2, max=100)
    private String title;

    ...
}
Once you change this and redeploy, creating an invalid author has now become impossible but the error message is a mess that cannot be understood by a real user. We could simply add a particular behaviour in the fault handler :
function(event:TideFaultEvent):void {
    if (event.fault.faultCode == 'Validation.Failed') {
        // Do something interesting, for example show the first error message
        Alert.show(event.fault.extendedData.invalidValues[0].message);
    }
    else
        Alert.show(event.fault.toString());
}
We now get the standard Flex validation error popup on the correct input field. It's definitely nicer, but it would be even better if we didn't have to call the server at all to check for text size. Of course we could manually add Flex validators to each field, but it would be very tedious and we would have to maintain consistency between the client-side and server-side validator rules.
Fortunately with GraniteDS 2.2 it can be a lot easier. If you have a look at the generated ActionScript 3 entity for Author (in fact its parent class AuthorBase.as found in flex/target/generated-sources), you will notice that annotations have been generated corresponding to the Java Bean Validation annotations.
The FormValidator component is able to use these annotations and automatically handle validation on the client side, but we first have to instruct the Flex compiler that it should keep these annotations in the compiled classes, which is not the case by default. In the Flex module pom.xml, you can find a flexmojos configuration section listing the AS3 metadata to keep; just add the validation annotations that we are using here :
<keepAs3Metadata>NotNull</keepAs3Metadata>
<keepAs3Metadata>Size</keepAs3Metadata>
<keepAs3Metadata>Valid</keepAs3Metadata>
After a new clean build and restart, you can see that the GraniteDS validation engine now enforces the constraints on the client, which gives the user a much better feedback about its actions. We can also prevent any call to the server when something is wrong :
private function updateAuthor():void {
    if (!fvAuthor.validateEntity())
        return;

    authorService.updateAuthor(...);
}
Integration with Spring Security
All is good, but anyone can modify anything on our ultra critical book database, so it's time to add a bit of security. The Maven archetype includes a simple Spring Security 3 setup with two users admin/admin and user/user that is obviously not suitable for any real application but that we can just use as an example. The first step is to add authentication, and we can for example reuse the simple login form Login.mxml provided by the archetype. We just need the logic to switch between the login form and the application, so we create a new main mxml by renaming the existing Main.mxml to Home.mxml and creating a new Main.mxml :
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               xmlns="*"
               preinitialize="init()"
               currentState="{identity.loggedIn ? 'loggedIn' : 'loggedOut'}">

    <fx:Script>
        <![CDATA[
            import org.granite.tide.spring.Spring;
            import org.granite.tide.spring.Identity;
            import org.granite.tide.service.DefaultServiceInitializer;

            [Bindable] [Inject]
            public var identity:Identity;

            private function init():void {
                // Define service endpoint resolver
                Spring.getInstance().getSpringContext().serviceInitializer =
                    new DefaultServiceInitializer('/gdsspringflex');
                // Check current authentication state
                identity.isLoggedIn();
            }
        ]]>
    </fx:Script>

    <s:states>
        <s:State name="loggedOut"/>
        <s:State name="loggedIn"/>
    </s:states>

    <s:controlBarContent>
        <s:Label text="GraniteDS Tide/Spring"/>
        <s:Button label="Logout" includeIn="loggedIn" click="identity.logout()"/>
    </s:controlBarContent>

    <Login id="loginView" excludeFrom="loggedIn"/>
    <Home id="homeView" includeIn="loggedIn"/>

</s:Application>
As you can see we have moved the Tide initialization to this new mxml, and added two main blocks to handle authentication :
- Once again we use the handy Flex 4 states to display the login form or the application, and bind the current state to the Tide Identity component loggedIn property that represents the current authentication state. If you remind of the Spring security configuration, it's the client counterpart of the tide-identity component we declared there.
- We call identity.isLoggedIn() at application startup to detect if the user is already authenticated, so for example a browser refresh will not redisplay the login form. It can also be useful when the authentication is done through a simple Web page and you just want to retrieve the authentication state instead of displaying a Flex login form.
Except removing the Tide initialization, we also need to do a small change to Home.mxml as it is not the main mxml any more :
<s:VGroup xmlns:fx="http://ns.adobe.com/mxml/2009"
          xmlns:s="library://ns.adobe.com/flex/spark"
          xmlns:mx="library://ns.adobe.com/flex/mx">

    <fx:Metadata>[Name]</fx:Metadata>

    <fx:Script>
        <![CDATA[
            import mx.controls.Alert;
            import mx.collections.ArrayCollection;

            import org.granite.tide.events.TideResultEvent;
            import org.granite.tide.events.TideFaultEvent;

            import org.example.entities.Author;
            import org.example.services.AuthorService;

            [Inject]
            public var authorService:AuthorService;
        ]]>
    </fx:Script>

    ...

</s:VGroup>
The metadata [Name] indicates that this mxml has to be managed by Tide (i.e. injection, observers...). Without it, nothing will work any more.
Now we would like to prevent non-administrator users from deleting authors. As we did with validation, we can in a first step rely on server-side security and simply annotate the service method :
@Transactional
@Secured("ROLE_ADMIN")
public void deleteAuthor(Long id) {
    Author author = entityManager.find(Author.class, id);
    entityManager.remove(author);
}
You can check that you cannot delete an author when logged in as user. The error message is handled by our fault handler and displayed as an alert. It's a bit tedious to handle this in each and every fault handler, so you can define a custom exception handler that will globally intercept such security errors that always have a faultCode 'Server.Security.AccessDenied' :
public class AccessDeniedExceptionHandler implements IExceptionHandler {

    public function accepts(emsg:ErrorMessage):Boolean {
        return emsg.faultCode == 'Server.Security.AccessDenied';
    }

    public function handle(context:BaseContext, emsg:ErrorMessage):void {
        // Do whatever you want here, for example a simple alert
        Alert.show(emsg.faultString);
    }
}
And register this handler in the main mxml with :
Spring.getInstance().addExceptionHandler(AccessDeniedExceptionHandler);
Now authorization errors will be properly handled and displayed for all remote calls.
That would be even better if we didn't even display the 'Delete' button to our user if he's not allowed to use it. It's very easy to hide or disable parts of the UI depending on user access rights by using the Identity component that has a few methods similar to the Spring Security jsp tags :
<s:Button label="Delete" enabled="{identity.ifAllGranted('ROLE_ADMIN')}" click="deleteAuthor()"/>
Finally if you manage to configure Spring Security ACL (I won't even try to show this here, it would require a complete article), you could use domain object security and secure each author instance separately (8 is the Spring Security ACL bit mask for 'delete') :
<s:Button label="Delete" enabled="{identity.hasPermission(lAuthors.selectedItem, '8')}" click="deleteAuthor()"/>
As with validation, you can see that most of the value of GraniteDS resides in the Flex libraries that it provides. This level of integration cannot be achieved with a server-only framework.
Data push
The last thing I will demonstrate is the ability to dispatch updates on an entity to all connected clients. This can be useful with frequently updated data, so every user has an up-to-date view without having to click on some 'Refresh' button. Enabling this involves a few steps :
- Define a GraniteDS messaging topic in the Spring configuration. The archetype already defines a topic named welcomeTopic, we can just reuse it and for example rename it to authorTopic.
- Add the DataPublishListener entity listener to our entities Author and Book. In our example they already extend the AbstractEntity class provided by the archetype, so it's already the case (see the sketch after this list).
- Configure a client DataObserver for the topic in the main mxml, and bind its subscription/unsubscription to the login/logout events so publishing can depend on security :
Spring.getInstance().addComponent("authorTopic", DataObserver);Spring.getInstance().addEventObserver("org.granite.tide.login", "authorTopic", "subscribe");Spring.getInstance().addEventObserver("org.granite.tide.logout", "authorTopic", "unsubscribe");
- Annotate all service interfaces (or all implementations) with @DataEnabled, even if they are read-only :
@RemoteDestination
@DataEnabled(topic="authorTopic", params=ObserveAllPublishAll.class, publishMode=PublishMode.ON_SUCCESS)
public interface AuthorService {
    ...
}
The default ObserveAllPublishAll class comes from the archetype and defines a publishing policy where everyone receives everything. Alternative dispatching strategies can be defined if it's necessary to restrict the set of recipients depending on the data itself for security, functional or performance reasons.
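For reference, step 2 usually boils down to a standard JPA entity listener on a mapped superclass. The sketch below is an assumption of what the archetype's AbstractEntity can look like, not its exact source; the GraniteDS import and the id mapping depend on the version you use:

import java.io.Serializable;
import javax.persistence.EntityListeners;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.MappedSuperclass;
// plus the GraniteDS DataPublishListener import for your version

@MappedSuperclass
@EntityListeners(DataPublishListener.class)
public abstract class AbstractEntity implements Serializable {

    @Id @GeneratedValue
    private Long id;

    public Long getId() {
        return id;
    }
}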
Now you can rebuild and restart and connect from two different machines or two different browsers, and check that changes made on an author in one browser are propagated to the other.
New and deleted authors are not propagated automatically, we have to handle these two cases manually. This is not very hard, we just have to observe some built-in events dispatched by Tide :
[Observer("org.granite.tide.data.persist.Author")]public function persistAuthorHandler(author:Author):void { authors.addItem(author);}[Observer("org.granite.tide.data.remove.Author")]public function removeAuthorHandler(author:Author):void { var idx:int = authors.getItemIndex(author); if (idx >= 0) authors.removeItemAt(idx);}
Fine, the list is now correctly refreshed with new and deleted authors, but you will notice that new authors are added twice on the current user application. Indeed we add it from both the data observer and the result handler. We can safely keep only the global observer since it will always be called and remove the addItem from the handler. Such observers can be put in many views at the same time and all of them will be updated.
Conclusion
That was a long article, but at least I hope it gave you a good idea about the client+server RIA platform concept of GraniteDS. Applications can be written faster with a lot less code, and still have a clean architecture with clearly separated layers.
WebKit Bugzilla
Explicitly use icu namespace for ports building with U_USING_ICU_NAMESPACE=0
Created attachment 87991 [details]
Patch
Created attachment 87993 [details]
Patch
Comment on attachment 87993 [details]
Patch
View in context:
> Source/WebCore/ChangeLog:12
> + Expclicitly use icu namespace for ports building with U_USING_ICU_NAMESPACE=0
> +
> +
> + * platform/text/LocalizedNumberICU.cpp:
> + (WebCore::createFormatterForCurrentLocale):
> + (WebCore::numberFormatter):
> + (WebCore::parseLocalizedNumber):
> + (WebCore::formatLocalizedNumber):
This ChangeLog doesn't tell me why you're making this change. I have no idea.
Created attachment 87994 [details]
Patch
Comment on attachment 87994 [details]
Patch
Clearing flags on attachment: 87994
Committed r82787: <>
All reviewed patches have been landed. Closing bug.
*** Bug 57714 has been marked as a duplicate of this bug. ***
We don't normally use namespace prefixes in .cpp files, we put using directives in the beginning (e.g. "using namespace std;").
whoopsies:
Hey, I'm making a brick breaker game and I need some help.
I need to know how to do the brick class..
any help ?
A starting point: define what data the brick needs to keep and what methods it needs to maintain that data.
I wrote this class.. can you give me more leads about what to do ?
Bricks:
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Paint;
import java.awt.Rectangle;
import java.awt.event.KeyEvent;
import java.awt.event.KeyListener;

public class Brick {

    private int x;
    private int y;
    protected Color color;

    public Brick(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public void setColor(Color newColor) {
        color = newColor;
    }

    public void Paint(Graphics g) {
        g.fillRect(x, y, 35, 45);
        g.setColor(color);
    }
}
What is a Brick object supposed to do? Does it need setter methods?
Does it have a state?
BTW Java naming standards says methods should start with lowercase letters.
I need to paint the bricks at the top of the screen (4-5 rows of bricks).
What problems are you having?
I cant find a way to do that and for some reason I cant change the color of the bricks..
You'll have to post some code that compiles, executes and shows the problem.
I don't know what to do, I tried to paint it with matrix but I cant..
look what I tried:
I wrote this in the main page
public void paintBricks() {
    Graphics g;
    int i = 80, j = 50, k, l;
    Brick[][] brickArr = new Brick[20][3];
    for (k = 0; k <= 20; k++) {
        for (l = 0; l <= 3; l++) {
            brickArr[k][l] = aBrick.Paint(g, i, j);
        }
    }
}
Sorry, without a complete program I can't compile, execute and test the code to see what the problem is.
Ok , this is what I've done, I need to paint 4-5 lines of bricks in the top of the screen.
My classes so far are:
1. Ball:
//
// general purpose reusable bouncing ball abstraction
// Described in Chapter 4 of
// Understanding Object-Oriented Programming with Java
// by Timothy A Budd
// Published by Addison-Wesley
//
// see // for further information
//
import java.awt.*;

public class Ball {

    protected Rectangle location;
    protected double dx;
    protected double dy;
    protected Color color;

    public Ball(int x, int y, int r) {
        location = new Rectangle(x - r, y - r, 2 * r, 2 * r);
        dx = 0;
        dy = 0;
        color = Color.blue;
    }

    // functions that set attributes
    public void setColor(Color newColor) {
        color = newColor;
    }

    public void setMotion(double ndx, double ndy) {
        dx = ndx;
        dy = ndy;
    }

    // functions that access attributes of ball
    public int radius() {
        return location.width / 2;
    }

    public int x() {
        return location.x + radius();
    }

    public int y() {
        return location.y + radius();
    }

    public double xMotion() {
        return dx;
    }

    public double yMotion() {
        return dy;
    }

    public Rectangle region() {
        return location;
    }

    // functions that change attributes of ball
    public void moveTo(int x, int y) {
        location.setLocation(x, y);
    }

    public void move() {
        location.translate((int) dx, (int) dy);
    }

    public void paint(Graphics g) {
        g.setColor(color);
        g.fillOval(location.x, location.y, location.width, location.height);
    }
}
2. Matka:
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Paint;
import java.awt.Rectangle;
import java.awt.event.KeyEvent;
import java.awt.event.KeyListener;

public class matka extends Thread {

    private int x;
    private int y;
    protected Color color;

    public matka(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public void setColor(Color newColor) {
        color = newColor;
    }

    public void Paint(Graphics g) {
        //Rectangle r = new Rectangle();
        g.fillRect(x, y, 200, 20);
        g.setColor(color);
    }

    public void moveL(int dx) {
        this.x += dx;
    }

    public void moveR(int dx) {
        this.x -= dx;
    }
}
3. BallWorld1 (main page):
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import java.awt.event.KeyEvent;
import java.awt.event.KeyListener;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.Random;

public class BallWorld1 extends Frame {

    private Ball aBall;
    private matka aMatka;
    private Brick aBrick;

    public static void main(String[] args) {
        BallWorld1 world = new BallWorld1(Color.red);
        world.show();
    }

    private BallWorld1(Color ballColor) {
        // constructor for new ball world
        // resize our frame
        setSize(getHeight(), getWidth());
        setTitle("Ball World");
        addMouseListener(new BallListener());
        addKeyListener(new MatkaListener());
        aMatka = new matka(570, 680);
        //aBrick = new Brick(80,50);
        // initialize object data field
        aBall = new Ball(10, 15, 5);
        aMatka.setColor(ballColor);
        aBall.setColor(ballColor);
        aBall.setMotion(6.0, 10.0);
        addWindowListener(new WindowAdapter() {
            public void windowClosing(WindowEvent e) {
                System.exit(0);
            }
        });
    }

    public void paint(Graphics g) {
        //aBrick.Paint(g);
        aMatka.Paint(g);
        aBall.paint(g);
        aBall.move();
        if ((aBall.x() < 0) || (aBall.x() > getSize().width))
            aBall.setMotion(-aBall.xMotion(), aBall.yMotion());
        if ((aBall.y() < 0) || (aBall.y() > getSize().height))
            aBall.setMotion(aBall.xMotion(), -aBall.yMotion());
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {}
        repaint();
    }

    class BallListener extends MouseAdapter {
        public void mousePressed(MouseEvent e) {
            aBall.moveTo(e.getX(), e.getY());
        }
    }

    class MatkaListener implements KeyListener {
        public void keyPressed(KeyEvent e) {
            switch (e.getKeyCode()) {
                case KeyEvent.VK_RIGHT:
                    aMatka.moveR(-10);
                    break;
                case KeyEvent.VK_LEFT:
                    aMatka.moveL(-10);
                    break;
            }
        }

        @Override
        public void keyReleased(KeyEvent arg0) {
            // TODO Auto-generated method stub
        }

        @Override
        public void keyTyped(KeyEvent arg0) {
            // TODO Auto-generated method stub
        }
    }
}
4. Brick
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Paint;
import java.awt.Rectangle;
import java.awt.event.KeyEvent;
import java.awt.event.KeyListener;

public class Brick {

    private int x;
    private int y;

    public Brick(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public void Paint(Graphics g, int dx, int dy) {
        this.x = dx;
        this.y = dy;
        g.fillRect(x, y, 50, 60);
        g.setColor(Color.RED);
    }
}
Quote: "I need to paint 4-5 lines of bricks in the top of the screen."
Which class and method is supposed to do that?
the class Brick and the method paint.
I'm not sure how the Brick class would create instances of itself and put them at the top of the screen in rows.
The paint() method defintely should NOT create the bricks. It should call each of them so they can paint themselves.
Some method needs to create a list of brick objects that the paint() method can use for painting them. Where should that be done?
This code is using the OLD style GUI: AWT not Swing. There will be some minor problems with that.
The program should use a Timer for controlling the calls to repaint(). It should NOT use sleep() in the paint() method or call repaint() in the paint() method.
I need to paint them in the Ballworld1
You can't paint them until they exist. Where will they be created and saved so they can be painted?
I thought about a matrix [20][4] 4 rows 20 each row
Ok, that could work. Usually its row,column: [4][20]
But how I suppost to do that ? Can you give me some leads ?
And thank you for all your help
Use nested loops to assign values to the two-dimensional array, for example:
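A minimal sketch of that idea, using the first version of the Brick class (the one whose constructor takes the position and whose Paint method only takes a Graphics object); the row/column counts and spacing values are just examples:

// created once, for example in the BallWorld1 constructor
Brick[][] bricks = new Brick[4][20];        // 4 rows, 20 bricks per row
for (int row = 0; row < 4; row++) {
    for (int col = 0; col < 20; col++) {
        bricks[row][col] = new Brick(10 + col * 55, 40 + row * 65);
        bricks[row][col].setColor(Color.RED);
    }
}

// then in paint(Graphics g), each brick paints itself
for (int row = 0; row < 4; row++) {
    for (int col = 0; col < 20; col++) {
        bricks[row][col].Paint(g);
    }
}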
The Next Level of Code Analysis using ‘NDepend’: An Interview with Patrick Smacchia
Posted by: Suprotim Agarwal, on 2/6/2009, in Category: Product Articles. Views: 69541
Abstract:
In this post, Suprotim Agarwal (ASP.NET MVP) interviews Patrick Smacchia, a C# MVP about the NDepend product.
NDepend is a .NET code analyzer tool on steroids. This tool analyzes your .NET assemblies and the compiled source code, with a variety of metrics. I had been using FxCop till date for code analysis, before I was introduced to NDepend by its creator, Patrick Smacchia, a C# MVP. Something that impressed me the most was CQL and code metrics, which can be used to examine complexity and dependencies in your code and improve the overall quality of your code. This tool will in fact change the way you glance at code!
In this article, we will discuss the NDepend product with Patrick and understand how this product takes code analysis and code metrics to the next level.
Suprotim:
Tell us more about yourself and NDepend. How did the product evolve?
Patrick:
My name is Patrick Smacchia, I am French, 33 years old, and I basically write software since I learnt to write thanks to my father who is also a programmer for more than 35 years by now. The NDepend project started in April 2004. At that time I was consulting and I wrote NDepend as a quick project to demystify the massive and extremely complex code base of a client. I was interested to get basic metrics such as NbLinesOfCode but also Robert C Martin metrics about abstractness vs. stability. This first NDepend version was pretty useful and I released it OSS online straight. The tool then became pretty popular, also partly because at that time the .NET community was lacking development tools. I did the math and if a few fraction of users would buy a few hundred US$' professional license, then I could start making a living on it and invest all my time in development. After a whole year of very hard work the version 2.0 RTM went live in February 2007 and hopefully the business model wasn't flawed. I summarized all this in this blog post and I hope it can foster developers mates to innovate and start their own ISV. Since then, the company grew and we worked a lot on both features and usability of our flagship product NDepend.
Suprotim:
How does NDepend fit in the Software Development Process? What is CQL and how does it relate to the product?
Patrick:
NDepend is all about understanding and controlling what really happens in your development shop. The idea is to fetch relevant information from the code itself. The tool has 2 sides: the Visual side, where the VisualNDepend UI lets you analyze your code base live, with graph, matrix and metrics panels. These visual panels with dependency graph, matrix and metric treemap help a lot when churning through the code. Some screenshots can be found here.
Fig 1: Selection by Metrics and display the set of methods selected on the Metrics View
Fig 2: Using the Dependencies Structure Matrix to understand coupling between assemblies
Fig 3: Selecting the list of methods where code was changed between 2 versions and visualizing source code modifications with Windiff
Fig 4: The Visual Studio Add-In
VisualNDepend also comes with the Code Query Language (CQL). CQL is really at the heart of the product. CQL is to your code what SQL is to your data. You can ask, for example, which methods have more than 30 lines of code by writing the CQL query:

METHODS WHERE NbLinesOfCode > 30
And if you want to be warned of such big methods you can transform this CQL query into a CQL rule this way:
WARN IF Count > 0 IN METHODS WHERE NbLinesOfCode > 30
CQL covers a wide range of possibilities, from 82 code metrics to dependencies, encapsulation, mutability, code changes/diff/evolution and test coverage. For example, asking for methods that have been changed recently and have not been covered by tests is as easy as writing:

METHODS WHERE CodeWasChanged AND PercentageCoverage < 100
NDepend comes with more than 200 rules per default and you can modify existing rules and add new rules at whim to fit your needs.
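For instance, reusing only the keywords and metrics quoted above, a team could turn that last query into an active rule of its own (purely illustrative):

WARN IF Count > 0 IN METHODS WHERE CodeWasChanged AND PercentageCoverage < 100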
The other side of NDepend is its integration into the Continuous Integration/Build server process. The idea is to get a daily and highly customized HTML report where you can see at a glance if some CQL rules have been violated and to get also some metrics about quality or structuring. This is how one can master what is really happening in its development shop
Suprotim:
Many teams have assemblies built in both C# and VB.NET. How does NDepend analyze such applications built using multiple languages?
Patrick:
NDepend takes as input any .NET assemblies, no matter the language they were written in. 95% of the information gathered by NDepend comes from the assemblies themselves. NDepend also parses source files, but only a few metrics are gathered from source: Source Code Cyclomatic Complexity and Comment metrics. So far only C# source files are parsed, and we plan VB.NET source file parsing for the next months. More details about all this can be found here.
Suprotim:
How would you compare NDepend with other popular tools like ReSharper and FxCop?
Patrick:
I am a big fan of Resharper, and I like to think that what Resharper does locally, at the level of a method, NDepend does at large scale, on types, namespaces and assemblies. Resharper typically warns about a flawed logical if expression, where NDepend typically warns about a component that is too big or too entangled.
FxCop comes with more than 200 pre-defined rules. FxCop rules mainly inform you how to use the .NET framework, and you have a few quality rules also. NDepend CQL rules are more focused on your code itself, although 30 pre-defined CQL rules cover some .NET Framework usage good practices. NDepend will typically let you know that some abstractions have to be created to avoid over-coupling, while FxCop rules will typically let you know that you should use a StringBuilder to concatenate strings inside a loop instead of using the '+' operator.
Suprotim:
How does NDepend help you in Agile projects where most of the efforts are on delivering code?
Patrick:
Agile development is all about rationalizing the work to avoid repetitive burden tasks. Agile fosters efficient communication between developers and continuous correctness check with automatic tests. The NDepend’s CQL language lets developers express their intentions in a formal way. For example developers can create a specific rule to avoid that UI code uses directly DB related code. But as a big bonus CQL rules are active, I mean you’ll know automatically when a rule will be violated. So not only CQL rules improve the communication between developers but it also prevents code erosion by discarding the burden of manually check that developers respect intentions and quality effort put into the code.
Suprotim:
Tell us about a recent experience with NDepend in a real-world environment? Can we see a case study?
Patrick:
There are various NDepend user profiles, from the independent consultant to the massive team cluster composed of hundreds of developers and millions of lines of code. Part of the NDepend challenge is to let users cope with very large code bases. To do so we constantly invest in the performance of our code.
We recently had feedback from one of these massive teams and I wrote a blog post about their experience here: Using NDepend on large project, a success story. I also had the opportunity to recently help a 60-developer team get cranking with NDepend concepts and I wrote about this experience here. Basically, the more complex your code base is, the more you need tooling such as NDepend; especially if you wish to spend your resources efficiently on large scale refactoring to transform a large legacy made of messy code into a sane development shop.
NDepend can also be useful on a smaller code base. For example, you can read what NDepend has to say by analyzing the 15,000 lines of code of the NUnit project here:
Suprotim:
Can dotnetcurry.com viewers see a Live Demo of this tool? Do we have an NDepend forum to discuss this tool?
Patrick:
On the
of our site we link a dozens of short 3mn screencasts to demonstrate how NDepend handles various scenarios, like comparison of 2 snapshots of a code base, dependency browsing or quality metrics checking.
Suprotim:
The NDepend product is loaded with features and metrics. What are your plans in 2009?
Patrick:
In 2009 Q2, NDepend's two brothers, XDepend (for Java) and CppDepend (for C++), will see the light of day. A public beta of XDepend is already available on the XDepend website.
NDepend is often qualified as a polished tool by its users and in 2009 we wish to continue investing in even more usability. With System.Reflection everyone can write a quick static analyzer of .NET code in a few days such as what was NDepend in its early days. What we learnt the hard way is that usability can only comes at the cost of years of solid development and IMHO usability is what makes the difference. Part of our 2009 usability plans is to integrate nicely Visual NDepend panels into VisualStudio but one can expect also dozens of tricky featurettes.
We also have several major innovative features in the pipe but for some confidential reasons I cannot unveil them by now. For the benefit of all .NET developers (our team included), the .NET tooling scene is very active these days and we will continue investing into features that so far, no other tools have.
I hope you enjoyed this interview with Patrick. Here’s some simple advice to all of you:
Understand and list down the metrics that are important to your product/project. Then strive towards achieving them with a tool like NDepend that will keep you on track!
If you have any questions for Patrick, you can use the Comments section below.
Comment posted by Wayne on Saturday, February 7, 2009 12:41 PM
Another difference is that FxCop comes free!
Comment posted by Deepali Kamatkar on Thursday, August 20, 2009 3:31 AM
Gr8 article..
Came to know abt an excellent product!!
Hi,
just discovered Sublime Text 2. It looks amazing. But I have some trouble running compiled c++ .h files. I just tried compiling and running "hello world":
#include <iostream>
using namespace std;
int main()
{
cout << "Hello World";
return 0;
}
If I save it as .c and compile and run it, it works fine. But if I save it as .h and try to compile and run I get this error
bash: /Users/roger/Desktop/tester/helloworld: cannot execute binary file
[Finished in 0.9s with exit code 126]
I'm using the default c++ build system. I'm on OSX 10.8.4. Any clues on what may I be doing wrong?
Cheers.
duh, I found the solution... I was putting the main() in the .h file instead of cpp. Sorry about that. | https://forum.sublimetext.com/t/cannot-execute-compiled-binary-from-a-c-h-file/10512/2 | CC-MAIN-2016-07 | en | refinedweb |
Hi guys, I need some help creating a little program.
When the mouse is in the box the colour has to change to black, and when the mouse is outside the box the colour has to change to red. This has to run in a loop.
I started it off something like this:
import element.*;
import java.awt.Color;
public class age
{
public static void main(String args[])
{
DrawingWindow d = new DrawingWindow();
Rect r = new Rect(25,05,18,18);
d.setForeground(Color.red);
d.fill(r);
}
}
This code only creates the box; I need help with the rest: changing the colour when the mouse is inside and outside the box.
To detect mouse movements etc you will need to use a MouseListener of some kind.
There have been several sample programs with listeners on the forum, so do a search for the different mouse listeners and you should get lots of examples of code to look at.
You'll find that the MouseListener class has MOUSE_ENTERED and MOUSE_EXITED events which allow you to capture when the mouse has entered or left your component. Just capture the event and then fire the appropriate color change.
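For illustration, here is a small self-contained Swing sketch of that approach. It uses the standard AWT/Swing classes rather than the element library from the original post, so the class name and sizes here are assumptions, not that library's API:

import java.awt.Color;
import java.awt.Dimension;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import javax.swing.JFrame;
import javax.swing.JPanel;

public class BoxPanel extends JPanel {
    public BoxPanel() {
        setPreferredSize(new Dimension(180, 100));
        setBackground(Color.RED);                 // red while the mouse is outside
        addMouseListener(new MouseAdapter() {
            @Override public void mouseEntered(MouseEvent e) {
                setBackground(Color.BLACK);       // black while the mouse is inside
            }
            @Override public void mouseExited(MouseEvent e) {
                setBackground(Color.RED);
            }
        });
    }

    public static void main(String[] args) {
        JFrame frame = new JFrame("Mouse in the box");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.add(new BoxPanel());
        frame.pack();
        frame.setVisible(true);
    }
}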
Source code for pyramid.request
from zope.interface import implements from zope.interface.interface import InterfaceClass from webob import Request as WebobRequest from pyramid.interfaces import IRequest from pyramid.interfaces import ISessionFactory from pyramid.exceptions import ConfigurationError from pyramid.decorator import reify from pyramid.url import resource_url from pyramid.url import route_url from pyramid.url import static_url from pyramid.url import route_path class TemplateContext(object): pass[docs]class Request(WebobRequest): """. """ implements(IRequest) response_callbacks = () finished_callbacks = () exception = None matchdict = None matched_route = None @reify def tmpl_context(self): """ Template context (for Pylons apps) """ return TemplateContext()def route_request_iface(name, bases=()): iface = InterfaceClass('%s_IRequest' % name, bases=bases) # for exception view lookups iface.combined = InterfaceClass('%s_combined_IRequest' % name, bases=(iface, IRequest)) return iface def add_global_response_headers(request, headerlist): def add_headers(request, response): for k, v in headerlist: response.headerlist.append((k, v)) request.add_response_callback(add_headers)[docs] :class:`pyramid.events.NewResponse` event is sent. Errors raised by callbacks are not handled specially. They will be propagated to the caller of the :app:`Pyramid` router application. See also: :ref:`using_response_callbacks`. """ callbacks = self.response_callbacks if not callbacks: callbacks = [] callbacks.append(callback) self.response_callbacks = callbacksdef _process_response_callbacks(self, response): callbacks = self.response_callbacks while callbacks: callback = callbacks.pop(0) callback(self, response)[docs] also: :ref:`using_finished_callbacks`. """ callbacks = self.finished_callbacks if not callbacks: callbacks = [] callbacks.append(callback) self.finished_callbacks = callbacksdef _process_finished_callbacks(self): callbacks = self.finished_callbacks while callbacks: callback = callbacks.pop(0) callback(self) ConfigurationError( 'No session factory registered ' '(see the Sessions chapter of the Pyramid documentation)') return factory(self)[docs] def route_url(self, route_name, *elements, **kw): """ Return the URL for the route named ``route_name``, using ``*elements`` and ``**kw`` as modifiers. This is a convenience method. The result of calling :meth:`pyramid.request.Request.route_url` is the same as calling :func:`pyramid.url.route_url` with an explicit ``request`` parameter. The :meth:`pyramid.request.Request.route_url` method calls the :func:`pyramid.url.route_url` function using the Request object as the ``request`` argument. The ``route_name``, ``*elements`` and ``*kw`` arguments passed to :meth:`pyramid.request.Request.route_url` are passed through to :func:`pyramid.url.route_url` unchanged and its result is returned. This call to :meth:`pyramid.request.Request.route_url`:: request.route_url('route_name') Is completely equivalent to calling :func:`pyramid.url.route_url` like this:: from pyramid.url import route_url route_url('route_name', request) """ return route_url(route_name, self, *elements, **kw)[docs] def resource_url(self, resource, *elements, **kw): """ Return the URL for the :term:`resource` object named ``resource``, using ``*elements`` and ``**kw`` as modifiers. This is a convenience method. The result of calling :meth:`pyramid.request.Request.resource_url` is the same as calling :func:`pyramid.url.resource_url` with an explicit ``request`` parameter. 
The :meth:`pyramid.request.Request.resource_url` method calls the :func:`pyramid.url.resource_url` function using the Request object as the ``request`` argument. The ``resource``, ``*elements`` and ``*kw`` arguments passed to :meth:`pyramid.request.Request.resource_url` are passed through to :func:`pyramid.url.resource_url` unchanged and its result is returned. This call to :meth:`pyramid.request.Request.resource_url`:: request.resource_url(myresource) Is completely equivalent to calling :func:`pyramid.url.resource_url` like this:: from pyramid.url import resource_url resource_url(resource, request) .. note:: For backwards compatibility purposes, this method can also be called as :meth:`pyramid.request.Request.model_url`. """ return resource_url(resource, self, *elements, **kw)model_url = resource_url # b/w compat forever[docs] def static_url(self, path, **kw): """ Generates a fully qualified URL for a static :term:`asset`. The asset must live within a location defined via the :meth:`pyramid.config.Configurator.add_static_view` :term:`configuration declaration` directive (see :ref:`static_assets_section`). This is a convenience method. The result of calling :meth:`pyramid.request.Request.static_url` is the same as calling :func:`pyramid.url.static_url` with an explicit ``request`` parameter. The :meth:`pyramid.request.Request.static_url` method calls the :func:`pyramid.url.static_url` function using the Request object as the ``request`` argument. The ``*kw`` arguments passed to :meth:`pyramid.request.Request.static_url` are passed through to :func:`pyramid.url.static_url` unchanged and its result is returned. This call to :meth:`pyramid.request.Request.static_url`:: request.static_url('mypackage:static/foo.css') Is completely equivalent to calling :func:`pyramid.url.static_url` like this:: from pyramid.url import static_url static_url('mypackage:static/foo.css', request) See :func:`pyramid.url.static_url` for more information """ return static_url(path, self, **kw)[docs] def route_path(self, route_name, *elements, **kw): """Generates a path (aka a 'relative URL', a URL minus the host, scheme, and port) for a named :app:`Pyramid` :term:`route configuration`. This is a convenience method. The result of calling :meth:`pyramid.request.Request.route_path` is the same as calling :func:`pyramid.url.route_path` with an explicit ``request`` parameter. This method accepts the same arguments as :meth:`pyramid.request.Request.route_url` and performs the same duty. It just omits the host, port, and scheme information in the return value; only the path, query parameters, and anchor data are present in the returned string. The :meth:`pyramid.request.Request.route_path` method calls the :func:`pyramid.url.route_path` function using the Request object as the ``request`` argument. The ``*elements`` and ``*kw`` arguments passed to :meth:`pyramid.request.Request.route_path` are passed through to :func:`pyramid.url.route_path` unchanged and its result is returned. 
This call to :meth:`pyramid.request.Request.route_path`:: request.route_path('foobar') Is completely equivalent to calling :func:`pyramid.url.route_path` like this:: from pyramid.url import route_path route_path('foobar', request) See :func:`pyramid.url.route_path` for more information """ return route_path(route_name, self, *elements, **kw) # override default WebOb "environ['adhoc_attr']" mutation behavior__getattr__ = object.__getattribute__ __setattr__ = object.__setattr__ __delattr__ = object.__delattr__ # b/c dict interface for "root factory" code that expects a bare # environ. Explicitly omitted dict methods: clear (unnecessary), # copy (implemented by WebOb), fromkeys (unnecessary) def __contains__(self, k): return self.environ.__contains__(k) def __delitem__(self, k): return self.environ.__delitem__(k) def __getitem__(self, k): return self.environ.__getitem__(k) def __iter__(self): return iter(self.environ) def __setitem__(self, k, v): self.environ[k] = v def get(self, k, default=None): return self.environ.get(k, default) def has_key(self, k): return k in self.environ def items(self): return self.environ.items() def iteritems(self): return self.environ.iteritems() def iterkeys(self): return self.environ.iterkeys() def itervalues(self): return self.environ.itervalues() def keys(self): return self.environ.keys() def pop(self, k): return self.environ.pop(k) def popitem(self): return self.environ.popitem() def setdefault(self, v, default): return self.environ.setdefault(v, default) def update(self, v, **kw): return self.environ.update(v, **kw) def values(self): return self.environ.values() | http://docs.pylonsproject.org/projects/pyramid/en/1.0-branch/_modules/pyramid/request.html | CC-MAIN-2016-07 | en | refinedweb |
in reply to Re: Re: Can I timeout an accept call?, in thread Can I timeout an accept call?
Unfortunately, non-blocking IO isn't as simple as blocking IO. I'm not sure why you say that your sockets are "defaulted to non-blocking" simply by virtue of using IO::Socket::INET. To my knowledge, sockets created from this module are not automatically placed in a non-blocking state unless you explicitly ask for them that way.
To get around the fact that many of your non-blocking socket calls will immediately return (frequently with 'undef'), you should use IO::Select to detect when a socket has something for you, or when a socket is available to accept data. It's generally safe then to read from or write to a socket such that you won't block, and won't get 'undef' (unless it really means it).
Unfortunately this logic tends to add a lot of additional complexity to your code. Fortunately, much of this has already been encapsulated in some Perl modules, POE being the most noteworthy.
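To make that concrete for the original question, here is a small sketch of the IO::Select approach; the port number and timeout are just examples:

use IO::Socket::INET;
use IO::Select;

my $listener = IO::Socket::INET->new(
    LocalPort => 7070,
    Listen    => 5,
    Reuse     => 1,
) or die "can't listen: $!";

my $select = IO::Select->new($listener);

# wait at most 10 seconds for an incoming connection
my @ready = $select->can_read(10);
if (@ready) {
    my $client = $listener->accept;
    print $client "hello\n";
    close $client;
} else {
    warn "no connection within 10 seconds\n";
}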
JScript .NET, Part VIII: Consuming IsPrime from ASP.NET: A Final Word - Doc JavaScript
JScript .NET, Part VIII: Consuming IsPrime from ASP.NET
A Final Word
In this column, we showed you how to consume a specific Web service (IsPrime) from ASP.NET. We went through the flow of generating the proxy for the IsPrime Web service, its class, namespace, and .dll file. We taught you how to write the ASP.NET consumer, with its two ASP.NET controls, ASP:TEXTBOX and ASP:BUTTON. We presented the JScript .NET portion of the page, how to call the Web service from within the consumer, and what the page looks like in the browser.
In this column you have learned:
-).
Produced by Yehuda Shiran and Tomer Shiran
Created: July 15, 2002
Revised: July 15, 2002
URL: http://www.webreference.com/js/column114/7.html
How to Create an Event for a C Sharp Class
Many C# programmers use events in other classes by attaching event handlers to them but have you ever wanted to implement your own event(s) in classes that you develop? This is a systematic straightforward guide to creating your own events without worrying about forgetting anything.
Steps
- 1Create the Event Arguments Class:
- Decide what you want to communicate with your event subscribers (other programmers who will attach event handlers to your event). For example, if your event is to notify developers when a value changes, you might want to tell them the old value and the new one. If you have nothing to communicate to subscribers, use the System.EventArgs class and skip this step.
- Create a public class named EventNameEventArgs where EventName is the name of your event
- Inherit the EventNameEventArgs class from System.EventArgs
- Create a protected field for each piece of data you want to communicate with your subscribers. For example, in an event that will notify developers of a change in a string value, you might want to tell them the old and the new strings so the fields will look like: protected string oldVal, newVal;
- Create a public property for each field you created in 1.4 that has only a get{ return fieldName;} method (where fieldName is the name of the field you created in 1.4
- Create a private empty constructor for the class with no implementation. The constructor should look like: private EventNameEventArgs(){}
- Create a public constructor for the class that takes as many arguments as there is fields/data. For example, in a string value change event, the constructor will look like: public EventNameEventArgs(string oldValue, string newValue){oldVal = oldValue; newVal = newValue;}
- 2. Declare the event delegate. If you did not create an event arguments class because there is no data to communicate with subscribers, use the System.EventHandler delegate and skip this step. The delegate declaration should look like: public delegate void EventNameHandler(object sender, EventNameEventArgs e);
- 3. Declare the event itself in the containing class: use the event handler delegate you declared/decided in 2 as the event type. The declaration should look like: public event EventNameHandler EventName;
- 4. Declare the event-firing method - a protected virtual method that has exactly the following declaration: protected virtual void OnEventName(EventNameEventArgs e){ if(EventName != null) { EventName(this, e); } } Use the event arguments class you decided to use in 1
- 5. Call the event-firing method you declared in 4 whenever the event occurs. This is the hardest part. You should know when the event you are creating will fire (what areas in your code cause the event to occur) and call the method with the appropriate event arguments class instance. For example, in the string value change event, you must see what code can cause this value to change, save its old value before the change, allow the code to change the value, create an event arguments object with the old and new values passed to the constructor and pass the object to the event-firing method. (A complete example putting these steps together is shown below.)
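Putting the five steps above together, here is one possible complete example. It uses a hypothetical string value change event on a class called Setting; the class and property names are illustrative, only the pattern itself follows the steps:

using System;

public class ValueChangedEventArgs : EventArgs
{
    protected string oldVal, newVal;

    public string OldValue { get { return oldVal; } }
    public string NewValue { get { return newVal; } }

    private ValueChangedEventArgs() { }

    public ValueChangedEventArgs(string oldValue, string newValue)
    {
        oldVal = oldValue;
        newVal = newValue;
    }
}

public delegate void ValueChangedHandler(object sender, ValueChangedEventArgs e);

public class Setting
{
    private string val = "";

    public event ValueChangedHandler ValueChanged;

    protected virtual void OnValueChanged(ValueChangedEventArgs e)
    {
        if (ValueChanged != null)
        {
            ValueChanged(this, e);
        }
    }

    public string Value
    {
        get { return val; }
        set
        {
            if (val != value)
            {
                string old = val;   // save the old value before the change
                val = value;        // allow the code to change the value
                OnValueChanged(new ValueChangedEventArgs(old, value));
            }
        }
    }
}

A subscriber then simply attaches a handler whose signature matches ValueChangedHandler (object sender, ValueChangedEventArgs e) and will receive the old and new strings whenever Value actually changes.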
Tips
- Be committed to the naming convention stated in this guide, it is a de facto standard and most .NET/Mono developers use it.
- Do not over communicate to your subscribers. In other words, do not transfer data that is not related to the event.
- Choose the name of your event carefully and clearly. Event names like "ValPsd" instead of "ValuePassed" is not encouraged.
- Usually, the accessibilities used in this article is the case. However, you can change the accessibility of any declaration as long as it does not render the element changed unusable by other elements of the event creation process.
- Examine all places in your code where the event might occur. Sometimes, more than one piece of code causes the event to fire.
- Watch for any changes you make to your class after you declare the event. See if the change affects the triggering/firing of the event.
Warnings
- If you are adding the event to a struct instead of a class, take notice of the following changes:
- Use "private" instead of "protected virtual" when declaring the event-firing method in 4.
- In the constructor of the struct that declares the event, you must initialize the event itself or you will get a compile error. Initialize the event either by assigning it an event handler delegate that wraps a real method, or by setting it to null. The initialization code should look like: EventName = new EventNameHandler(SomeMethod); or EventName = null;
Things You'll Need
- A .NET framework installed (either MS .NET framework on Windows or Mono on other operating systems).
- A C# compiler (the csc tool in the MS .NET SDK, cmcs in the Mono framework, or the compiler included in .NET IDEs such as Visual Studio 2005/2008 for Windows or MonoDev for Linux).
- The code of the class you wish to add the event to.
- Some code editing tool (Notepad is enough, but Visual Studio, MonoDev, Notepad++ or any other editor might make development and code writing easier).
40 Reader Comments
Isn't Flash already the standard? Or am I missing something?
At any rate, Flash as a format hasn't been decided by anyone except Macromedia/Adobe what have you; at least HTML5 (which I wish was XML, but never mind), CSS3 are being put together by the people that would want to either produce the content or implement renderers. Notably there are many working open implementations of HTML and CSS renders and around 1 proprietary and 0.3 of an open implementation of Flash.
I'm still unhappy that MobileSafari continues to have performance problems as regards SVG and CSS animations.
Edit: Oh, and the weird layering that you get if you embed objects in pages. Again only in MobileSafari, not Safari or WebKit nightlies.
Yeah, it's the de facto standard.
The only way this will be useful is if/when it becomes adopted into the W3C spec. Seeing as Dave Hyatt and Ian Hickson are editors on the HTML5 spec, I can see that there's probably a lot of influential power there to at least get it through the starting gate of the process. I'm only postulating here, but I imagine the problem with this could probably be two-fold:
1. Other browser developers may cry foul over Apple forcing its already-implemented proprietary specs into the official spec. And using its open-source project as leverage for controlling development of the spec.
2. Performance-related items may be "debated out" to make a generic spec that all browser developers can implement. A sort of "decision by consensus" where everyone loses.
I have no idea what is involved with developing a spec such as this, let alone a W3C spec, so I'm only left to imagine these as potential problems. That said, I'd love to see these put into some version of the CSS spec.
Ah, I haven't been keeping up with Flash for several years now.
But, the sample animations shown in this post only required the addition of some CSS... not a whole other authoring environment + skill set. So I still think the advantage goes to the CSS Transforms. But Apple needs to unleash them on the desktop so we can really see where devs can take it. It's already pretty impressive to me on MobileSafari.
Agreed, but Canvas was originally developed at Apple to enable complex graphics for Dashboard widgets... now it's part of the HTML5 spec. I don't think there is any inherently anti-Apple sentiment among those who develop/approve the spec. As long as it's well thought out and the reference implementation can properly show off its advantages, I think any technology has a chance. I believe that is the case here with Apple's 3D CSS Transforms.
If you mean Microsoft/IE then don't sweat it. They'll just completely ignore it like so many other things, and it won't matter a whit, being immune to market forces and all.
All the better if they give away technologies like this so that all benefit as long as they support W3C standards. The Web can only benefit.
It needs to be removed from the web.
It should also be noted that the WebKit team has a published spec that, while not guaranteed to be finalized, is currently implemented in WebKit and enabled in MobileSafari. Apple encourages those making web apps to be deployed for the iPhone to use these CSS Transforms to enable native-like effects.
As pohl stated, other browser makers could implement support for these transforms based on the current spec. If it proved popular, it could be adopted as part of the official CSS spec.
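For anyone curious what the syntax in that draft looks like, a tiny illustrative snippet (the property names follow the WebKit draft; the selector and values are made up):

#card {
  -webkit-transform: rotateY(180deg) translateZ(40px);
  -webkit-transition: -webkit-transform 1s ease-in-out;
}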
I agree, but the majority of video embedding uses Flash-based video streaming. Hopefully HTML5 can change this.
However, as Zich stated, he has the samples posted online if you'd like to view them. Personally I think they are a lot more impressive when view on the iPhone than in the videos I made.
Particularly, I used the CardFlip and PosterCircle examples:
Personally, though, I find it even *more* ironic that you can view these examples natively on an iPhone but *not* on the desktop.
CSS is standard.
Apple is extending CSS by adding animations and 3D graphics.
Apple will then offer these for certification so that they become an open standard.
HTML 5.0 and CSS animations and 3D graphics absolutely will kill Flash.
These open standards can be used by anyone for free.
Except that there is now a near-complete (barring old but still important video codecs) open specification for Flash without any restrictions on what you may or may not implement. The analogy isn't quite right. As a format, Flash is no longer proprietary. However, the only serious attempt to implement it outside of the closed source Adobe client is Gnash which remains severely lacking.
Not that any of this really matters. HTML&CSS is preferable to Flash for many reasons.
But maybe, typically how I have observed these things to work, this will happen - some 5 years down the line - and will be hailed as the 'next innovation'. I continue to remain clueless as to why no one just gets it right now!
The 3D stuff is interesting and wouldn't mind seeing it go in. At the same time, I don't care for it. Flash is already distracting in most sites. We don't need a standard way of throwing crap all over the site and it wouldn't be confined to little boxes like it does in flash. It would also be harder to detect ads from content.
These things are debated. Apple cannot force anything on the W3C. Submitted technology is debated by many people who decide if it is the best way to go. This process takes a long time.
I never said WebKit is proprietary. I said the CSS extensions themselves are proprietary, which they currently are. None of WebKit is a "specification" -- the rendering engine portion of WebKit is an implementation of a set of specifications laid out by W3C that's open to anyone to create their own implementation from. It's what Mozilla does with Gecko, Opera does with Presto, and Microsoft does with Trident.
My point was specific to the topic at hand -- I never said they *are* forcing their will on the specification, I was simply pointing out a conflict of interest that *may* have arisen. This given that Dave Hyatt is working on the HTML5 spec (which, of course, is not the CSS3 spec).
In the back of my mind I was referencing what foresmac108 stated regarding Canvas. When Canvas was introduced, Ian Hickson, took exception with the extensions to HTML without properly notifying the standards body for discussion. However, that wasn't so much to do with the proprietary nature of these, so much as namespace choices.
And, as foresmac108 aptly points out, the web community has definitely softened and embraced WebKit into the community. And, in reality, I find that this will likely go through -- if it is brought to the w3c -- much the same, if not easier than, the Canvas additions did. I mean, these are very well documented.
I was simply trying to make the point that I hope it does, and hope it does so smoothly and in a way that's most beneficial to all involved, while not losing any advantages of this technology. If my verbiage doesn't relay that, then I apologise.
HTML 5 and related Web 2 technologies are neither ad hoc nor interim. They embody an advancement on and continuation of the 20 year long history of the HTML spec, the core technology of the WWW. HTML 5 is a work in progress and remains open to submissions such as the one this article refers to.
I would say these CSS extensions are proprietary in the sense that Apple is developing them. But we all know Apple is developing them with the full intent of making them apart of the open spec.
The declarative animation stuff is also a lot better than mucking around in Javascript, but only Safari supports it so it is less useful right now.
Bad cubicle monkey,bad! (/me slaps own hand)
This leaves Javascript to actual data-processing, and creating elements, rather than having to do heavy-lifting manipulating them, which is ideal I think, for more complex set-ups you would then be able to use javascript to create 'ticks' as required, or customise rules as needed.
#TODOs Never Get To-Done
Posted by Jared Carroll in Process
Over time all software begins to rot. At first a young codebase is small and lean, it’s a pleasure to work on, a single developer may even be able to understand the entire codebase. However, iteration after iteration time constraints and poorly specified features begin to manifest themselves in the codebase. TODOs are everywhere, tests are pending, waiting on unanswered questions related to their pending functionality, half-baked ideas remain commented out so we won’t “lose” them, new developers join the team and contribute to the rot – the broken window theory is in full swing.
Uncle Bob tells us that our goal as developers is “not just working code but clean code” and that we should “always check in code better than you found it”. As professional developers it’s our responsibility to take the initiative in keeping our software maintainable. Let’s take a closer look at some of the above smells.
TODOs
# TODO: Update protected instance variables list # TODO: Validate that the characters are UTF-8. If they aren't, # you'll get a weird error down the road, but our form handling # should really prevent that from happening # TODO: button_to looks at this ... why? # TODO: Remove these hacks
Your codebase is not the place for TODOs. The problem with TODOs is that most developers quickly just glance over them, ignoring them because they didn’t write them. Sadly version control often reveals that they’ve been around for quite some time. Is a TODO even still applicable? Was it fixed and someone forgot to remove the TODO? Is there an existing ticket for this TODO?
Move this “necessary” work where it belongs; to your project management/bug tracking tool with the rest of your codebase’s features and bugs. Do the same to FIXMEs, REFACTORs and WTFs.
Pending Tests
describe '.recent' do it 'does NOT find admins' end describe '#save' do it 'normalizes its website URL' do pending end end
Every testing tool seems to have the ability to mark a test as pending; RSpec has
#pending, Cucumber has @wip, JUnit and NUnit have ignore. These pending tests appear in the output on every test run. Despite this very visible characteristic developers often will let pending tests live on for ages. Test run after test run the same pending tests are output and over time the team learns to live with it.
Pending tests are useful as a kind of TODO list of tests during TDD. And it’s fine to use them while developing a feature in a topic branch but make sure they’re no longer pending when you finish your work and its time to merge it into an integration branch (e.g. master, develop/next, staging, production, etc).
Commented Out Code
def save kick = super_save # message = { ball => self }.to_json # $kicks_machine.publish message kick end
Comments in code are distracting, commented out code is even more distracting. It often sits there for days, weeks, or even months; each passing developer scared to delete it. Even though we’re using version control and we know that every modification made to every file is completely recoverable for some reason we still don’t delete commented out code.
What to Do
Use the following steps to deal with all this cruft, your codebase will thank you for it.
- git blame the file
- Contact the author to discover why
- Create a story/ticket for the TODO/pending test/commented out code
- Delete the code
Clean Code
Taking the initiative and raising the standard of code cleanliness is contagious. Suddenly your teammates will notice the increase in quality and start to adopt the same techniques. Slowly the codebase improves; there are less distractions, it appears more focused, it starts to look like it was written by developers who take pride in their work. Addressing the above smells are the kind of small changes that individually might not seem like much but over time can make a big difference.
Feedback
Your feedback
Christian Bradley
January 26, 2011 at 9:58 am
I’d also suggest adding ‘grep todo -ir .’ to the default rake task. Those TODOs being printed out on every run *have* to get annoying after awhile
Christian Bradley
January 26, 2011 at 10:00 am
By the way, I agree fully with using pending specs as TODOs… the added context helps a ton, and – like grep’ing TODOs in code, these are displayed on every rake as well.
Christian Bradley
January 26, 2011 at 10:06 am
I would disagree, however, that comments in code are distracting. If the comments are sparse but placed for readability… I actually find it very helpful for developers to comment blocks of code so that I do not have to run them through my own internal parser.
Ruby reads much more like the English language than most others, but I still find that when picking up stories it can be very time consuming to scan through lines of code where a single well placed comment would give me the gist.
Commented-out code has a place within a temporary workflow but never within a master commit. If it’s something you want to know how to do later, create a gist or email it to yourself for God’s sake. I’m with you on that one.
Anonymous
January 26, 2011 at 12:46 pm
Show me the evidence.
It’s really simple. There’s already a theory, now show me the evidence. If you can’t do that don’t bother to write a blog post and act like you’re an authority.
Michael Wynholds
January 26, 2011 at 2:33 pm
What do you mean: “the evidence”? This post is not a theory predicting future behavior, it’s one developer’s opinion of the value of TODOs in the codebase. If you disagree with it, then that’s great – I’d like to hear your opinions as well, and we can have a useful dialog via these comments.
I am curious, do you think the codebase is the right place for TODOs, as opposed to a project management or bug tracking system? If so, is there a timeframe along which a TODO becomes less useful?
Give us your opinion.
Rob Pak
January 26, 2011 at 5:16 pm
# TODO: Gather and show evidence to Anonymous
# TODO: Act like an authority
# TODO: Wear sunglasses at night
Jared Carroll
January 27, 2011 at 8:45 am
@anonymous – I don’t have any empirical evidence that the above smells cause code rot. However in my experience these smells are exactly the kind of poor code that because they are never addressed, replicate and worsen throughout a codebase.
dude
August 10, 2011 at 7:44 am
WHOOSH! Where do you work, Anonymous? We can make sure to avoid it like the plague.
Werehamster
January 26, 2011 at 4:50 pm
Sorry Anonymous , but I’d have to agree with the original post.
Surely all you’d have to do is look in your own code and see how many TODOs are present and then use the version control to see how long they have been there. If any of them are more than a month or so old then you know you’ve got a problem because now there’s two places you need to look to get a list of what needs to be done.
I for one know that there’s a bunch of them in my latest project (a quick check reveals one from 22nd of July), and they should indeed be put into the ticketing system, because I have obviously forgotten about them.
If you are not using a ticketing system, then I don’t see any problems with just using a todo list inside the code. But the important thing is that the list should be in one place, and one place only.
Pafcio00
January 27, 2011 at 12:29 am
//TODO is a great thing to use as a bookmark in code so you wont miss a thing when you must jump to another task leaving something unfinished.
I use TODO’s sometimes in setters to bookmark those that need additional validation (but I don’t want to write it just yet).
Thanks to IDE’s those TODO’s are easy to find and it makes work easier sometimes.
Of course before version release all TODO’s must be removed.
Christian Bradley
January 27, 2011 at 12:21 pm
Santosh Kumar
February 4, 2011 at 8:35 am
Completely agree, #TODO’s are a sign of the ‘refactor’ step being skipped in red-green-refactor. Contacting devs who wrote the #TODO’s and figuring out if a story needs to be created is a great idea.
I do like having comments though. I find that there are times, when using local vars to self document steps leads to long’ish variable names and that kind of gets in the way of readability. A couple of comments above a block of code, that has terse local variable names, I feel reads better.
Georg Ledermann
August 10, 2011 at 7:45 am
Nice to read this. I was thinking the same three years ago:
Ken Collins
August 10, 2011 at 7:49 am
Great article. I have never allowed #TODO’s in my large legacy day job app. I even set a rule that they must be cleared out before each release cycle and always be scoped to an issue/ticket. If not, then they must be deleted or transferred to said issue/ticket system.
On another note, we use #CHANGED a lot. And always scope them to a particular component/gem. So whenever we upgrade a gem dep or library, we checked all the #CHANGED notes to see what we need to re-monkey patch and/or totally remove. A few working examples.
# CHANGED [Rails3] This monkey patch is no-longer needed.
# CHANGED [FactoryGirl] Version 2.0.4 should have this fix in issue #140, remove then.
Albert Chyickenstok
August 6, 2015 at 5:50 pm
We ban the word todo in our code. Banning a word instantly improves the code base, because it makes some things impossible! For example, we banned the word “bug” when speaking about our code, and instantly the bug count dropped to zero and we all got raises.
It’s so obvious, I don’t know why more people haven’t done it. Next up I’m going to lobby dictionary writers, and the ministry of information to ban the word TODO. It’s just so icky and unprofessional. | http://blog.carbonfive.com/2011/01/26/todos-never-get-to-done/ | CC-MAIN-2016-07 | en | refinedweb |
Image manipulation in Python
Someone asked me about determining whether an image was "portrait" or "landscape" mode from a script.
I've long had a script for automatically rescaling and rotating images, using ImageMagick under the hood and adjusting automatically for aspect ratio. But the scripts are kind of a mess -- I've been using them for over a decade, and they started life as a csh script back in the pnmscale days, gradually added ImageMagick and jpegtran support and eventually got translated to (not very good) Python.
I've had it in the back of my head that I should rewrite this stuff in cleaner Python using the ImageMagick bindings, rather than calling its commandline tools. So the question today spurred me to look into that. I found that ImageMagick isn't the way to go, but PIL would be a fine solution for most of what I need.
ImageMagick: undocumented and inconstant
Ubuntu has a python-pythonmagick package, which I installed. Unfortunately, it has no documentation, and there seems to be no web documentation either. If you search for it, you find a few other people asking where the documentation is.
Using things like
help(PythonMagick) and
help(PythonMagick.Image), you can ferret out a
few details, like how to get an image's size:
import PythonMagick

filename = 'img001.jpg'
img = PythonMagick.Image(filename)
size = img.size()
print filename, "is", size.width(), "x", size.height()
Great. Now what if you want to rescale it to some other size? Web searching found examples of that, but it doesn't work, as illustrated here:
>>> img.scale('1024x768')
>>> img.size().height()
640
The built-in help was no help:
>>> help(img.scale)
Help on method scale:

scale(...) method of PythonMagick.Image instance
    scale( (Image)arg1, (Geometry)arg2) -> None :
    C++ signature :
        void scale(Magick::Image {lvalue},Magick::Geometry)
So what does it want for (Geometry)? Strings don't seem to work, 2-tuples don't work, and there's no Geometry object in PythonMagick. By this time I was tired of guesswork. Can the Python Imaging Library do better?
PIL -- the Python Imaging Library
PIL, happily, does have documentation. So it was easy to figure out how to get an image's size:
from PIL import Image

im = Image.open(filename)
w = im.size[0]
h = im.size[1]
print filename, "is", w, "x", h

It was equally easy to scale it to half its original size, then write it to a file:
newim = im.resize((w/2, h/2))
newim.save("small-" + filename)
Reading EXIF
Wow, that's great! How about EXIF -- can you read that? Yes, PIL has a module for that too:
import PIL.ExifTags

exif = im._getexif()
for tag, value in exif.items():
    decoded = PIL.ExifTags.TAGS.get(tag, tag)
    print decoded, '->', value
There are other ways to read exif -- pyexiv2 seems highly regarded. It has documentation, a tutorial, and apparently it can even write EXIF tags.
If neither PIL nor pyexiv2 meets your needs, here's a Stack Overflow thread on other Python EXIF solutions, and here's another discussion of Python EXIF. But since you probably already have PIL, it's certainly an easy way to get started.
What about the query that started all this: how to find out whether an image is portrait or landscape? Well, the most important thing is the image dimensions themselves -- whether img.size[0] > img.size[1]. But sometimes you want to know what the camera's orientation sensor thought. For that, you can use this code snippet:
for tag, value in exif.items():
    decoded = PIL.ExifTags.TAGS.get(tag, tag)
    if decoded == 'Orientation':
        print decoded, ":", value

Then compare the number you get to this Exif Orientation table. Normal landscape-mode photos will be 1.
Given all this, have I actually rewritten resizeall and rotateall using PIL? Why, no! I'll put it on my to-do list, honest. But since the scripts are actually working fine (just don't look at the code), I'll leave them be for now.
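If you want to try that rewrite yourself, here's roughly what the PIL pieces might look like stitched together. It's an untested sketch: it assumes JPEG input (since it reads EXIF with _getexif), it uses the usual EXIF orientation values for the common rotations, and the helper name and output filename are just placeholders.

import PIL.ExifTags
from PIL import Image

def scale_and_straighten(filename, maxsize=1024):
    # Hypothetical helper, not the old resizeall/rotateall scripts.
    im = Image.open(filename)

    # Honor the camera's orientation sensor, if the file has EXIF data.
    exif = im._getexif() or {}
    for tag, value in exif.items():
        if PIL.ExifTags.TAGS.get(tag, tag) == 'Orientation':
            if value == 3:
                im = im.transpose(Image.ROTATE_180)
            elif value == 6:
                im = im.transpose(Image.ROTATE_270)   # 90 degrees clockwise
            elif value == 8:
                im = im.transpose(Image.ROTATE_90)    # 90 degrees counterclockwise

    # Scale so the longest side is maxsize, keeping the aspect ratio.
    w, h = im.size
    ratio = float(maxsize) / max(w, h)
    if ratio < 1:
        im = im.resize((int(w * ratio), int(h * ratio)))

    im.save("resized-" + filename)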
Python 2.6 Graphics Cookbook — Save 50%
Over 100 great recipes for creating and animating graphics using Python
The previous article, Python Graphics: Animation Principles, starts with examples of simple sequences of a circle in different positions and systematically progresses to smoothly-moving animations of elastic balls bouncing inside a gravity field.
In this article by Mike Ohlson de Fine, author of Python 2.6 Graphics Cookbook, we will cover:
- Colliding balls with tracer trails
- Elastic ball against ball collisions
- Dynamic debugging
- Trajectory tracing
- Rotating a line and vital trigonometry
- Rotating lines which rotate lines
- A digital flower
(For more resources on Python, see here.)

ball_1 = {'posn_x':25.0,              # x position of box containing the ball (left edge).
          'posn_y':180.0,             # y position of box containing the ball (top edge).
          'velocity_x':30.0,          # amount of x-movement each cycle of the 'for' loop.
          'velocity_y':100.0,         # amount of y-movement each cycle of the 'for' loop.
          'ball_width':20.0,          # size of ball - width (x-dimension).
          'ball_height':20.0,         # size of ball - height (y-dimension).
          'color':"dark orange",      # color of the ball
          'coef_restitution':0.90}    # proportion of elastic energy recovered each bounce
ball_2 = {'posn_x':cw - 25.0,
'posn_y':300.0,
'velocity_x':-50.0,
'velocity_y':150.0,
'ball_width':30.0,
'ball_height':30.0,
'color':"yellow3",
'coef_restitution':0.90}
def detectWallCollision(ball):
    # Collision detection with the walls of the container.
    if ball['posn_x'] > cw - ball['ball_width']:      # Collision with right-hand wall.
        ball['velocity_x'] = -ball['velocity_x'] * ball['coef_restitution']   # reverse direction.
        ball['posn_x'] = cw - ball['ball_width']
    if ball['posn_x'] < 1:                            # Collision with left-hand wall.
        ball['velocity_x'] = -ball['velocity_x'] * ball['coef_restitution']
        ball['posn_x'] = 2                            # anti-stick to the wall
    if ball['posn_y'] < ball['ball_height']:          # Collision with ceiling.
        ball['velocity_y'] = -ball['velocity_y'] * ball['coef_restitution']
        ball['posn_y'] = ball['ball_height']
    if ball['posn_y'] > ch - ball['ball_height']:     # Floor collision.
        ball['velocity_y'] = -ball['velocity_y'] * ball['coef_restitution']
        ball['posn_y'] = ch - ball['ball_height']
def diffEquation(ball):
    # An approximate set of differential equations of motion for the balls.
    ball['posn_x'] += ball['velocity_x'] * time_scaling
    ball['velocity_y'] = ball['velocity_y'] + GRAVITY    # a crude equation incorporating gravity.
    ball['posn_y'] += ball['velocity_y'] * time_scaling
    chart_1.create_oval(ball['posn_x'], ball['posn_y'],
                        ball['posn_x'] + ball['ball_width'],
                        ball['posn_y'] + ball['ball_height'],
                        fill=ball['color'])

ball_1 = {'posn_x':25.0,
'posn_y':25.0,
'velocity_x':65.0,
'velocity_y':50.0,
'ball_width':20.0,
'ball_height':20.0,
'color':"SlateBlue1",
'coef_restitution':0.90}
ball_2 = {'posn_x':180.0,
'posn_y':ch- 25.0,
'velocity_x':-50.0,
'velocity_y':-70.0,
'ball_width':30.0,
'ball_height':30.0,
'color':"maroon1",
'coef_restitution':0.90}
def detect_wall_collision(ball):
    # Detect ball-to-wall collision.
    if ball['posn_x'] > cw - ball['ball_width']:      # Right-hand wall.
        ball['velocity_x'] = -ball['velocity_x'] * ball['coef_restitution']
        ball['posn_x'] = cw - ball['ball_width']
    if ball['posn_x'] < 1:                            # Left-hand wall.
        ball['velocity_x'] = -ball['velocity_x'] * ball['coef_restitution']
        ball['posn_x'] = 2
    if ball['posn_y'] < ball['ball_height']:          # Ceiling.
        ball['velocity_y'] = -ball['velocity_y'] * ball['coef_restitution']
        ball['posn_y'] = ball['ball_height']
    if ball['posn_y'] > ch - ball['ball_height']:     # Floor.
        ball['velocity_y'] = -ball['velocity_y'] * ball['coef_restitution']
        ball['posn_y'] = ch - ball['ball_height']
def detect_ball_collision(ball_1, ball_2):
    # Detect ball-to-ball collision.
    # Firstly: is there a close approach in the horizontal direction?
    if math.fabs(ball_1['posn_x'] - ball_2['posn_x']) < 25:
        # Secondly: is there also a close approach in the vertical direction?
        if math.fabs(ball_1['posn_y'] - ball_2['posn_y']) < 25:
            ball_1['velocity_x'] = -ball_1['velocity_x']    # reverse direction.
            ball_1['velocity_y'] = -ball_1['velocity_y']
            ball_2['velocity_x'] = -ball_2['velocity_x']
            ball_2['velocity_y'] = -ball_2['velocity_y']
            # To avoid internal rebounding inside balls.
            ball_1['posn_x'] += ball_1['velocity_x'] * time_scaling
            ball_1['posn_y'] += ball_1['velocity_y'] * time_scaling
            ball_2['posn_x'] += ball_2['velocity_x'] * time_scaling
            ball_2['posn_y'] += ball_2['velocity_y'] * time_scaling
def diff_equation(ball):
    x_old = ball['posn_x']
    y_old = ball['posn_y']
    ball['posn_x'] += ball['velocity_x'] * time_scaling
    ball['velocity_y'] = ball['velocity_y'] + GRAVITY
    ball['posn_y'] += ball['velocity_y'] * time_scaling
    chart_1.create_oval(ball['posn_x'], ball['posn_y'],
                        ball['posn_x'] + ball['ball_width'],
                        ball['posn_y'] + ball['ball_height'],
                        fill=ball['color'], tags="ball_tag")
    chart_1.create_line(x_old, y_old, ball['posn_x'], ball['posn_y'],
                        fill=ball['color'])
(For more resources on Python, see here.)
Rotating line
Now we will see how to handle rotating lines. In any kind of graphic computer work, the need to rotate objects arises eventually. By starting off as simply as possible and progressively adding behaviors we can handle some increasingly complicated situations. This recipe is that first simple step in the art of making things rotate.
Getting ready
To understand the mathematics of rotation you need to be reasonably familiar with the trigonometry functions of sine, cosine, and tangent. The good news for those of us whose eyes glaze at the mention of trigonometry is that you can use these examples without understanding trigonometry. However, it is much more rewarding if you do try to figure out the math. It is like the difference between watching football or playing it. Only the players get fit.
How to do it...
You just need to write and run this code and observe the results as you did for all the other recipes. The insights come from repeated tinkering and hacking the code. Change the values of variables p1_x to p2_y one at a time and observe the results.
# rotate_line_1.py
#>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
from Tkinter import *
import math
root = Tk()
root.title("Rotating line")
cw = 220 # canvas width
ch = 180 # canvas height
chart_1 = Canvas(root, width=cw, height=ch, background="white")
chart_1.grid(row=0, column=0)
cycle_period = 50 # pause duration (milliseconds).
p1_x = 90.0 # the pivot point
p1_y = 90.0 # the pivot point,
p2_x = 180.0 # the specific point to be rotated
p2_y = 160.0 # the specific point to be rotated.
a_radian = math.atan((p2_y - p1_y)/(p2_x - p1_x))
a_length = math.sqrt((p2_y - p1_y)*(p2_y - p1_y) +\
(p2_x - p1_x)*(p2_x - p1_x))
for i in range(1,300):        # end the program after 300 position shifts
    a_radian += 0.05          # incremental rotation of 0.05 radians
    p1_x = p2_x - a_length * math.cos(a_radian)
    p1_y = p2_y - a_length * math.sin(a_radian)
    chart_1.create_line(p1_x, p1_y, p2_x, p2_y)
    chart_1.update()
    chart_1.after(cycle_period)
    chart_1.delete(ALL)
root.mainloop()
How it works...
In essence, all rotation comes down to the following:
- Establish a center of rotation or pivot point
- Pick a specific point on the object you want to rotate
- Calculate the distance from the pivot point to the specific point of interest
- Calculate the angle of the line joining the pivot and the specific point
- Increase the angle of the line joining the points by a known amount, the rotation angle, and re-calculate the new x and y coordinates for that point.
For math students what you do is relocate the origin of your rectangular coordinate system to the pivot point, express the coordinates of your specific point into polar coordinates, add an increment to the angular position, and convert the new polar coordinate position into a fresh pair of rectangular coordinates. The preceding recipe performs all these actions.
There's more...
The pivot point was purposely placed near the bottom corner of the canvas so that the point on the end of the line being rotated falls outside the canvas for much of the rotation. The rotation continues without errors or bad behavior, emphasizing a point made earlier: Python is mathematically robust. However, we need to exercise care with the arctangent function math.atan(), because its argument flips from positive infinity to negative infinity as the angle passes through 90 and 270 degrees, so atan() can give ambiguous results. The Python designers took care of this by providing math.atan2(y, x), which takes the signs of both y and x into account and gives unambiguous results between -180 and 180 degrees.
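To see the difference in practice, here is a small standalone sketch (not taken from the book's recipes) that rotates a point about a pivot using math.atan2, so the angle stays unambiguous in every quadrant; the function name and the sample numbers are only illustrative:

import math

def rotate_about(pivot_x, pivot_y, point_x, point_y, delta_radians):
    # Express the point in polar coordinates relative to the pivot.
    length = math.hypot(point_x - pivot_x, point_y - pivot_y)
    angle = math.atan2(point_y - pivot_y, point_x - pivot_x)   # unambiguous in all four quadrants
    # Add the rotation increment and convert back to rectangular coordinates.
    angle += delta_radians
    return (pivot_x + length * math.cos(angle),
            pivot_y + length * math.sin(angle))

# Rotate the point (180, 160) by 0.05 radians about the pivot (90, 90).
print rotate_about(90.0, 90.0, 180.0, 160.0, 0.05)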
Trajectory tracing on multiple line rotations
This example draws a visually appealing kind of Art Noveau arrowhead but that is just an issue on the happy-side. The real point of this recipe is to see how you can have any number of pivot points all with different motions and that the essential arithmetic remains simple and clean looking in Python. The use of animation methods to slow the execution down makes it entertaining to watch. We also see how tag names given to different parts of the objects drawn onto the canvas allow them to be selectively erased when the canvas.delete(...) method is invoked.
Getting ready
Imagine a skilled drum major marching in a parade whirling a staff in circles. Holding onto the end of the staff is a small monkey also twirling a baton but at a different speed. At the tip of the monkey's staff is a miniature marmoset twirling a baton in the opposite direction...
Now run the program.
How to do it...
Run the Python code below as we have done before. The result is shown in following screenshot showing multiple line rotation traces.
# multiple_line_rotations_1.py
#>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
from Tkinter import *
import math
root = Tk()
root.title("multi-line rotations")
cw = 600 # canvas width
ch = 600 # canvas height
chart_1 = Canvas(root, width=cw, height=ch, background="white")
chart_1.grid(row=0, column=0)
cycle_period = 50 # time between new positions of the ball
# (milliseconds).
p0_x = 300.0
p0_y = 300.0
p1_x = 200.0
p1_y = 200.0
p2_x = 150.0 # central pivot
p2_y = 150.0 # central pivot
p3_x = 100.0
p3_y = 100.0
p4_x = 50.0
p4_y = 50.0
# ... (setup of the alpha angles and line lengths omitted in this excerpt)

for i in range(1,5000):
    alpha_0 += 0.1
    alpha_1 += 0.3
    alpha_2 -= 0.4
    # ... (position updates for p1 through p4 and the tip locus bookkeeping omitted in this excerpt)
    chart_1.create_line(p1_x, p1_y, p0_x, p0_y, tag='line_1')
    chart_1.create_line(p2_x, p2_y, p1_x, p1_y, tag='line_2')
    chart_1.create_line(p3_x, p3_y, p2_x, p2_y, tag='line_3')
    chart_1.create_line(p4_x, p4_y, p3_x, p3_y, fill="purple", tag='line_4')
    # Locus at the tip of line 1-2
    chart_1.create_line(tip_locus_2_x, tip_locus_2_y, p2_x, p2_y, fill='maroon')
    # Locus at the tip of line 2-3
    chart_1.create_line(tip_locus_3_x, tip_locus_3_y, p3_x, p3_y, fill='orchid1')
    # Locus at the tip of line 3-4
    chart_1.create_line(tip_locus_4_x, tip_locus_4_y, p4_x, p4_y, fill='DeepPink')
    chart_1.update()
    chart_1.after(cycle_period)
    chart_1.delete('line_1', 'line_2', 'line_3')
root.mainloop()
How it works...
As we did in the previous recipe we have lines defined by connecting two points, each being specified in the rectangular coordinates that Tkinter drawing methods use. There are three such lines connected pivot-to-tip. It may help to visualize each pivot as a drum major or a monkey. We convert each pivot-to-tip line into polar coordinates of length and angle. Then each pivot-to-tip line is rotated by its own individual increment angle. If you alter these angles alpha_1 etc. or the positions of the various pivot points you will get a limitless variety of interesting patterns.
There's more...
Once you are able to control and vary color you are able to make extraordinary and beautiful patterns never seen before.
A rose for you
This example is simply a gift for the reader. No illustration is provided. We will only see the result if we run the code. It is a surprise.
from Tkinter import *
root = Tk()
root.title("This is for you dear reader. A token of esteem and affection.")
import math
cw = 800 # canvas width
ch = 800 # canvas height
chart_1 = Canvas(root, width=cw, height=ch, background="black")
chart_1.grid(row=0, column=0)
p0_x = 400.0
p0_y = 400.0
p1_x = 330.0
p1_y = 330.0
p2_x = 250.0
p2_y = 250.0
p3_x = 260.0
p3_y = 260.0
p4_x = 250.0
p4_y = 250.0
p5_x = 180.0
p5_y = 180.0
# ... (setup of alpha_0 to alpha_3 and the corresponding line lengths omitted in this excerpt)
alpha_4 = math.atan((p3_y - p5_y)/(p3_x - p5_x))
length_4_5 = math.sqrt((p5_y - p4_y)*(p5_y - p4_y) + (p5_x - p4_x)*(p5_x - p4_x))

for i in range(1,2300):       # end the program after 2300 position shifts.
    alpha_0 += 0.003
    alpha_1 += 0.018
    alpha_2 -= 0.054
    alpha_3 -= 0.108
    alpha_4 += 0.018
    # ... (position updates for p1 through p4 and their tip loci omitted in this excerpt)
    tip_locus_5_x = p5_x
    tip_locus_5_y = p5_y
    p5_x = p4_x - length_4_5 * math.cos(alpha_4)
    p5_y = p4_y - length_4_5 * math.sin(alpha_4)
    chart_1.create_line(p1_x, p1_y, p0_x, p0_y, tag='line_1', fill='gray')
    chart_1.create_line(p2_x, p2_y, p1_x, p1_y, tag='line_2', fill='gray')
    chart_1.create_line(p3_x, p3_y, p2_x, p2_y, tag='line_3', fill='gray')
    chart_1.create_line(p4_x, p4_y, p3_x, p3_y, tag='line_4', fill='gray')
    chart_1.create_line(p5_x, p5_y, p4_x, p4_y, tag='line_5', fill='#550000')
    chart_1.create_line(tip_locus_2_x, tip_locus_2_y, p2_x, p2_y, fill='#ff00aa')
    chart_1.create_line(tip_locus_3_x, tip_locus_3_y, p3_x, p3_y, fill='#aa00aa')
    chart_1.create_line(tip_locus_4_x, tip_locus_4_y, p4_x, p4_y, fill='#dd00dd')
    chart_1.create_line(tip_locus_5_x, tip_locus_5_y, p5_x, p5_y, fill='#880066')
    chart_1.create_line(tip_locus_2_x, tip_locus_2_y, p5_x, p5_y, fill='#0000ff')
    chart_1.create_line(tip_locus_3_x, tip_locus_3_y, p4_x, p4_y, fill='#6600ff')
    chart_1.update()          # This refreshes the drawing on the canvas.
    chart_1.delete('line_1', 'line_2', 'line_3', 'line_4')   # Erase selected tags.
root.mainloop()
How it works...
The structure of this program is similar to the previous example but the rotation parameters have been adjusted to evoke the image of a rose. The colors used are chosen to remind us that control over color is extremely import in graphics.
Summary
In this article we covered the following recipes:
- Colliding balls with tracer trails
- Elastic ball against ball collisions
- Dynamic debugging
- Trajectory tracing
- Rotating a line and vital trigonometry
- Rotating lines which rotate lines
- A digital flower
Further resources on this subject:
- Python Graphics: Animation Principles [Article]
- Python: Unit Testing with Doctest [Article]
- Python Multimedia: Animations Examples using Pyglet [Article]
- Python 3 Object Oriented Programming: Managing objects [Article]
- Python 3 Object Oriented Programming [Book]
About the Author :
Mike Ohlson de Fine.
Django
Django is an open source Web development framework for the Python language that aims to automate as many processes as possible, allowing you to focus on developing software without worrying about reinventing the wheel. The framework is designed to be loosely coupled and tightly cohesive, meaning that different parts of the framework, while connected to one another, are not dependent on one another. This independence means you can use only the parts of Django you need, without worrying about dependency issues.
Django makes writing Web applications faster, and it drastically cuts down the amount of code required, making it much easier to maintain the application going forward. Django strictly observes the Don't Repeat Yourself (DRY) principle, whereby every distinct piece of code or data lives in only one place. This means that when a change needs to be made, it only needs to be made in one place, leading to the process of changing software becoming much faster and much easier.
Django was developed by a team of Web developers at the Lawrence Journal-World newspaper in 2003. Under pressure to release applications and enhancements under severe time constraints, they decided to create a Web framework that would save them time, allowing them to meet their difficult deadlines. The team released the framework as open source software in July 2005, and it is now developed by a community of thousands of developers across the world.
The Django framework is released under the Berkeley Software Distribution (BSD) open source license, which permits redistribution and reuse of the source code and binary, with or without modification, so long as the copyright notice, license conditions, and disclaimer are retained in the redistributed package. These items must also be present in the redistributed software's documentation and supplementary materials where applicable. And the license specifies that neither the Django name nor the names of Django contributors can be used to endorse or promote derivative products without express written permission.
Setting up a basic Django development environment
Fortunately, installing Django is straightforward, so setting up a
development environment is quick and easy. Django is written entirely in
Python, so to install Django, you first need to install
Python. If you're using Mac OS X or Linux®, it's likely that Python
is already on your machine. Simply run
python in your shell
(use Terminal.app on a Mac) and you should see something like Listing
1.
Listing 1. Make sure Python is running
$ python
Python 2.5.1 (r251:54863, Nov 11 2008, 17:46:48)
[GCC 4.0.1 (Apple Inc. build 5465)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
As long as your system has a version of Python from 2.3 to 2.6, you will be able to install Django. If you are a Microsoft® Windows® user or if you need to upgrade to a newer version, download Python. A simple installation package is available for Windows users, so installing Python couldn't be much easier.
When you have verified that Python is installed on your computer, you can proceed to install Django. There are three options: installing an official release, installing a distribution-specific installation package, or installing the latest "trunk" version from Subversion. For the sake of this article, I will only walk through the installation from an official release. For information about installing the trunk version, see the official documentation instructions (see Resources).
The first step in installing the official Django release is to get the
tarball from the Django download page. Once you have downloaded this file,
extract its contents. In Linux, simply issue the following
command at your shell prompt (be sure to navigate to the directory where
you downloaded the package to). Please note that V1.0.2 was the latest
release at the time of writing, so be sure to replace this file name with
the exact filename of the package you downloaded:
tar zxvf Django-1.0.2-final.tar.gz.
In Mac OS X, it's likely that your Web browser automatically decompressed
the package when it finished downloading, so the file will be
Django-1.0.2-final.tar. To extract this file, simply use the following
command:
tar xvf Django-1.0.2-final.tar. If you are using Windows, you can use a utility such as 7-Zip to extract the tarball.
Now that you have extracted the contents of the tarball (probably to a
location like Django-1.0.2-final on your hard drive), navigate to that
folder in your command prompt. To install Django, issue the following
command (Mac OS X or Linux):
sudo python setup.py install. Windows users, make sure your command prompt is opened with administrator
privileges and issue the following command:
setup.py install.
Once you have done this, Django will have been installed into your Python
installation's site-packages folder, and you are ready to start developing
in Django. Before we move onto the anatomy of a
Django application, we will test that our development environment is up
and running correctly. First, we will check that Django is
installed correctly. Open your shell or command prompt and
start the Python interactive tool by issuing the command
python. Now issue the commands shown in Listing
2 at the Python prompt (don't type the
>>>s):
Listing 2. Verify that Django is installed correctly
>>> import django
>>> django.VERSION
If your installation was successful, you should see the text shown in Listing 3.
Listing 3. Successful installation
(1, 0, 2, 'final', 0)
>>>
Now that we have verified that Django is actually installed, we will test
that the development server is working. To do this, we need to create
a project. Create a directory to store your Django projects in (I use
/home/joe/django on my Mac OS X system) and navigate to that directory.
From there, issue the following command:
django-admin.py startproject testproject.
This will create a new directory within your projects directory called
testproject. This directory contains four files: __init__.py, manage.py,
settings.py, and urls.py. Don't worry about what these files do right now; we are going to jump ahead and run the project. Make sure you are
in the project folder (use
cd testproject at the prompt) and issue the
following command:
python manage.py runserver. You should see the output shown below.
Listing 4. Running the Django development server
Validating models...
0 errors found

Django version 1.0.2 final, using settings 'testproject.settings'
Development server is running at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
This message tells us that the development server is now running at the URL http://127.0.0.1:8000/. Open your favorite Web browser and paste this URL into the address bar. You should see a page like the one shown below.
Figure 1. Welcome to Django Page
You now have a working Django development environment up and running. It's worth mentioning that although you can run full-fledged Django applications in this environment, it is not suitable for use in a production environment. We will cover deploying Django applications for production use later in this article.
The anatomy of a Django application
Django's architecture is loosely based on the Model-View-Controller (MVC) pattern in that it separates application logic, user interface (UI), and data-access layers, with the goal of allowing each layer to be modified independently, without affecting the other layers. According to Django documentation, however, Django follows a similar pattern: what it refers to as a Model-Template-View (MTV) architecture. The Model can be seen as the data-access layer, where the application interacts with any databases and information sources. The Template is the layer that defines how the data should be presented to the user, whereas this is considered the View layer in an MVC pattern. In an MTV architecture, the View layer describes what data should be presented to the user. It does not define exactly how it should be presented; it delegates that task to the template layer. As for MVC's Controller layer, Django sees this as being the framework itself, as it determines the appropriate view to send requests to, as defined in the URL configuration.
In addition to Models, Templates, and Views, Django offers some advanced features out of the box, such as URL configurations, an automatic administrative interface, caching, and more. Like Python, one of the key philosophies behind Django is the batteries-included approach, meaning that it comes with a large standard library of additional packages you can use in your applications without additional downloads.
The model layer of a Django application is handled by Django's data-access layer. Within this layer, you will find everything related to the data: connection settings, validation parameters, relations, etc. Out of the box, Django includes support for PostgreSQL (the preferred database of the creators of Django), MySQL, SQLite, and Oracle. Which database to use is stored in a settings file, and the model layer is the same no matter what option you choose.
Models in Django can be seen as descriptions of the database table schemas, represented in Python code. Django uses the model to generate and execute SQL statements in the database, which in turn return a result. Django then translates to a Python data structure, which can be used by your Django application. An obvious advantage here is that you can hot-swap among different databases systems (for example, change from MySQL to PostgreSQL) without having to change your models.
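As a rough illustration (not from the original article), the database choice in the Django 1.0 series came down to a few values in settings.py; the setting names below are real, but the values are placeholders:

# settings.py -- swap the backend by changing DATABASE_ENGINE
DATABASE_ENGINE = 'postgresql_psycopg2'   # or 'mysql', 'sqlite3', 'oracle'
DATABASE_NAME = 'testproject'             # database name (or file path for sqlite3)
DATABASE_USER = 'dbuser'
DATABASE_PASSWORD = 'secret'
DATABASE_HOST = ''                        # empty string means localhost
DATABASE_PORT = ''                        # empty string means the backend's default port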
The code in Listing 5 is an example of a model definition. This would generally be stored in a models.py file in a Django application's directory.
Listing 5. Sample Django Model
from django.db import models

class Person(models.Model):
    first_name = models.CharField(max_length=30)
    last_name = models.CharField(max_length=30)
    email = models.EmailField()
    date_of_birth = models.DateField()
The Template layer of a Django application allows you to separate the UI or presentation layout of a Web application from its data. It uses placeholder variables and simple logic statements to define what data should be populated into the template. Templates usually produce HTML output, but can also produce XML or any other type of document.
The idea behind the Template layer is that the presentation layer code is separate from the business-layer code. This means a Python developer can focus on developing the software, while leaving a Web designer to work on the templates. It also means that the developer and the designer can work on the same project at the same time, as the two components are completely separate from one another.
It's important to note that Django's template system does not allow Python
code to be executed directly from the template. It offers a
rudimentary set of programming-style features, such as variables, logic
statements (
if statements), and looping constructs (
for loops), which should offer more than enough logic required for the presentation of data. Listing 6 is an example of what a Django template would look like.
Listing 6. Sample Django Template
<html>
  <head>
    <title>Your message has been sent</title>
  </head>
  <body>
    <h1>Thank you!</h1>
    <p>Just wanted to say a quick thanks, {{ name }}, for the message you have just sent.</p>
    <p>It was sent on {{ sent_date|date:"j F Y" }}. We aim to respond within 48 hours.</p>
    <p>Thanks again!</p>
  </body>
</html>
Listing 7 shows how this template would be used in a Django application.
Listing 7. Loading the sample Django Template in a view
def send_message(request):
    name = "Joe Lennon"
    sent_date = datetime.datetime.now()
    return render_to_response('thankyou.html', locals())
View functions, or "views" as they are more commonly known, are basically
Python functions that accept a request parameter and return a response.
The request is typically a request from the Web server, and the view takes
any parameters passed along with this request. The view then performs the
logic required to determine the appropriate response before returning that
response. Views can be stored anywhere in a Django application, but are
usually stored in a file named views.py. Listing 5 is an example of
a view function, named
send_message. It accepts
the
request parameter in its definition and returns a rendered template
(thankyou.html) as its response.
You just read that views can be stored anywhere. If that is the case,
how does Django know where to find them? This leads us on to URLconfs,
which define what URLs point to what views. URLconfs are stored in a file
called urls.py and basically map a URL to a view function. For example,
the url /send_message/ might map to our send_message view, as described in
Listing 7. In fact, the way URLconfs work allows them to be used for
pretty URLs out of the box — in other words, instead of using query strings
like
myfile.php?param1=value1, your URL might be /myfile/value1/.
The code in Listing 8 provides an example of a urls.py file, which connects
the URL
/send_message/ to our
send_message view function, as defined in Listing 7.
Listing 8. Sample URLconf
from django.conf.urls.defaults import *

from testproject.views import send_message

urlpatterns = patterns('',
    ('^send_message/$', send_message),
)
One of the most interesting and talked-about features of Django is its automatic administrative interface. Web application developers who have worked on projects that require back-end admin interfaces to be developed in addition to the front end may be able to relate to the frustration and boredom that comes from developing such interfaces. Admin interfaces are usually mind-numbing, don't require much skill, and don't flex your programming muscle in any way. Django's automatic admin interface feature will be of enormous value, as it takes this requirement out of the equation by automating the entire task.
Once you have created the models for your application and set up your database settings, you can enable the admin interface for your application. Once you have done so, simply point your browser to your site's /admin/ URL and log in to manage the back end of your Django application. The interface is highly customizable and features excellent user and group-based authentication controls. A screenshot of it in action is shown below.
Figure 2. Django automatic admin interface in action
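The article doesn't show the wiring itself; roughly, enabling the admin in the Django 1.0 series looked like the sketch below. The file contents are illustrative rather than taken from the article, and the application and model names (testproject.people, Person) are placeholders:

# settings.py: add the admin to the installed applications
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.admin',
    'testproject.people',          # hypothetical app containing the Person model
)

# urls.py: route /admin/ to the admin site (Django 1.0 style)
from django.conf.urls.defaults import *
from django.contrib import admin
admin.autodiscover()

urlpatterns = patterns('',
    (r'^admin/(.*)', admin.site.root),
)

# people/admin.py: register the models you want to manage
from django.contrib import admin
from testproject.people.models import Person

admin.site.register(Person)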
We have given a high-level overview of how a Django application is created and of how the MTV pattern it is based on works. We have looked at the concepts of models, templates, views, and URLconfs; and we have seen a glimpse of Django's brilliant automatic administration interface system. If you are looking for an in-depth guide to developing Django applications, visit the official Django project Web site and read the documentation there or read the Django Book (see Resources). Both offer an excellent assumption-free guide to all things Django and cover much more detail than we can get into here.
Next, take a look at taking Django applications and deploying them to production servers.
Readying your Django application for deployment
As we have seen, the Django framework conveniently includes a development server, which is ideal for debugging and testing your Django application. Unfortunately, this server is only designed to run in a local environment and could not withstand the pressures of a production Web application used by many people concurrently. For that, you need to deploy Django to a production-grade Web server, such as Apache or lighttpd. Explore some steps that need to be taken to make your application production-ready, then learn what's involved in preparing your Web server for serving up your Django application.
Before discussing how to set up your production environment for your Django application, there are a few things you need to do in your Django application's settings. It is important to make these changes because any vulnerabilities in your Django application may be made public, thanks to debug error messages, etc.
Naturally, you won't want to change these settings in your development
environment, as debug messages and errors are extremely useful when
maintaining your application. To solve this, you could maintain
two separate settings files: one for your development server and one for
your production server. Alternatively, you could employ the following
trick to keep them in the same file and tell Django to only use the
development settings if it is in the development environment. To do this,
you would lay out your settings.py file in the following way (obviously,
replace
joe-mac-mini with your development server's hostname, as shown in
Listing 9).
Listing 9. Separate settings for development and production environments
import socket

if socket.gethostname() == 'joe-mac-mini':
    pass  # Development server settings go here
else:
    pass  # Production server settings go here
Now that we've looked at keeping separate settings for our two
environments, let's examine the settings we need to change in our
production environment. The two essential settings you must change in
your production environment are
DEBUG and
TEMPLATE_DEBUG. These are set to
True by default when you create your Django application using
django-admin.py
startproject. It is essential that you change this to
False for your production environment. In your settings.py, in the
Production section, this line should read as follows:
DEBUG = TEMPLATE_DEBUG = False.
By default, Django is set up to send an e-mail anytime an unhandled
exception is raised in your Django application. To enable this
feature, tell Django who it should send the e-mail to. This is
done with the
ADMINS setting in the settings.py file.
Listing 10. Defining application administrators
ADMINS = (
    ('Joe Lennon', '[email protected]'),
)
If you come across errors in your code when developing your Django application, you may have noticed the error pages Django generates, full of helpful information to assist you in finding the root of the problem. When you switch debug mode off, these nice error pages disappear, as they are a potential security threat. As a result, if someone comes across an error (for example, a 404 Page Not Found, 403 Forbidden, or 500 Internal Server Error), he will only see an ugly error code page. To rectify this, it is advised to create nice, explanatory error template pages and put them in your application's template folder. Each template should be named according to the error code it represents. For Page Not Found, you should name the file 404.html; for Internal Server Error, use 500.html; etc.
Now that we have configured our settings for use in a production environment, we will show you how this environment should be set up to house your Django application. Although it is possible to run Django using FastCGI and lighttpd, a more typical setup is Apache and mod_python. We will now look at how to deploy to a server running Apache and mod_python. Then we'll take a brief look at deploying to a shared Web-hosting environment, where access to httpd.conf is forbidden.
Deploying Django applications to Apache with mod_python
According to the Django documentation, a setup of the Apache Web server running mod_python is the recommended option for deploying Django applications. Django supports setups with at least Apache HTTP Server V2.0 and mod_python V3.0 and later. mod_python is an Apache module that integrates support for the Python programming language into the Web server. It is much faster than the traditional CGI method of executing Python scripts.
To load the mod_python module into Apache, add the following
line to your server's httpd.conf file:
LoadModule python_module /usr/lib/apache2/modules/mod_python.so.
In addition to loading the mod_python module, you also need to set up
a
Location directive that tells Apache what URL to associate with your
Django application. For the sake of example, the settings here are what
would apply to the testproject project created earlier.
Listing 11. testproject Location directive

<Location "/testproject">
    SetHandler python-program
    PythonHandler django.core.handlers.modpython
    SetEnv DJANGO_SETTINGS_MODULE testproject.settings
    PythonDebug Off
</Location>
This tells Apache that your Django testproject project is accessible via the /testproject URL. For example, if your server's domain name is example.com, your application would be accessible via http://example.com/testproject. To load these new settings into Apache, simply restart the Apache server.
Django's developers highly recommend that you do not serve media files (such as images, video, audio, etc.) from the same Web server as your Web application, but in many cases, that is not an option — at least initially. In order to set up an area of your Web site where media files can be served, you can add the following directive to your httpd.conf file.
Listing 12. Telling Apache not to use mod_python for media files
<LocationMatch "\.(png|gif|jpg|mov|mp3|avi|wav)$">
    SetHandler None
</LocationMatch>
That's all there is to setting up Apache and mod_python for deploying Django to a production Web server. Next, we'll take a look at a common deployment scenario where someone is deploying to a shared Web-hosting server where they are not allowed to modify httpd.conf.
Deploying Django applications to a shared Web-hosting environment
Unfortunately, dedicated servers and virtual private servers tend to be quite expensive and, as a result, these are not viable options for deployment for everyone. It is common that you will first deploy a Web application on a shared hosting environment, upgrading to dedicated solutions as the application grows in popularity. Luckily, most shared Web-hosting providers include Python support, so it is possible to deploy Django applications in this scenario.
Unlike a dedicated environment, end users typically do not have the option of running a separate server process or editing the httpd.conf configuration file. This means they cannot make the changes outlined in the previous section, so they cannot get Django up and running in this manner. Fortunately, it is possible to deploy Django to shared hosting environments, using Web server-spawned processes that execute a FastCGI program. Create a file called .htaccess and place it in the same directory you are deploying your Django application to.
Listing 13. .htaccess file
AddHandler fastcgi-script .fcgi
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ testproject.fcgi/$1 [QSA,L]
Then create a small Python script that informs Apache of
the various settings for your Django project and executes the FastCGI
program. The name of the file is not important, but it must be the same as
the file name in the
RewriteRule line of .htaccess. In Listing 14, we used
the file name testproject.fcgi, so that's what I will call my script.
Listing 14. testproject.fcgi file
#!/usr/bin/python
import sys, os

sys.path.insert(0, "/home/joelennon/python")
os.environ['DJANGO_SETTINGS_MODULE'] = "testproject.settings"

from django.core.servers.fastcgi import runfastcgi
runfastcgi(method="threaded", daemonize="false")
Be sure to make the file executable. If you have shell access to your
shared hosting server, log in and change to the directory the file is
contained in and run
chmod 755 testproject.fcgi.
If you do not have shell access, you can change file permissions with most
decent FTP clients. Anytime you change your application's code, you
need to change the timestamp on this file. This tells Apache that the
application has been updated, and it will continue to restart your Django
application. If you have shell access, this is as simple as running
touch testproject.fcgi. If you do not have shell access, you can update the timestamp of the file
by reuploading it or by editing it and resaving it.
If you prefer not to get your hands dirty with these configuration files, you could always avail of a hosting service designed to support Django applications out of the box. The popular MediaTemple hosting provider offers a Django GridContainer add-on to its GridService offering, starting from $20/month for 256 MB of RAM. The GridContainer runs on a pre-tuned lighttpd/FastCGI setup, and the amount of RAM can be increased to scale along with your application.
Scaling your Django deployment
If your Django application is successful, it is likely that you will need your deployment to be as scalable as possible. It's common for Web applications to work fine under average loads, but phenomena like the Digg effect can forward so much traffic to the application that it buckles under the surge in load. Fortunately, Django and Python are highly scalable by nature, but there are other things you need to consider as your application grows.
If you are running your Django application on a shared hosting environment and are starting to feel that it is outgrowing the limited resources available to it in such a space, your obvious first port of call is the move to a dedicated machine. If cost is an issue, virtual private servers offer an inexpensive intermediary between shared hosting and a dedicated server.
As your application grows, even the resources of a dedicated server can become sparse very quickly. The following are some remedies that may ease the burden on your server:
- Turn off any unused processes or daemons, such as mail servers, streaming servers, gaming servers or any other unnecessary processes, that are hogging precious CPU time and RAM.
- Farm off your media files to a cloud platform, such as Amazon S3. This will allow you to only use your Web server for Django requests and keep your media on a separate server.
- Turn off Apache's Keep-Alive option. Keep-Alive is an extension to the HTTP protocol which allows persistent connections, so multiple requests can be sent over the same TCP connection, dramatically speeding up the serving of static HTML documents and images. Unfortunately, this extension can have a negative effect on the performance of a Django application. Please note that you should only turn this option off if you have moved your media files to a different server. To turn it off, find the relevant line in your httpd.conf file and change it to KeepAlive Off.
- Use Django's built-in cache framework. It can be backed by memcached, a popular distributed memory-object caching system. Effective caching can drastically enhance the performance of your Django applications; a minimal configuration sketch follows this list.
- Upgrade your server. Dedicate as much RAM as possible, and if disk space is an issue, consider adding a new disk. If your server is struggling, it's more than likely that it's due to the RAM being used up. Don't waste your money upgrading processors — spend the money on RAM, instead.
- Buy another server. There may come a time when your server just can't handle the load from your Django application on its own. As soon as your single server setup starts to give way, add another server. You should run your Web server on one machine and your database server on the other. Be sure to use the machine with more RAM for your database server. If required, upgrade the new machine with more RAM and disk space when necessary.
- Use database replication. If your database server is running out of resources, you can ease the burden by replicating it across more than one server. Once replication is in place, you can add servers when required to provide additional resources.
- Add redundancy. With large-scale applications, having a single point of failure for your Web server or database server is a disaster waiting to happen. You should add redundant servers where possible, which will take over in the event of the primary server failing. Additionally, using load-balancing hardware or software, such as mod_proxy, to distribute the traffic across your servers can drastically increase the performance of your application.
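As a rough illustration of the caching point above (this sketch is not from the article), wiring memcached into a Django 1.0-era project is mostly a settings change plus a decorator; the host, port, timeout, and view name below are placeholders:

# settings.py -- point Django's cache framework at a memcached instance
CACHE_BACKEND = 'memcached://127.0.0.1:11211/'

# views.py -- cache one expensive view's rendered response for 15 minutes
from django.http import HttpResponse
from django.views.decorators.cache import cache_page

@cache_page(60 * 15)
def expensive_report(request):
    return HttpResponse("report body built from many database queries")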
It is important to consider a scaling path for your Django application as early as possible. This allows you to put a plan of action in place that covers every possible scenario. So, if the Digg effect has your application on its knees, you can simply kick the next stage of your scaling plan into action and embrace your new users with open arms — and more importantly, a fast-running application.
Summary
We have looked at the Django framework from both ends of the spectrum — from a developer with no prior experience using the framework starting out to someone who has a fully ready Django application waiting to be deployed looking for a guide on how to tackle deploying the application to the Web. We've also examined what to consider in terms of scalability for the future. We looked at what a Django application is made up of and learned about the Model-Template-View (MTV) pattern it is based on. We have seen that Django is a lightweight, easy-to-install, and easy-to-learn framework with excellent documentation and a vibrant community surrounding it, making it a great framework for your next Web application.
Resources
Learn
- Check out the Django documentation for a plethora of tutorials and articles about Django and how to use it to develop powerful Web applications.
- Django Book: Read a free online version of an excellent book about Django, covering everything from introductory concepts to deployment, internationalization and security.
- Dive Into Python: Read a free online version of this comprehensive book on the Python programming language.
- Read the Wikipedia entry for Django.
- Visit DjangoSnippets.org for a multitude of reusable, user-contributed Django code snippets you can freely use in your own projects.
- "Develop for the Web with Django and Python" offers an in-depth guide to the fundamentals of developing with the Django framework.
- developerWorks Web development zone: Expand your site development skills with articles and tutorials that specialize in Web technologies.
- Visit the Django Project to download the Django framework and subscribe to the official Django blog.
09 November 2010 11:42 [Source: ICIS news]
RIO DE JANEIRO (ICIS)--Attendance at the 30th Latin American Petrochemical Association (APLA) annual meeting in Brazil was close to a record on the final day of the event.
“We have 815 delegates registered as of now,” APLA marketing executive Fernando Poppi said on the last day of the event in Rio de Janeiro, Brazil.
That compares with 616 registered delegates in
Poppi said delegates travelled to
APLA is expected to announce the venue for next year’s conference later on Tuesday.
It was rumoured at the event
On Sat, Apr 16, 2011 at 03:49, YunQiang Su <[email protected]> wrote:
> I got it.
>
> It used it like this
>
> #if !defined(__linux__)&&!defined(__APPLE__)
>         {
>                 int immed = 1;
>                 if (ioctl(pfd_, BIOCIMMEDIATE, &immed) < 0) {
>                         fprintf(stderr,
>                                 "warning: pcap/live (%s) couldn't set immed\n",
>                                 name());
>                         perror("ioctl(BIOCIMMEDIATE)");
>                 }
>         }
> #endif
>
> On Linux, BIOCIMMEDIATE is not used, but on BSD it is used, yet net/bpf.h is *not* included.
>
> Then there are 2 ways to fix this bug:
> 1. also disable the above code block on BSD.
> 2. include net/bpf.h on BSD platform.
>
> Which one is better?
> --
> YunQiang Su

I prefer attempting to use this piece of code first, because using BIOCIMMEDIATE could very probably give some advantages over not using it on kFreeBSD platforms. Here is a thread about it on the freebsd-arch mailing list:

--
Regards,
Aron Xu
18 May 2012 08:33 [Source: ICIS news]
KUALA LUMPUR (ICIS)--Thailand-based Indorama Ventures (IVL) is exploring an investment in a US-based cracker with multiple partners, a senior executive said on Friday.
“We are still working on it, evaluating ethylene pricing and investment opportunities with various partners. We will consider all possible situations,” said D.K. Agarwal, CEO of Indorama Polymers, an IVL subsidiary.
Agarwal spoke to ICIS on the sidelines of the Asia Petrochemical Industry Conference (APIC) in Kuala Lumpur.
A world-scale ethane cracker would have around 1.5m tonnes/year of ethylene capacity, he noted.
“But we require only 300,000 tonnes/year of ethylene for our
The figure includes IVL’s $795m buyout of US EO/EG producer Old World Industries on 1 April, said Agarwal.
The company also still needs to buy 450,000 tonnes/year of MEG in
On 29 February, Indorama announced at its annual investor meeting that it already completed a pre-feasibility study for a 1.3m tonne/year cracker
Forgive me if I am asking an obvious question (maybe I missed it in the docs somewhere?) but has anyone found a good way to organize their URLs in
I am using Jersey JAX-RS to build a web service. Now I need to get the URL of the request, including the port number if one exists.
So if my service runs ...
Is there any easy way to provide case-insensitive URLs in a JAX-RS web service? The goal of this is to produce a web service which is a "lenient acceptor."1
I imagine ...
I think this is not possible, but I need confirmation before throwing it away...
I have a GET REST endpoint with this pattern
/networks/{networkId}/publishers/{publisherId}/ratings
I'm working on designing a RESTFul service which provides CRUD operations for various domain objects. One such object is Person.
We have the following services:
GET /person/list?type=Infant
I am working with Jersey RESTful web services with a MySQL backend. Till now I have been passing parameters to the @GET method in the URL after the server starts.
I want to develop a REST API, such as:
and get params from the URL params, e.g. "meg" & "name"
I am using Jersey to develop a RESTful POST method
and it does not make it ...
I read some websites' REST APIs; most URLs of these APIs are just like &param=22, however in books teaching REST, most of the URLs are like .
What's the difference? ...&param=22
How do you encode a path parameter (not form-url-encoded) but just a single URL that's appended in the format:
public String method(@PathParam("url") String url) {
}
We are developing a restful api using jersey (1.9.1) and tomcat 5.5.
A given resource is identified with a urn and we would like to address a specific instance of that resource. ...
Inside a Jersey REST method I would like to forward to another website. How can I achieve that?
@Path("/")
public class News {
@GET
...
I searched this forum and found a similar topic but not the answer. Case: I am implementing a REST framework with Jersey. My server is Jetty. Link: the base of my implementation and problem. How do I create an embedded Jetty instance that uses Jersey? Alright, now the exception thrown:
2010-03-03 06:17:38,930 ERROR [httpSSLWorkerThread-80-3] StatisticsManager.getStats - Error updating Stream Service .. GET ...
Patterns from SOA Design Patterns by Thomas Erl, Part 1
The first draft of SOA Design Patterns had 60 patterns that were reviewed by more than 100 selected SOA specialists from all over the world. During the same period the draft was also subject to public review on soapatterns.org, and the SOA community was invited to contribute patterns of its own, ones its members had used and validated in production. The response led to a collection of 34 new patterns. The end result was a catalog of 85 individual and compound patterns plus 28 candidate patterns, which as of today remain subject to further review and validation by the SOA community. These patterns can be used as guidelines for solid SOA design and implementation. In this article we present three Inventory Governance patterns from chapter 10 of the book: Canonical Expression, Metadata Centralization, and Canonical Versioning.
When first designing a service inventory, there are steps that can be taken to ensure that the eventual effort and impact of having to govern the inventory is reduced. This chapter provides a set of patterns that supply some fundamental design-time solutions specifically with the inventory’s post-implementation evolution in mind.
Canonical Expression (275) refines the service contract in support of increased discoverability, which goes hand-in-hand with Metadata Centralization (280), a pattern that essentially establishes a service registry for the discovery of service contracts. These patterns are further complemented by Canonical Versioning (286), which requires the use of a consistent, inventory-wide versioning strategy.
All of these patterns are considered fundamental to inventory governance in that they support and are influenced by the Service Discoverability principle, which actually shapes service meta information in such a manner that it can be effectively discovered and interpreted.
Note - The governance patterns in this chapter focus on fundamental technical and design-related governance issues only. The upcoming title SOA Governance as part of this book series will provide a collection of additional technical and organizational best practices and patterns.
Canonical Expression
How can service contracts be consistently understood and interpreted?
Table 10.1
Profile summary for the Canonical Expression pattern.
Problem
Service contracts delivered or extended by different projects and at different times are naturally shaped by the architects and developers that work with them. The manner in which the service context and the service’s individual capabilities are defined and expressed through the contract syntax can therefore vary. Some may use descriptive and verbose conventions, while others may use terse and technical formats. Furthermore, the actual terms used to express common or similar capabilities may also vary.
Because services are positioned as enterprise resources, it is fully expected that other project teams will need to discover and interpret the contract in order to understand how the service can be used. Inconsistencies in how technical service contracts are expressed undermine these efforts by introducing a constant risk of misinterpretation (on a technical level). The proliferation of these inconsistencies furthermore places a convoluted face on a service inventory, increasing the effort to effectively navigate various contracts to study possible composition design options.
Solution
Standardized naming conventions can be applied to the delivery of all service contracts so as to ensure the consistent expression of service contexts and capabilities (Figure 10.1).
Figure 10.1
The expression of service contracts is aligned across services.
Application
A set of naming and functional expression conventions needs to be established as formal design standards. The realization of consistent contract design is then attained via the disciplined use of these conventions within common analysis and design processes.
An example of a standard associated with contract expression is the CRUD (create, read, update, delete) convention traditionally used to outfit components with a predictable set of methods. Entity services in particular often require these types of data processing functions, and using standardized verbs to express them supports the application of this pattern.
With Web services in particular, this pattern will tend to impact the design of WSDL definitions, as illustrated in Figure 10.2.
Figure 10.2
The WSDL definitions of the four services are affected by Canonical Expression.
Note - This pattern can be applied regardless of whether the service contract is decoupled.
Impacts
The relevance of Canonical Expression may at first appear trivial. However, when building a collection of services, especially within larger enterprise environments, a consistent functional expression significantly reduces tangible risk factors.
The primary requirement to successfully applying this pattern is the incorporation and enforcement of the required design standards. If a formal design process has already been established in support of Decoupled Contract (401) and Canonical Schema (158), then the effort to include a step dedicated to Canonical Expression is usually minor.
Note also that unlike Canonical Schema (158), which often must be limited to domain service inventories due to its governance impact, this pattern can more easily be positioned as an enterprise-wide standard. This benefits the enterprise as a whole as consistent expression is established across all domains.
Relationships
The naming conventions introduced by Canonical Expression influence how several other patterns are applied (as listed at the top of Figure 10.3). This pattern fundamentally supports the goals of Contract Centralization (409) and Metadata Centralization (280) by enhancing the intuitiveness of service identification and reuse.
Figure 10.3
Canonical Expression keeps the external expression of service contracts consistent, thereby affecting contract and context-related patterns.
Case Study Example
An early pilot version of the Inventory Processing service has been used for testing purposes. It consists of a Web service that was auto-generated using a development tool that derived the Web service contract from component class interfaces that exist as part of the custom legacy inventory management system.
Although this Web service has been valuable for various assessment purposes, once architects take a closer look at the actual Web service contract code, they detect some content that raises concerns:
- The Web service operations inherited the cryptic legacy component method names.
- Several of the Web service operations have input and output message schemas that are derived from input and output legacy method parameters that are too granular for message-based service interaction.
- There is no real concept of an inventory record because it was not supported within the legacy component API.
These and other issues prompt Cutit to move ahead with a formal design process that requires the definition of service contracts prior to the development of underlying logic. This design process is completed subsequent to a formal analysis and modeling process during which architects collaborate with business analysts to define conceptual service candidates. These candidates then form the basis of the physical service designs.
Architects and developers can now avoid irregularities and problematic characteristics within service contracts because they have gained control of the definition of these contracts.
Metadata Centralization
How can service metadata be centrally published and governed?
Table 10.2
Profile summary for the Metadata Centralization pattern.
Problem
When growing a service inventory and fostering fundamental qualities such as those realized by Service Normalization (131) and Logic Centralization (136), there is a constant risk of project teams inadvertently (or sometimes even intentionally) delivering new services or service capabilities that already exist or are already in development (Figure 10.4).
Figure 10.4
Without an awareness of the full range of existing and upcoming services, there is a constant risk that project teams will deliver service logic that already exists or is already in development.
This leads to undesirable results, most notably:
- the introduction of redundant service logic, which runs contrary to Logic Centralization (136)
- the introduction of overlapping service contexts, which runs contrary to Service Normalization (131)
- an overall less effective service inventory and technology architecture, bloated and convoluted by the added redundancy and denormalization and in need of additional governance effort
All of these characteristics can undermine an SOA initiative by reducing its strategic benefit potential.
Solution
A service registry is established as a central part of the surrounding infrastructure and is used by service owners and designers to:
- register existing services and capabilities
- register services and capabilities in development
As emphasized in discovery-related governance patterns, the registration process requires that discovery information be recorded in a highly descriptive and communicative manner so that it can be used by project teams to:
- locate and interpret existing services and learn about their functional contexts and boundaries
- locate and interpret service capabilities and learn about their invocation and interaction requirements
By providing a current and well-maintained registry of service contexts and capabilities, effective service discovery can be achieved (Figure 10.5).
Figure 10.5
The fundamental discovery process during which a human locates a potential service via a service registry representing the service inventory and then interprets the service to determine its suitability.
Note - Metadata Centralization is clearly a design pattern associated with the Service Discoverability design principle and the discovery of services in general. Why then is it not simply called Service Discovery?
Service discovery itself is a process that is carried out once an enterprise has successfully applied Metadata Centralization to its architecture and the Service Discoverability design principle to its services. The process of service discovery is therefore related to a set of SOA governance patterns documented separately in the upcoming title SOA Governance that will be released as part of this book series.
Application
The application of this pattern requires the following common steps:
- Regularly apply the Service Discoverability principle to all service contracts being modeled and designed.
- Use service profiles and supporting processes to standardize the documentation of service and capability metadata. For example, a common part of service profiles is a standard vocabulary used for keywords that are attached to the service registry records.
- Implement a reliable service registry product and position it as a standard part of the supporting infrastructure.
Finally, formal processes for the registration and discovery of services and capabilities need to be established.
Note - This pattern can be applied to a single service inventory or multiple domain inventories, depending on the ability of the service registry product to associate domains with service profile records. For a service profile template and descriptions of service discovery and interpretation processes, see Chapters 16 and 12, respectively, in SOA Principles of Service Design.
Impacts
Service registration and discovery processes are key success factors for the effective governance of a service inventory. If the processes are not respected or followed consistently by project teams or if the registry is not kept current, then the value potential of Metadata Centralization will severely diminish.
From a design perspective, however, this pattern will introduce the need for metadata standardization, as per the Service Discoverability principle. It will further require that metadata documentation and registration become part of the standard service delivery lifecycles.
There may further be a need to create a new organizational role in support of realizing Metadata Centralization. A person or a group would act as service registry custodian and assume responsibility for collecting the required metadata and maintaining the registry.
Relationships
Metadata Centralization essentially establishes a service registry, which is key to ensuring the long-term successful application of Logic Centralization (136) and Contract Centralization (409). If the correct services and their contracts can be effectively located (discovered), then the risk of inadvertently introducing redundant logic into an environment is reduced, further supporting Service Normalization (131).
Agnostic services represent the primary type of service for which metadata needs to be centralized for discovery purposes, which is why this pattern is especially relevant to services defined as a result of Entity Abstraction (175) and Utility Abstraction (168).
Figure 10.6
Metadata Centralization facilitates discovery and therefore relates to other patterns that rely on design-time awareness in order to be consistently applied.
Case Study Example
As explained in the Logic Centralization (136) example from Chapter 6, the original functional overlap between the Alleywood Areas and Region services could have gone undetected, resulting in the quality and integrity of the service inventory being negatively affected. For this reason, it was determined early on that a service registry would be required to support Service Normalization (131) and ensure the consistent application of Logic Centralization (136).
However, due to the decision to establish separate domain service inventories, architects struggle with the option of implementing a separate service registry for each inventory. Although it would continue to allow each group to govern their respective service collection independently, it would establish two different repositories.
It is anticipated that Alleywood and Tri-Fold services will need to interoperate. Those creating cross-inventory compositions will therefore need to issue separate queries in order to discover the required service capabilities.
The awkwardness of this governance architecture eventually prompts McPherson to establish a central enterprise service registry instead (Figure 10.7). This registry is governed by the McPherson Enterprise Group and allows Alleywood and Tri-Fold project teams to search each other's inventories.
Figure 10.7
A global registry that spans services across Alleywood and Tri-Fold inventories.
Canonical Versioning
How can service contracts within the same service inventory be versioned with minimal impact?
Table 10.3
Profile summary for the Canonical Versioning pattern.
Problem
When service contracts within the same service inventory are subjected to different versioning approaches and conventions, post-implementation contract-level disparity emerges, compromising interoperability and effective service governance (Figure 10.8). This can negatively impact design-time consumer development, runtime service access, service reusability, and the overall evolution of the service inventory as a whole.
Figure 10.8
Services that have been versioned differently become challenging to compose and interoperate and also difficult to interpret.
Solution
Service contracts within the same inventory are versioned according to the same conventions and as part of the same overall versioning strategy (Figure 10.9). This ensures a consistent governance path for each service, thereby preserving contract standardization and intra-inventory compatibility and interoperability.
Figure 10.9
When services are versioned according to the same overarching strategy, they can retain their original standardization and interoperability and are more easily understood by consumer designers.
Application
This pattern generally requires that a single versioning strategy be chosen, comprised of a series of rules and conventions that essentially become governance standards.
Canonical Versioning approaches can vary depending on the complexion of the enterprise, existing versioning or configuration management methodologies that may already be in place, and the nature of the overall governance strategy that may have also been established.
There are three common strategies that provide a baseline set of rules:
- Strict – Any compatible or incompatible change results in a new version of the service contract. This approach does not support backwards or forwards compatibility and is most commonly used when service contracts are shared between partner organizations and when changes to a contract can have legal implications.
-.
Note - The terms "backwards compatibility" and "forwards compatibility" are explained in the description for Compatible Change (465) in Chapter 16. For examples of each of these versioning strategies, see Chapters 20–23 in Web Service Contract Design and Versioning for SOA.
Impacts
There is the constant risk that project teams will continue to use their own versioning approaches, or rely too heavily on patterns like Concurrent Contracts (421), which allows them to simply add new contracts to an existing service.
The successful application of any versioning strategy will require strong support for the adherence to its rules and conventions to the extent that the chosen versioning approach becomes an inventory-wide standard on par with any other design standard. This introduces the need for a new organizational role that is tasked with enforcing the processes and syntactical characteristics that are defined as part of the strategy.
Relationships
Canonical Versioning essentially formalizes the application of Compatible Change (465), Version Identification (472), and Termination Notification (478), in that the overarching strategy established by this pattern will determine how and to what extent each of these more specific versioning patterns is applied.
The application of Metadata Centralization (280) results in a service registry that enables effective discovery of different contract versions and Canonical Expression (275) implements characteristics in service contracts that improve their legibility. Both of these patterns therefore aid the goals of Canonical Versioning.
Figure 10.10
Canonical Versioning is primarily related to other versioning patterns.
Note - Before continuing with this case study example, be sure to read up on the Policy Check service that was defined in the State Repository (242) case study example and then later positioned to support multiple inventories as part of the example for Cross-Domain Utility Layer (267).
Case Study Example
The FRC announced that due to new government legislation, it has revised some of its policies. This changes the policy data that was being made available electronically via its public Web services.
The Alleywood Policy Check service was originally positioned to shield the rest of the Alleywood service inventory from these types of changes by providing the sole access point for FRC policy data. Although its service logic can be augmented to accommodate changes to the FRC services, architects soon realize that they cannot prevent having to issue a new version of the Policy Check contract because the FRC has added new content and structure into their policy schemas.
Being the first time they’ve had to contend with a major versioning issue, the Alleywood team decides that some formal approach needs to be in place before they proceed. After some research into common versioning practices and further deliberation, they produce a versioning strategy comprised of a set of specific conventions and rules:
Version Identification (472) will be applied as follows:
- Version information will be expressed in major numbers displayed left of the decimal point and minor version numbers displayed to the right of the decimal point (e.g., "1.0").
- Minor and major contract version numbers will be expressed using the WSDL documentation element by displaying the word "Version" before the version number (e.g., <documentation>Version 1.0</documentation>)
- Major version numbers will be appended to the WSDL definition's target namespace and prefixed with a "v" as shown here:
Compatible Change (465) will be applied as follows:
- A compatible change in the WSDL definition increments the minor version number and does not change the WSDL definition target namespace.
- A compatible change in the XML Schema definition increments the minor version number and does not change the XML Schema or WSDL definition target namespaces.
- An incompatible change in the WSDL definition increments the major version number and forces a new WSDL target namespace value.
- An incompatible change in the XML Schema definition increments the major version number and forces a new target namespace value for both the XML Schema and WSDL definitions.
The previously described scenario results in a set of incompatible changes that requires that the major version number of the Policy Check service contract be incremented from 1.0 to 2.0.
This example demonstrates the beginning of the Policy Check XML Schema and WSDL definitions after this change has occurred:

<xsd:schema xmlns:
  <xsd:annotation>
    <xsd:documentation>
      Version 2.0
    </xsd:documentation>
  </xsd:annotation>
  ...
</xsd:schema>

<definitions name="Policy Check"
  targetNamespace=""
  xmlns=""
  xmlns:
  <documentation>
    Version 2.0
  </documentation>
  ...
</definitions>
Example 10.1
Fragments from the Policy Check Web service contract documents that show the effects of applying a versioning strategy.
The Alleywood architects acknowledge that defining the versioning approach is only the first step. In order for Canonical Versioning to be fully realized, these new rules and standards must be applied to any future service contracts that need to be versioned. This leads to the creation of a new process that is placed under the jurisdiction of the governance group.
#include <deal.II/grid/manifold_lib.h>
Manifold description for a polar coordinate system.
You can use this Manifold object to describe any sphere, circle, hypersphere or hyperdisc in two or three dimensions, both as a co-dimension one manifold descriptor or as co-dimension zero manifold descriptor, provided that the north and south poles (in three dimensions) and the center (in both two and three dimensions) are excluded from the Manifold (as they are singular points of the polar change of coordinates).
The two template arguments match the meaning of the two template arguments in Triangulation<dim, spacedim>, however this Manifold can be used to describe both thin and thick objects, and the behavior is identical when dim <= spacedim, i.e., the functionality of PolarManifold<2,3> is identical to PolarManifold<3,3>.
This class works by transforming points to polar coordinates (in both two and three dimensions), taking the average in that coordinate system, and then transforming the point back to Cartesian coordinates. In order for this manifold to work correctly, it cannot be attached to cells containing the center of the coordinate system or the north and south poles in three dimensions. These points are singular points of the coordinate transformation, and taking averages around these points does not make any sense.
Definition at line 63 of file manifold_lib.h.
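For orientation only, here is a minimal usage sketch that is not part of this reference page (the mesh setup and variable names are illustrative). It attaches a PolarManifold to an annulus, so the singular center point never lies inside a cell:

#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/manifold_lib.h>

using namespace dealii;

int main()
{
  Triangulation<2> triangulation;
  const Point<2>   center(0.0, 0.0);

  // An annulus around `center`; the center itself is not inside any cell,
  // which satisfies the restriction described above.
  GridGenerator::hyper_shell(triangulation, center, 0.5, 1.0);

  // New points created during refinement then follow the circular geometry.
  triangulation.set_all_manifold_ids(0);
  triangulation.set_manifold(0, PolarManifold<2>(center));

  triangulation.refine_global(2);
}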
The constructor takes the center of the spherical coordinate system. This class uses the pull_back and push_forward mechanism to transform from Cartesian to spherical coordinate systems, taking into account the periodicity of the base Manifold in two dimensions, while in three dimensions it takes the middle point and projects it along the radius using the average radius of the surrounding points.
Definition at line 122 of file manifold_lib.cc.
Make a clone of this Manifold object.
Implements Manifold< dim, spacedim >.
Definition at line 132 of file manifold_lib.cc.
Pull back the given point from the Euclidean space. Will return the polar coordinates associated with the point space_point. Only used when spacedim = 2.
Implements ChartManifold< dim, spacedim, spacedim >.
Definition at line 192 of file manifold_lib.cc.
Given a point in the spherical coordinate system, this method returns the Euclidean coordinates associated to the polar coordinates chart_point. Only used when spacedim = 3.
Definition at line 158 of file manifold_lib.cc.
Given a point in the spacedim dimensional Euclidean space, this method returns the derivatives of the function \(F\) that maps from the polar coordinate system to the Euclidean coordinate system. In other words, it is a matrix of size \(\text{spacedim}\times\text{spacedim}\).
This function is used in the computations required by the get_tangent_vector() function.
Refer to the general documentation of this class for more information.
Definition at line 231 of file manifold_lib.cc.
Helper function which returns the periodicity associated with this coordinate system, according to dim, chartdim, and spacedim.
Definition at line 141 of file manifold_lib.cc.
The center of the spherical coordinate system.
Definition at line 116 of file manifold_lib.h. | https://dealii.org/developer/doxygen/deal.II/classPolarManifold.html | CC-MAIN-2020-10 | en | refinedweb |
The more you do programming, the more you will hear about how you should test your code. You will hear about things like Extreme Programming and Test Driven Development (TDD). These are great ways to create quality code. But how does testing fit in with Jupyter? Frankly, it really doesn't. If you want to test your code properly, you should write your code outside of Jupyter and import it into cells if you need to. This allows you to use Python's unittest module or py.test to write tests for your code separately from Jupyter. This will also let you add on test runners like nose or put your code into a Continuous Integration setup using something like Travis CI or Jenkins.
However, all is not lost. You can do some testing of your Jupyter Notebooks even though you won't have the full flexibility that you would get from keeping your code separate. We will look at some ideas that you can use to do some basic testing with Jupyter.
Execute and Check
One popular method of “testing” a Notebook is to run it from the command line and send its output to a file. Here is the example syntax that you could use if you wanted to do the execution on the command line:
jupyter-nbconvert --to notebook --execute --output output_file_path input_file_path
Of course, we want to do this programmatically and we want to be able to capture errors. To do that, we will take our Notebook runner code from my exporting Jupyter Notebook article and re-use it. Here it is again for your convenience:
# notebook_runner.py
import nbformat
import os

from nbconvert.preprocessors import ExecutePreprocessor


def run_notebook(notebook_path):
    nb_name, _ = os.path.splitext(os.path.basename(notebook_path))
    dirname = os.path.dirname(notebook_path)

    with open(notebook_path) as f:
        nb = nbformat.read(f, as_version=4)

    proc = ExecutePreprocessor(timeout=600, kernel_name='python3')
    proc.allow_errors = True

    proc.preprocess(nb, {'metadata': {'path': '/'}})
    output_path = os.path.join(dirname, '{}_all_output.ipynb'.format(nb_name))

    with open(output_path, mode='wt') as f:
        nbformat.write(nb, f)

    errors = []
    for cell in nb.cells:
        if 'outputs' in cell:
            for output in cell['outputs']:
                if output.output_type == 'error':
                    errors.append(output)

    return nb, errors


if __name__ == '__main__':
    nb, errors = run_notebook('Testing.ipynb')
    print(errors)
You will note that I have updated the code to run a new Notebook. Let’s go ahead and create a Notebook that has two cells of code in it. After creating the Notebook, change the title to Testing and save it. That will cause Jupyter to save the file as Testing.ipynb. Now enter the following code in the first cell:
def add(a, b):
    return a + b

add(5, 6)
And enter the following code into cell #2:
1 / 0
Now you can run the Notebook runner code. When you do, you should get the following output:
[{'ename': 'ZeroDivisionError', 'evalue': 'integer division or modulo by zero', 'output_type': 'error', 'traceback': ['\x1b[0;31m\x1b[0m', '\x1b[0;31mZeroDivisionError\x1b[0mTrace']}]
This indicates that we have some code that outputs an error. In this case, we did expect that as this is a very contrived example. In your own code, you probably wouldn’t want any of your code to output an error. Regardless, this Notebook runner script isn’t enough to actually do a real test. You need to wrap this code with testing code. So let’s create a new file that we will save to the same location as our Notebook runner code. We will save this script with the name “test_runner.py”. Put the following code in your new script:
import unittest

# the Notebook runner code shown above, saved as notebook_runner.py
import notebook_runner as runner


class TestNotebook(unittest.TestCase):

    def test_runner(self):
        nb, errors = runner.run_notebook('Testing.ipynb')
        self.assertEqual(errors, [])


if __name__ == '__main__':
    unittest.main()
This code uses Python’s unittest module. Here we create a testing class with a single test function inside of it called test_runner. This function calls our Notebook runner and asserts that the errors list should be empty. To run this code, open up a terminal and navigate to the folder that contains your code. Then run the following command:
python test_runner.py
When I ran this, I got the following output:
F
======================================================================
FAIL: test_runner (__main__.TestNotebook)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_runner.py", line 10, in test_runner
    self.assertEqual(errors, [])
AssertionError: Lists differ: [{'output_type': u'error', 'ev... != []

First list contains 1 additional elements.
First extra element 0:
{'ename': 'ZeroDivisionError',
 'evalue': 'integer division or modulo by zero',
 'output_type': 'error',
 'traceback': ['\x1b[0;31m---------------------------------------------------------------------------\x1b[0m', '\x1b[0;31mZeroDivisionError\x1b[0m ' 'Trace']}

Diff is 677 characters long. Set self.maxDiff to None to see it.

----------------------------------------------------------------------
Ran 1 test in 1.463s

FAILED (failures=1)
This clearly shows that our code failed. If you remove the cell that has the divide by zero issue and re-run your test, you should get this:
.
----------------------------------------------------------------------
Ran 1 test in 1.324s

OK
By removing the cell (or just correcting the error in that cell), you can make your tests pass.
The py.test Plugin
I discovered a neat plugin you can use that appears to help you out by making the workflow a bit easier. I am referring to the py.test plugin for Jupyter, which you can learn more about here.
Basically it gives py.test the ability to recognize Jupyter Notebooks and check if the stored inputs match the stored outputs and also that Notebooks run without error. After installing the nbval package, you can run it with py.test like this (assuming you have py.test installed):
py.test --nbval
Frankly, you can actually run plain py.test with no extra options on the test file we already created, and it will use our test code as is. The main benefit of adding nbval is that you won't necessarily need to add wrapper code around Jupyter if you do so.
Testing within the Notebook
Another way to run tests is to just include some tests in the Notebook itself. Let’s add a new cell to our Testing Notebook that contains the following code:
import unittest

class TestNotebook(unittest.TestCase):

    def test_add(self):
        self.assertEqual(add(2, 3), 5)
This will test the add function in the first cell eventually. We could add a bunch of different tests here. For example, we might want to test what happens if we add a string type with a None type. But you may have noticed that if you try to run this cell, you get no output. The reason is that we aren't instantiating the class yet. We need to call unittest.main to do that. So while it's good to run that cell to get it into Jupyter's memory, we actually need to add one more cell with the following code:
unittest.main(argv=[''], verbosity=2, exit=False)
This code should be put in the last cell of your Notebook so it can run all the tests that you have added. It is basically telling Python to run with verbosity level of 2 and not to exit. When you run this code you should see the following output in your Notebook:
test_add (__main__.TestNotebook) ... ok

----------------------------------------------------------------------
Ran 1 test in 0.003s

OK
You can do something similar with Python’s doctest module inside of Jupyter Notebooks as well.
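As a quick illustration (a minimal sketch, not from the original article), a notebook cell like the following defines a function with doctests in its docstring and then runs them with doctest.testmod():

import doctest

def multiply(a, b):
    """Multiply two values.

    >>> multiply(2, 3)
    6
    >>> multiply('a', 3)
    'aaa'
    """
    return a * b

# runs the doctests found in the notebook's __main__ namespace
doctest.testmod(verbose=True)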
Wrapping Up
As I mentioned at the beginning, while you can test your code in your Jupyter Notebooks, it is actually much better if you just test your code outside of it. However there are workarounds and since some people like to use Jupyter for documentation purposes, it is good to have a way to verify that they are working correctly. In this chapter you learned how to run Notebooks programmatically and verify that the output was as you expected. You could enhance that code to verify certain errors are present if you wanted to as well.
You also learned how to use Python’s unittest module in your Notebook cells directly. This does offer some nice flexibility as you can now run your code all in one place. Use these tools wisely and they will serve you well.
Related Reading
- Testing Jupyter Notebooks
- The nbval package
- Testing and Debugging Jupyter Notebooks
- StackOverflow: Unit tests for functions in a Jupyter notebook? | https://www.blog.pythonlibrary.org/2018/10/16/testing-jupyter-notebooks/ | CC-MAIN-2020-10 | en | refinedweb |
Many times you may have wanted a single view or dashboard of all the GitHub issues created for your open source repositories. I have almost 150 repositories, and it becomes really hard to find which ones should be fixed first. In this post we will see how you can create one dashboard/report to view all your GitHub issues in a page using Azure Functions (3.x with TypeScript) and Azure CosmosDB.
Prerequisites:
You will need to have an Azure Subscription and a Github Account. If you do not have an Azure subscription you can simply create one with free trial. Free trial provides you with 12 months of free services. We will use Azure Function and CosmosDB to build this solution.
Step 1 : Create Resource Group
In order to deploy and manage the function app and CosmosDB, we first need to create a resource group. You can create one named "gh-issue-report".
Step 2: Create the Azure Cosmosdb Account
To store the GitHub issue data, we need to create a CosmosDB account. To create one, navigate to the Azure portal, click Create a Resource, search for Azure CosmosDB in the marketplace, and create the account as follows.
Step 3: Create the Function app
If you have read my previous blog post, I mentioned there how to create an Azure Function. Here is an image of the Function App I created.
Create Typescript Function:
As you can see, I have selected Node.js as the runtime stack, which will be used to run the function written in TypeScript. Open Visual Studio Code (make sure you have already installed VSCode with the Azure Functions core tools and extension). Press Ctrl + Shift + P to create a new Function Project and select TypeScript as the language.
Select the Timer trigger template, as we need the function to run every 5 minutes, and configure the cron expression (0 */5 * * * *) as well. (You can use a custom schedule if you prefer.) Give the function the name gitIssueReport. You will see the function getting created with the necessary files.
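The schedule ends up in the generated function's function.json timer binding. A minimal sketch (the binding name and surrounding fields may differ slightly in your generated project):

{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 */5 * * * *"
    }
  ]
}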
Step 4 : Add Dependencies to the Function App
Let's add the necessary dependencies to the project. We will use bluebird as a dependency to handle the requests, and the gh-issues-api library to interact with GitHub and get the necessary issues. You need to add the dependencies in the package.json file under dependencies.
"dependencies": { "@types/node": "^13.7.0", "bluebird": "^3.4.7", "gh-issues-api": "0.0.2" }
You can view the whole package.json here.
Step 5: Set Output Binding
Let’s set the output binding to CosmosDB to write the issues to the collection. You can set it by modifying the function.json as
{
  "type": "cosmosDB",
  "name": "issueReport",
  "databaseName": "gh-issues",
  "collectionName": "open-issues",
  "createIfNotExists": true,
  "connectionStringSetting": "gh-issue_DOCUMENTDB",
  "direction": "out"
}
Here the type cosmosDB denotes the database output binding, and you can see that the database name and collection are configured as well.
Step 6 : Code to Retrieve the Github Repository Issues
The actual logic of the function is as follows,
import Promise = require('bluebird');
import { GHRepository, IssueType, IssueState, IssueActivity, IssueActivityFilter, IssueLabelFilter, FilterCollection } from 'gh-issues-api';

export function index(context: any, myTimer: any) {
  var timeStamp = new Date().toISOString();

  if (myTimer.isPastDue) {
    context.log('Function trigger timer is past due!');
  }

  const repoName = process.env['repositoryName'];
  const repoOwner = process.env['repositoryOwner'];
  const labels = [
    'bug',
    'build issue',
    'investigation required',
    'help wanted',
    'enhancement',
    'question',
    'documentation',
  ];

  const repo = new GHRepository(repoOwner, repoName);
  var report = {
    name: repoName,
    at: new Date().toISOString()
  };

  context.log('Issues for ' + repoOwner + '/' + repoName, timeStamp);

  repo.loadAllIssues().then(() => {
    var promises = labels.map(label => {
      var filterCollection = new FilterCollection();
      filterCollection.label = new IssueLabelFilter(label);
      return repo.list(IssueType.All, IssueState.Open, filterCollection)
        .then(issues => report[label] = issues.length);
    });

    var last7days = new Date(Date.now() - 604800000)
    var staleIssuesFilter = new IssueActivityFilter(IssueActivity.Updated, last7days);
    staleIssuesFilter.negated = true;
    var staleFilters = new FilterCollection();
    staleFilters.activity = staleIssuesFilter;

    promises.push([
      repo.list(IssueType.Issue, IssueState.Open).then(issues => report['total'] = issues.length),
      repo.list(IssueType.PulLRequest, IssueState.Open).then(issues => report['pull_request'] = issues.length),
      repo.list(IssueType.All, IssueState.Open, staleFilters).then(issues => report['stale_7days'] = issues.length)
    ]);

    return Promise.all(promises);
  }).then(() => {
    var reportAsString = JSON.stringify(report);
    context.log(reportAsString);
    context.bindings.issueReport = reportAsString;
    context.done();
  });
}
You can see that the report document is written to CosmosDB through the output binding named issueReport.
Step 7: Deploy the Function
Deploy the Function App. You can deploy the function app to Azure by pressing Ctrl+Shift+P and selecting Deploy to Function App.
Step 8 : Verify/Install the Dependencies
Once the deployment is successful, navigate to the Azure portal and open the function app to make sure that everything looks good. If you don't see the dependencies, make sure to install them manually by navigating to the Kudu console of the function app.
Note : Make sure to stop the Function app before you head over to Kudu.
Install the bluebird package, and do the same for gh-issues-api.
Step 9: Set Environment Variables (Repository)
As you could see in the above code, we are reading two environment variables for the repository name and the repository owner, which are needed to fetch the issues information. You can set those variables on the Azure portal as follows.
Navigate to the Overview tab for your function and click Configuration. As you can see below I’ve configured those values.
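If you prefer the command line, the same app settings can be set with the Azure CLI; something along these lines, with the function app name and values replaced by your own:

az functionapp config appsettings set --name <function-app-name> --resource-group gh-issue-report --settings "repositoryName=<your-repo>" "repositoryOwner=<your-github-user>"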
Step 10: Verify the Output Binding
Just to make sure that our settings in function.json have been picked up, navigate to Functions, select the function, and check that all the binding values are correct. If not, create a new binding to the CosmosDB account you created as mentioned in Step 3 (instead of Twilio, select CosmosDB).
Step 11: Run and Test the Function
Now it's time to see the function app running and issues being reported. Navigate to your function app and click Run. You can see the function running as shown below.
Step 12: Check Live App Metrics
If you see any errors you can always navigate to Monitor section of the Function app and select Live App Metrics
Step 13: Verify the Data in CosmosDB
If everything goes well, you can navigate to Cosmosdb Account and open the collection with the data Explorer.
You will see that there are many documents inserted in the collection.
Now you can modify this function to retrieve the issues from all of your repositories and use the data stored in the cosmosdb collection to build a dashboard to show the issues with priority. Also you can make use of this post to send a notification to someone about the issue as well.
Hope this simple function will help someone to build a dashboard out of the data collected and make them more productive. Cheers!
Wikiversity:Curators
A Wikiversity curator is a user who has rights to manage content on Wikiversity, including delete, rollback, import from other wikis, and protect pages.
How does one become a curator?[edit]
Any Wikiversity participant willing to do a lot of dull and boring work for the community can become a curator. If you have a good editing record then you are likely to be trusted and granted the privileges and responsibilities of curatorship. If you are still interested in being a curator, here is the process:
What can curators do?[edit]
Deletion of pages[edit]
Curators may delete pages, but they do not have undelete rights.
Before you delete a page, read: Wikiversity:Welcome templates. See also: Wikiversity:Deletion policy.
Edit and move protection of pages[edit]
Curators can protect pages to prevent editing. There are two types of page protection: semi-protection, which prevents anonymous and new users from editing, and full protection, which prevents all non-custodians and non-curators from editing. A page can also be protected to prevent moving. Page protection can be lifted by any custodian or curator upon request, which may be submitted at Wikiversity:Request custodian action. Page protections and unprotections can be monitored by viewing the protection log.
Rollback[edit]
Curators have access to the rollback feature, which allows them to quickly revert the most recent edits made to a page by the last editor.
Import[edit]
Curators have access to the Import tool, to bring materials from Wikipedia, Wikibooks, Beta.Wikiversity, Wikiquote, and Wikisource.
How are curators expected to act?[edit]
Questioning the actions of curators[edit]
If you have a question about an action (page deletion, page protection, user block, editing MediaWiki namespace pages, violation of Wikiversity policy or some other action that does serious damage to the project) by a Wikiversity curator, the first thing to do is leave a message on that curator's user discussion page. Curators should always be able to explain how their actions support the Wikiversity project. If you cannot get satisfaction from discussion with the curator, follow up with their mentor. Actions of a curator are ultimately the responsibility of their mentor(s). Any custodian may remove a curator's rights, but first try to resolve all curator problems by discussion. Post a request at Wikiversity:Request custodian action if discussion does not successfully resolve the issue.
Notes[edit]
- Curatorship is a responsibility, not a right. While everyone is encouraged to apply for curatorship, the position is not suited for everyone. Please also note that in all instances not listed above, curators have no more power or weight than other users. "Curatorship is not a big deal."
- Curators should set their "user preferences" so as to provide for email contacts from other Wikiversity participants. If you do not use email, then you must make yourself easily available by some other means such as IRC chat. | https://en.wikiversity.org/wiki/Wikiversity:Curator | CC-MAIN-2020-10 | en | refinedweb |
The D Language Foundation is pleased to present version 2.084.0 of DMD, the reference D compiler. It’s available for download at dlang.org, where you can also find the full changelog. There are a few changes and new features to be found, as usual, along with 100 closed Bugzilla issues this time around.
Finer Control Over Run-time Checks
The new compiler flag -check is a feature that grew out of DIP 1006 and some internal discussions around its content. The flag allows overriding the default behavior of six specific categories of run-time checks by specifically turning them on or off:
assert – assertion checks
bounds – bounds checks
in – in contracts
invariant – class and struct invariants
out – out contracts
switch – default switch cases
For example, when compiling without -release, all run-time checks are enabled by default. To disable only assertion checks:
dmd -check=assert=off foo.d
This can be further refined with the new -checkaction flag, which determines how the program will respond when an assertion, bounds, or switch check fails. There are four options; the first three (D, C, and halt) are described below:
D – the default D behavior, which is to throw an Error to indicate an unrecoverable condition.
C – behave as a C program by calling the assertion failure function in the C runtime.
halt – execute a halt instruction to terminate the program.
Listed in the language documentation is a fourth option: context. This causes failed checks to throw an Error to indicate an unrecoverable condition, and also print the error context. It isn't present in this release, but is coming in DMD 2.085 (the online documentation is generated from the DMD master branch).
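For example (an illustrative invocation based on the option descriptions above, not taken from the release notes), making failed checks call the C assertion handler instead of throwing an Error looks like this:

dmd -checkaction=C foo.d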
Save Your Mixins
One of D's most popular and powerful features is the mixin statement, commonly referred to as string mixins to avoid confusion with template mixins. Unfortunately, given that string mixins can be composed from multiple compile-time function calls, they are also notoriously painful to debug as they grow in complexity. The new -mixin compiler option aims to do away with that pain.
Consider the following simple (contrived) example, which attempts to generate a function call with a string mixin:
import std.stdio;

void hello()
{
    writeln("Hello!");
}

void main()
{
    mixin(hello.stringof ~ "();");
}
Save as hello.d, compile with dmd hello, and you'll see an error along these lines:
hello.d-mixin-6(6): Error: function expected before (), not hello() of type void
The error does say exactly what the problem is, but even in this simple case it may require re-reading the message a few times before working out what it's actually saying. So let's recompile with the -mixin flag. It requires a file name. I've selected mixed.txt:
dmd -mixin=mixed.txt hello.d
Now we see this output:
mixed.txt(110): Error: function expected before (), not hello() of type void
See the difference? The error now refers to a line number in a file with the name we provided, rather than a line in the autogenerated hello.d-mixin-6 to which we couldn't refer. Open mixed.txt and navigate to line 110 to find the generated code, along with a comment at line 109:
// expansion at foo.d(6)
hello()();
And now the error is quite clear. Invoking .stringof on a function provides you with the function name including the parentheses, so there's no need to append parentheses to the result. We can now change the example so that it will compile:
void main()
{
    mixin(hello.stringof ~ ";");
}
Anyone making significant use of string mixins to generate code will undoubtedly find this feature useful. It will be particularly helpful for the maintainers of D-friendly IDEs and plugins to make the user experience more convenient.
New DUB features
DMD 2.084.0 ships with version 1.13.0 of DUB, the D build tool and package manager. It gets some new goodies with this release.
The new add command is a convenience to add dependencies to a project's package recipe. No need to worry about the syntax and whether the recipe is written using JSON or SDLang. Simply run dub with the add command, specifying one or more dub packages, and the recipe will be modified accordingly. For example, to add the BindBC bindings for the GLFW and OpenGL C libraries:
dub add bindbc-glfw bindbc-opengl
This will add the latest version of each library. This can be restricted to a specific version by appending = to the package name along with the normal DUB syntax for version specifications. This can also be used to change the version specification of an existing dependency.
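For example, something along these lines (an illustrative package and version specification; the quotes keep the shell from interpreting the > character) pins a dependency to a minor release series:

dub add "vibe-d=~>0.8.6"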
For those unfamiliar with DUB, executing dub run, or simply dub, in a directory containing a dub recipe will build a project according to the recipe and, if the project is an executable, run it once the build completes. Now, there are two new recipe directives that can be used to achieve more specialized goals. preRunCommands specifies commands to execute before the DUB target is run, and postRunCommands specifies commands to execute when the run is complete. See the DUB package recipe documentation for the JSON syntax or the SDLang syntax, under "Build Settings" in each, to see what they look like.
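For a rough idea of the shape (a minimal illustrative dub.json sketch; the echo commands are placeholders, and SDLang recipes have an equivalent form):

{
    "name": "myapp",
    "preRunCommands": ["echo about to run the target..."],
    "postRunCommands": ["echo the run has finished"]
}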
That’s Not All
Regarding the 100 closed Bugzilla issues, two points should be made.
First is that among many of the Pull Request merges that closed those issues, you’ll find Nicholas Wilson’s GitHub handle. Nicholas is, of course, the community member the D Language Foundation asked to serve as PR Manager, to be paid through a fundraising campaign. He’s been reviving old PRs and making sure new ones don’t go stale. This release is evidence that the initiative is paying off. And the icing on the cake is that the D community enabled us to meet our fundraising target well before our February 14th deadline. Thanks!
Second, a point relevant to the #dbugfix campaign. While I was disappointed that participation in nominating Bugzilla issues on Twitter and in the Forums dwindled to near zero, the previous nominations were not forgotten. The original goal was to fix at least two nominated issues per cycle, so several nominated bugs were never selected. However, thanks to Eduard Staniloiu and Razvan Nitu, two among that group are now closed and fixed in this release:
- #13300 – pure function ‘std.array.Appender!(T[]).Appender.ensureAddable’ cannot call impure function ‘test.T.__fieldPostBlit’,
- #18572 – AliasSeq default arguments are broken
I’m still happy to take #dbugfix nominations. If you’ve got a Bugzilla issue that’s bugging you, tweet a link to it with #dbugfix in the text, or start a thread in the General forum with #dbugfix in the title. I’ll make a note of it and, rather than counting votes and selecting two of the top five, see if I can find someone to do something about it. | https://dlang.org/blog/2019/01/05/dmd-2-084-0-has-arrived/ | CC-MAIN-2020-10 | en | refinedweb |
Swift Abstract Syntax Tree
Swift Abstract Syntax Tree is an initiative to parse Swift Programming Language in Swift itself. The output of this utility is the corresponding Abstract Syntax Tree (AST) of the source code.
The AST produced in this tool is intended to be consumed in various scenarios. For example, source-to-source transformations like swift-transform and linting tools like swift-lint.
Refactoring, code manipulation and optimization can leverage this AST as well.
Other ideas could be llvm-codegen or jvm-codegen (thinking about JSwift) that consume the AST and convert it into binary or bytecode. I have some proof-of-concepts, llswift-poc and jswift-poc, respectively. If you are interested in working on the codegens, send me an email at [email protected].
Swift Abstract Syntax Tree is part of Yanagiba Project. Yanagiba umbrella project is a toolchain of compiler modules, libraries, and utilities, written in Swift and for Swift.
A Work In Progress
The Swift Abstract Syntax Tree is still in active development. Though many features are implemented, some with limitations.
Pull requests for new features, issues and comments for existing implementations are welcomed.
Please also be advised that the Swift language is under rapid development, its syntax is not stable. So the details are subject to change in order to catch up as Swift evolves.
Requirements
Installation
Standalone Tool
To use it as a standalone tool, clone this repository to your local machine by
git clone
Go to the repository folder, run the following command:
swift build -c release
This will generate a swift-ast executable inside the .build/release folder.
Adding to swift Path (Optional)
It is possible to copy the swift-ast executable to the bin folder of your local Swift installation.
For example, if which swift outputs
/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
Then you can copy swift-ast to it by
cp .build/release/swift-ast /Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/swift-ast
Once you have done this, you can invoke swift-ast by calling swift ast in your terminal directly.
Embed Into Your Project
Add the swift-ast dependency to Package.swift:
// swift-tools-version:5.0

import PackageDescription

let package = Package(
  name: "MyPackage",
  dependencies: [
    .package(url: "", from: "0.19.9")
  ],
  targets: [
    .target(name: "MyTarget", dependencies: ["SwiftAST+Tooling"]),
  ],
  swiftLanguageVersions: [.v5]
)
Usage
Command Line
Simply append the path of the file to swift-ast. It will dump the AST to the console.
swift-ast path/to/Awesome.swift
Multiple files can be parsed with one call:
swift-ast path1/to1/foo.swift path2/to2/bar.swift ... path3/to3/main.swift
CLI Options
By default, the AST output is in a plain text format without indentation nor color highlight to the keywords. The output format can be changed by providing the following option:
-print-ast: with indentation and color highlight
-dump-ast: in a tree structure
-diagnostics-only: no output other than the diagnostics information
In addition, -github-issue can be provided as the first argument option, and the program will try to generate a GitHub issue template with pre-filled content for you.
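For example, to dump a single file as a tree structure:
swift-ast -dump-ast path/to/Awesome.swift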
Use AST in Your Code
Loop Through AST Nodes
import AST
import Parser
import Source

do {
  let sourceFile = try SourceReader.read(at: filePath)
  let parser = Parser(source: sourceFile)
  let topLevelDecl = try parser.parse()

  for stmt in topLevelDecl.statements {
    // consume statement
  }
} catch {
  // handle errors
}
Traverse AST Nodes
We provide a pre-order depth-first traversal implementation on all AST nodes. In order to use this, simply write your visitor by conforming to the ASTVisitor protocol with the visit methods for the AST nodes that are of interest to you. You can also write your own traversal implementations to override the default behaviors. Returning false from traverse and visit methods will stop the traverse.
class MyVisitor : ASTVisitor {
  func visit(_ ifStmt: IfStatement) throws -> Bool {
    // visit this if statement
    return true
  }
}

let myVisitor = MyVisitor()
let topLevelDecl = MyParse.parse()
myVisitor.traverse(topLevelDecl)
Development
Build & Run
Building the entire project can be done by simply calling:
make
This is equivalent to
swift build
The dev version of the tool can be invoked by:
.build/debug/swift-ast path/to/file.swift
Running Tests
Compile and run the entire tests by:
make test
Ryuichi Sai
- [email protected]
License
Swift Abstract Syntax Tree is available under the Apache License 2.0. See the LICENSE file for more info.
Releases
0.4.2 - 2018-04-01 00:21:56
- Newline escape in new lines
- Support Postfixes in KeypathExpression
- Lex long decimals
- Parsing additional Swift 4 features
- Allow dollar signed used as identifier body
- Introduce Identifier as a replacement of plain string
- Swift 4.1 migration
- Bug fixes
- code refactoring
0.4.1 - 2017-08-20 16:41:29
- Catch up with Bocho 0.1.0 release
0.4.0 - 2017-08-14 17:07:26
- Introduce libTooling
- Assign lexical parent
- Introduce sequence expression, and when enabled, fold it with hardcoded operator precedences
- Support shebang
- Other minor changes
0.3.5 - 2017-06-22 17:01:04
- Force label name has to be on the same line as break or continue statement
- swift-lint 0.1.3 integration and minor refactorings
0.3.4 - 2017-06-16 01:28:14
- The source code comments are now exposed through TopLevelDeclaration
0.3.3 - 2017-06-15 19:39:48
- Fix raw-value typed enum declaration by supporting boolean literals
0.3.2 - 2017-06-15 17:32:15
- Extended OptionalPattern
0.3.1 - 2017-06-11 16:23:32
- Updated to Swift 4 package manager description
0.3.0 - 2017-06-11 02:41:05
- Parse Swift 4 new language features
- Leverage Swift 4 feature to improve code quality of this repo
0.2.0 - 2017-05-23 13:10:37
- Dump AST to terminal
- Print AST to terminal
- Enhancements to diagnostic messages
- Improvements to test coverage
- Github issue generation
- Assign values for #file, #line, and #column
- Remove deprecated dynamic-type expression
0.1.4 - 2017-04-25 03:08:12
- Parse Swift 3.1
- ASTVisitor with default traversal implementation
0.1.3 - 2015-12-31 09:34:24
- ASTContext contains the original source file
0.1.2 - 2015-12-26 08:45:39
- Assign source code range to token
- Assign source code range to AST nodes
- Minor improvements to ImportDeclaration
- Introduce ImportKind
- Additional error handling
- Code quality enhancements
0.1.1 - 2015-12-25 04:09:27
Initial release with tokenizing source code and parsing import declaration.
Releasing in order to be picked up by SPM.
Creating C++ Application
A Unigine-based application can be implemented by means of C++ only, without using UnigineScript.
This article describes how to add logic to your project by using the C++ language. Code written in C++ is the same for all supported platforms: Windows and Linux. The difference is in the way of compiling the project.
See also#
- Articles in the Development for Different Platforms section to learn more on how to prepare the development environment, install the UNIGINE SDK and build the application for different platforms.
- Articles on Typical Architecture of a Unigine-Based Application and Engine Architecture for better understanding of the Unigine C++ API operation in the engine architecture.
- Examples located in the <UnigineSDK>/source/samples/Api and <UnigineSDK>/source/samples/App folders.
To create a new C++ project via the UNIGINE SDK Browser, specify the following parameters:
- API+IDE — choose C++ to start working with the C++ API.
This parameter depends on a platform:
- On Windows, you can create C++ Visual Studio 2015 project.
- On Linux you can create C++ GNU make project.
- Architecture - specify the architecture of your target platform.
- Precision - specify the precision. In this example we will use double precision.
- Click the Create New Project button. The project will appear in the projects list.
You can run your project by clicking the Run button.
Implementing C++ Logic#
In this section we will add logic to the empty C++ application project.
The following example shows how to rotate the material ball that was created by default in your project.
If you created the C++ project for Visual Studio:
- Choose your C++ project in the UNIGINE SDK Browser and click the Edit Code button to open the project in IDE.
If you created C++ GNU make project:
- On the created C++ project, click on the Other Actions button and then the Open Folder button.
- Go to the <YOUR PROJECT>\source\ folder and open the AppWorldLogic.cpp file with any plain text editor.
- Write (or copy) the following code in your project's AppWorldLogic.cpp file.
Source code (C++)
#include "AppWorldLogic.h"
#include "UnigineEditor.h"
#include "UnigineGame.h"

using namespace Unigine;

// define pointer to node
NodePtr node;

AppWorldLogic::AppWorldLogic()
{
}

AppWorldLogic::~AppWorldLogic()
{
}

int AppWorldLogic::init()
{
    // get the material ball node
    node = World::getNodeByName("material_ball");

    return 1;
}

int AppWorldLogic::shutdown()
{
    return 1;
}

int AppWorldLogic::update()
{
    // get the frame duration
    float ifps = Game::getIFps();

    // set the angle of rotation
    double angle = ifps * 90.0f;

    // set the angle to the transformation matrix
    Unigine::Math::dmat4 transform = node->getTransform();
    transform.setRotateZ(angle);

    // set new transformation to the node
    node->setTransform(node->getTransform() * transform);

    return 1;
}

int AppWorldLogic::postUpdate()
{
    return 1;
}

int AppWorldLogic::updatePhysics()
{
    return 1;
}

int AppWorldLogic::save(const Unigine::StreamPtr &stream)
{
    UNIGINE_UNUSED(stream);
    return 1;
}

int AppWorldLogic::restore(const Unigine::StreamPtr &stream)
{
    UNIGINE_UNUSED(stream);
    return 1;
}
If you use Visual Studio, do the following:
-. | https://developer.unigine.com/en/docs/future/code/cpp/application?rlang=cpp | CC-MAIN-2020-10 | en | refinedweb |
Rebuild Curve
From The Foundry MODO SDK wiki
This example shows how to create a command which can read curve attributes. It is made up of the command itself, as well as a visitor object which we use to scan a mesh layer for curve polygons.
import lx
import lxu.select
import lxu.command
import lxifc
from math import floor


def dSqr(vec1, vec2):
    '''Quick function to get the distance squared between two points'''
    dist = 0.0
    for i in range(3):
        dist += (vec2[i] - vec1[i])**2
    return dist


class CurveFinder(lxifc.Visitor):
    '''This visitor object will scan through selected polygons and create a
    dictionary of those polygons that are curves. The ID of the polygon will be
    the key, and the value will be the position of its root vertex. Later, we'll
    compare this position to the position of curve objects we get to make sure
    we're only affecting the curves the user intends.'''

    def __init__(self, polyAccessor, pointAccessor):
        self.poly = polyAccessor
        self.point = pointAccessor
        self.curves = {}

    def vis_Evaluate(self):
        if self.poly.Type() == lx.symbol.iPTYP_CURVE:
            p = self.poly.VertexByIndex(0)
            self.point.Select(p)
            pos = self.point.Pos()
            ID = self.poly.ID()
            self.curves[ID] = pos


class CurvePoints(lxu.command.BasicCommand):

    def __init__(self):
        '''After initializing the command, arguments are added one by one. These
        arguments can then be referenced later by the order they were added, so
        the first argument will be index 0, the second index 1, etc.'''
        lxu.command.BasicCommand.__init__(self)
        self.dyna_Add('mode', lx.symbol.sTYPE_BOOLEAN)
        self.dyna_Add('remove', lx.symbol.sTYPE_BOOLEAN)
        self.dyna_Add('count', lx.symbol.sTYPE_INTEGER)
        self.basic_SetFlags(2, lx.symbol.fCMDARG_OPTIONAL)
        self.dyna_Add('distance', lx.symbol.sTYPE_DISTANCE)
        self.basic_SetFlags(3, lx.symbol.fCMDARG_OPTIONAL)
        '''Adding an argument requires an internal string name for the argument,
        and a type for the argument. By default all arguments are required, but
        we can use the basic_SetFlags method to change that if we need to'''

    def fire(self, CMD):
        '''Call commands without lx.eval'''
        lx.service.Command().ExecuteArgString(-1, lx.symbol.iCTAG_NULL, CMD)

    def cmd_ArgUserName(self, index):
        '''This sets the user name for the argument of a given index. Ideally,
        these user names should exist in a message table inside of a config, but
        for this example it's not really all that important'''
        argNames = ['Use Spacing Distance',
                    'Remove Original Curve',
                    'Point Count',
                    'Point Spacing']
        return argNames[index]

    def cmd_UserName(self):
        '''This sets the user name for the command itself. Again, we should
        ideally use a message table, but we will just use a string for
        simplicity'''
        return 'Rebuild Curve'

    def cmd_DialogInit(self):
        '''If the user doesn't provide all required arguments, modo opens a
        dialog to ask for them. When this happens, cmd_DialogInit is called,
        which allows us to set default values for the command arguments'''
        self.attr_SetInt(0, 0)
        self.attr_SetInt(1, 1)
        self.attr_SetInt(2, 10)
        self.attr_SetFlt(3, .05)

    def cmd_ArgEnable(self, arg):
        '''When the command dialog opens and when the user interacts with it,
        modo calls this method for each of our arguments. That gives us a chance
        to read what the current values for arguments are, and choose if we need
        to disable an argument'''
        if self.attr_GetInt(0) and arg == 2:
            lx.throw(lx.symbol.e_CMD_DISABLED)
        elif not self.attr_GetInt(0) and arg == 3:
            lx.throw(lx.symbol.e_CMD_DISABLED)
        return lx.symbol.e_OK

    def cmd_Flags(self):
        '''cmd_Flags tells modo what kind of command we have. Since we're
        changing the state of the scene by altering geometry, we need to make
        this an undoable command'''
        return lx.symbol.fCMD_MODEL | lx.symbol.fCMD_UNDO

    def getCurve(self, layerScan):
        '''For organization, the command is split into two parts. This method
        grabs a curve object from the current mesh layer and returns it to the
        main execution method. If a curve is selected, we find which curve in
        the curve group is the same as the selected one, and return it.
        Otherwise, we return all curves we find.'''
        # Layerscan lets us get the mesh as an item object, and as an editable mesh.
        curveItem = layerScan.MeshItem(0)
        self.mesh = layerScan.MeshEdit(0)
        self.poly = self.mesh.PolygonAccessor()
        self.points = self.mesh.PointAccessor()

        '''Getting a curve group object from an item requires reading the
        crvGroup channel, casting the channel's value object as a value
        reference, and finally casting the object from the value reference as a
        curve group. The curve group is an item which contains all the curves
        within the provided item. We access an individual curve by using the
        ByIndex() method'''
        chanRead = self.scene.Channels(None, 0)
        crvChan = curveItem.ChannelLookup('crvGroup')
        crvValObj = chanRead.ValueObj(curveItem, crvChan)
        valRef = lx.object.ValueReference(crvValObj)
        curveGroup = lx.object.CurveGroup(valRef.GetObject())
        if curveGroup.Count() < 1:
            return None

        '''We use our visitor to scan through any selected polygons and check if
        they are curves. If so, we make a dictionary with those curve IDs and
        the root vertex position.'''
        curveVis = CurveFinder(self.poly, self.points)
        selM = lx.service.Mesh().ModeCompose('select', None)
        self.poly.Enumerate(selM, curveVis, 0)

        '''For every curve that the visitor found, we want to pair it with a
        curve object from the curve group. We'll make a new dictionary, where
        the curve's ID is the key again, but the value will be the curve object
        itself.'''
        retCurves = {}
        for selCurve in curveVis.curves:
            dist = 9**99
            '''There are no direct links to find which curve in the curve group
            belongs to a specific polygon in a mesh, so we need to compare the
            root position of the curve with the root vertex's position. We
            assume the curve object with a root position closest to a given root
            vertex is the same curve.'''
            for i in range(curveGroup.Count()):
                curve = curveGroup.ByIndex(i)
                pos = curve.Position()
                tmpD = dSqr(curveVis.curves[selCurve], pos)
                if tmpD < dist:
                    dist = tmpD
                    idx = i
            retCurves[selCurve] = curveGroup.ByIndex(idx)
        return retCurves

    def basic_Execute(self, msg, flags):
        '''Even though the point count and spacing distance arguments are both
        optional, one of them has to be set. We check that first.'''
        if self.dyna_IsSet(2) or self.dyna_IsSet(3):
            mode = self.attr_GetInt(0)
            '''Next we need to make sure we didn't get a spacing distance of 0
            or less, or a point count of less than 2'''
            if mode:
                dist = self.attr_GetFlt(3)
                if dist <= 0:
                    self.e_BADDIST()
                    return lx.symbol.e_ABORT
            else:
                ptCount = self.attr_GetInt(2)
                if ptCount < 2:
                    self.e_BADPTS()
                    return lx.symbol.e_ABORT

            '''Finally, we can get a layerscan object in order to check if there
            are curves in the mesh.'''
            self.scene = lxu.select.SceneSelection().current()
            ls = lx.service.Layer()
            lscan = ls.ScanAllocate(lx.symbol.f_LAYERSCAN_EDIT |
                                    lx.symbol.f_LAYERSCAN_MARKPOLYS)
            curves = self.getCurve(lscan)
            if not curves:
                self.e_NOCURVE()
                lscan.Apply()
                return lx.symbol.e_ABORT

            '''All our tests passed, and we have at least 1 curve object to rebuild'''
            killSwitch = self.attr_GetInt(1)
            for ID in curves:
                curve = curves[ID]
                '''First we get a list of all the positions we want to form our
                curve from. Then we generate points there using a Point object'''
                if mode:
                    ptCount = int(floor(curve.Length() / dist))
                pts = []
                for i in range(ptCount):
                    curve.SetLenFraction(float(i) / (ptCount - 1))
                    pts.append(curve.Position())
                ptList = []
                for i in pts:
                    ptList.append(self.points.New(i))

                '''We need to add the IDs from each newly created point into a
                storage object, and using that storage object we can create a
                polygon'''
                pBuf = lx.object.storage('p')
                pBuf.setSize(len(ptList))
                pBuf.set(ptList)
                polyID = self.poly.New(lx.symbol.iPTYP_CURVE, pBuf, len(ptList), 0)

                '''Finally we check if the user wants to remove the original
                curve. If they do, we need to remove each of the verts, and then
                the polygon'''
                if killSwitch:
                    self.poly.Select(ID)
                    for i in range(self.poly.VertexCount()):
                        self.points.Select(self.poly.VertexByIndex(i))
                        self.points.Remove()
                    self.poly.Remove()

            self.mesh.SetMeshEdits(lx.symbol.f_MESHEDIT_GEOMETRY)
            lscan.Apply()

    '''These error dialogs catch known cases of failures earlier in the process'''
    def e_NOCURVE(self):
        self.fire('dialog.setup error')
        self.fire('dialog.title {Rebuild Curve}')
        self.fire('dialog.msg {No curve was found in the current mesh layer.}')
        self.fire('dialog.open')

    def e_BADDIST(self):
        self.fire('dialog.setup error')
        self.fire('dialog.title {Rebuild Curve}')
        self.fire('dialog.msg {Distance must be greater than 0}')
        self.fire('dialog.open')

    def e_BADPTS(self):
        self.fire('dialog.setup error')
        self.fire('dialog.title {Rebuild Curve}')
        self.fire('dialog.msg {A minimum of 2 points must be used}')
        self.fire('dialog.open')


lx.bless(CurvePoints, "curve.rebuild")
apollo-link-state
Manage your local data with Apollo Client
Read the announcement post! 🎉 | Video tutorial by Sara Vieira | apollo-link-state on GitHub
⚠️ As of Apollo Client 2.5, local state handling is baked into the core, which means it is no longer necessary to use
apollo-link-state. For help with migrating from
apollo-link-state to Apollo Client 2.5, please refer to the migration guide.
Managing remote data from an external API is simple with Apollo Client, but where do we put all of our data that doesn't fit in that category? Nearly all apps need some way to centralize client-side data from user interactions and device APIs.
In the past, Apollo users stored their application's local data in a separate
Redux or MobX store. With
apollo-link-state, you no longer have to maintain a
second store for local state. You can instead use the Apollo Client cache as
your single source of truth that holds all of your local data alongside your
remote data. To access or update your local state, you use GraphQL queries and
mutations just like you would for data from a server.
When you use Apollo Client to manage your local state, you get all of the same benefits you know and love like caching and offline persistence without having to set these features up yourself. 🎉 On top of that, you also benefit from the Apollo DevTools for debugging and visibility into your store.
Quick start
To get started, install
apollo-link-state from npm:
npm install apollo-link-state --save
The rest of the instructions assume that you have already set up Apollo
Client in your application. After
you install the package, you can create your state link by calling
withClientState and passing in a resolver map. A resolver map describes how to
retrieve and update your local data.
Let's look at an example where we're using a GraphQL mutation to update whether our network is connected with a boolean flag:
import { withClientState } from 'apollo-link-state';

// This is the same cache you pass into new ApolloClient
const cache = new InMemoryCache(...);

const stateLink = withClientState({
  cache,
  resolvers: {
    Mutation: {
      updateNetworkStatus: (_, { isConnected }, { cache }) => {
        const data = {
          networkStatus: {
            __typename: 'NetworkStatus',
            isConnected,
          },
        };
        cache.writeData({ data });
        return null;
      },
    },
  },
});
To hook up your state link to Apollo Client, add it to the other links in your
Apollo Link chain. Your state link should be near the end of the chain, so that
other links like
apollo-link-error can also deal with local state requests.
However, it should go before
HttpLink so local queries and mutations are
intercepted before they hit the network. It should also go before
apollo-link-persisted-queries
if you are using persisted queries. Then, pass your link chain to the Apollo
Client constructor.
const client = new ApolloClient({ cache, link: ApolloLink.from([stateLink, new HttpLink()]), });
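As a concrete illustration of that ordering, one possible arrangement with an error link and persisted queries might look roughly like the following sketch. The '/graphql' URI and the empty error handler are placeholders, and it assumes apollo-link-error and apollo-link-persisted-queries are installed alongside apollo-link-http.

import { ApolloLink } from 'apollo-link';
import { onError } from 'apollo-link-error';
import { createPersistedQueryLink } from 'apollo-link-persisted-queries';
import { HttpLink } from 'apollo-link-http';

const errorLink = onError(({ graphQLErrors, networkError }) => {
  // log or report errors here; this link also sees local state requests
});

const client = new ApolloClient({
  cache,
  link: ApolloLink.from([
    errorLink,                   // before the state link, so it sees everything
    stateLink,                   // intercepts @client queries and mutations
    createPersistedQueryLink(),  // after the state link, per the ordering above
    new HttpLink({ uri: '/graphql' }),
  ]),
});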
With Apollo Boost
If you are using
apollo-boost, it already includes
apollo-link-state underneath the hood for you.
Instead of passing the
link property when instantiating Apollo Client, you pass in
clientState.
import ApolloClient from 'apollo-boost';

const client = new ApolloClient({
  clientState: {
    defaults: {
      isConnected: true,
    },
    resolvers: {
      Mutation: {
        updateNetworkStatus: (_, { isConnected }, { cache }) => {
          cache.writeData({ data: { isConnected } });
          return null;
        },
      },
    },
  },
});
How do we differentiate a request for local data from a request that hits our
server? In our query or mutation, we specify which fields are client-only with a
@client directive. This tells our network stack to retrieve or update the data
in the cache with our resolver map that we passed into our state link.
const UPDATE_NETWORK_STATUS = gql` mutation updateNetworkStatus($isConnected: Boolean) { updateNetworkStatus(isConnected: $isConnected) @client } `;
To fire off the mutation from your component, bind your mutation to your component via your favorite Apollo view layer integration just like you normally would. Here's what this would look like for React:
const WrappedComponent = graphql(UPDATE_NETWORK_STATUS, { props: ({ mutate }) => ({ updateNetworkStatus: isConnected => mutate({ variables: { isConnected } }), }), })(NetworkStatus);
What if we want to access our network status data from another component? Since
we don't know whether our
UPDATE_NETWORK_STATUS mutation will fire before we
try to access the data, we should guard against undefined values by providing a
default state as part of the state link initialization:
const stateLink = withClientState({ cache, resolvers: { Mutation: { /* same as above */ }, }, defaults: { networkStatus: { __typename: 'NetworkStatus', isConnected: true, }, }, });
This is the same as calling
writeData yourself with an initial value:
// Same as passing defaults above cache.writeData({ data: { networkStatus: { __typename: 'NetworkStatus', isConnected: true, }, }, });
How do we query the
networkStatus from our component? Similar to mutations,
just use a query and the
@client directive! With Apollo Link, we can combine
data sources, including your remote data, in one query.
In this example, the
articles field will either hit the cache or fetch from
our GraphQL endpoint, depending on our fetch policy. Since
networkStatus is
marked with
@client, we know that this is local data, so it will resolve from
the cache.
const GET_ARTICLES = gql` query { networkStatus @client { isConnected } articles { id title } } `;
To retrieve the data in your component, bind your query to your component via
your favorite Apollo view layer integration just like you normally would. In
this case, we'll use React as an example. React Apollo will attach both your
remote and local data to
props.data while tracking both loading and error
states. Once the query returns a result, your component will update reactively.
Updates to Apollo Client state via
apollo-link-state will also automatically
update any components using that data in a query.
const WrappedComponent = graphql(GET_ARTICLES, { props: ({ data: { loading, error, networkStatus, articles } }) => { if (loading) { return { loading }; } if (error) { return { error }; } return { loading: false, networkStatus, articles, }; }, })(Articles);
Now that you've seen how easy it is to manage your local state in Apollo Client,
let's dive deeper into how
apollo-link-state updates and queries your local data with defaults and resolvers.
Defaults
Often, you'll need to write an initial state to the cache so any components querying data before a mutation is triggered don't error out. To accomplish this, use the
defaults property for the default values you'd like to write to the cache and pass in your cache to
withClientState. Upon initialization,
apollo-link-state will immediately write those values to the cache with
cache.writeData before any operations have occurred.
The shape of your initial state should match how you plan to query it in your application.
const defaults = {
  todos: [],
  visibilityFilter: 'SHOW_ALL',
  networkStatus: {
    __typename: 'NetworkStatus',
    isConnected: false,
  },
};

const resolvers = { /* ... */ };

const cache = new InMemoryCache();

const stateLink = withClientState({
  resolvers,
  cache,
  defaults,
});
Sometimes you may need to reset the store in your application, for example when a user logs out. If you call
client.resetStore anywhere in your application, you will need to write your defaults to the store again.
apollo-link-state exposes a
writeDefaults function for you. To register your callback to Apollo Client, call
client.onResetStore and pass in
writeDefaults.
const cache = new InMemoryCache();

const stateLink = withClientState({ cache, resolvers, defaults });

const client = new ApolloClient({
  cache,
  link: stateLink,
});

const unsubscribe = client.onResetStore(stateLink.writeDefaults);
If you would like to unsubscribe this callback,
client.onResetStore returns an unsubscribe function. However, we don't recommend calling unsubscribe on your state link's
writeDefaults function unless you are planning on writing a new set of defaults to the cache.
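For example, if you later construct a state link with a different set of defaults and want store resets to write those instead, you could unsubscribe the old callback first. The newStateLink name below is hypothetical:

// stop re-writing the old defaults on future store resets
unsubscribe();

// register the new defaults instead
client.onResetStore(newStateLink.writeDefaults);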
Resolvers
Your resolvers are where all the magic happens to retrieve and update your local data in the Apollo cache. The resolver map is an object with resolver functions for each GraphQL object type. You can think of a GraphQL query or mutation as a tree of function calls for each field. These function calls resolve to data or another function call.
The four most important things to keep in mind about resolvers in
apollo-link-state are these:
- The cache is added to the context (the third argument to the resolver) for you so you can write and read data from the cache.
- The resolver should return an object with a
__typename property unless you've overridden the
dataIdFromObject function to not use
__typename for cache keys. This is necessary for Apollo Client to normalize the data in the cache properly.
- Resolver functions can return a promise if you need to perform asynchronous side effects.
- Query resolvers are only called on a cache miss. Since the first time you call the query will be a cache miss, you should return any default state from your resolver function.
If any of that sounds confusing, I promise it will be cleared up by the end of this section. Keep on reading! 😀
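For a first taste, here is a minimal resolver map that touches all four points above. The battery field names are made up for illustration, and navigator.getBattery() is the browser's Battery Status API:

const resolvers = {
  Query: {
    // 4. Query resolvers only run on a cache miss, so return a sensible
    //    default here so the first read doesn't fail.
    battery: () => ({ __typename: 'Battery', level: 1, charging: true }),
  },
  Mutation: {
    // 1. The cache arrives on the context (the third argument).
    // 3. An async function (i.e., one returning a promise) is allowed,
    //    which lets us call a device API.
    refreshBattery: async (_, args, { cache }) => {
      const status = await navigator.getBattery();
      // 2. Include __typename so Apollo Client can normalize the object.
      const data = {
        battery: {
          __typename: 'Battery',
          level: status.level,
          charging: status.charging,
        },
      };
      cache.writeData({ data });
      return null;
    },
  },
};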
Default resolvers
You don't have to specify resolver functions for every field, however. If the return value from the parent object has the same property names as the fields requested in the child object, you won't need to specify a resolver. This is called a default resolver.
const getUser = gql` query { user(id: 1) @client { name { last first } } } `;
For this query, you will need to specify a resolver for
Query.user in your
resolver map. If
Query.user returns an object with a name property that
corresponds to an object with last and first properties, you do not need to
specify any additional resolvers. GraphQL takes care of that for you!
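A resolver map for this query could therefore be as small as the following sketch; the name values are placeholders:

const resolvers = {
  Query: {
    // Only Query.user needs an explicit resolver; the nested name field is
    // handled by the default resolver because the returned object already
    // has last and first properties.
    user: (_, { id }) => ({
      __typename: 'User',
      id,
      name: {
        __typename: 'Name',
        first: 'Ada',
        last: 'Lovelace',
      },
    }),
  },
};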
Resolver signature
Just like resolvers on a GraphQL server, local resolvers follow the signature fieldName: (obj, args, context, info) => result. The four arguments are:
obj: The object containing the result returned from the resolver on the parent field (or the root query or mutation object for a top-level field). Don't worry about this one too much for
apollo-link-state.
args: An object containing all of the arguments passed into the field. For example, if you called a mutation with
updateNetworkStatus(isConnected: true), the
args object would be
{ isConnected: true }.
context: The context object, which is shared by all links in the Apollo Link chain. The most important thing to note here is that we've added the Apollo cache to the context for you, so you can manipulate the cache with
cache.writeData({}). If you want to set additional values on the context, you can set them from within your component or by using
apollo-link-context (see the sketch after this list).
info: Information about the execution state of the query. You will probably never have to use this one.
For further exploration, check out the
graphql-tools docs.
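As referenced in the context bullet above, here is a rough sketch of putting an extra value on the context with apollo-link-context so your local resolvers can read it; the isLoggedIn flag and the token lookup are made-up examples:

import { setContext } from 'apollo-link-context';

// Every operation passing through this link gets an isLoggedIn flag on its
// context, alongside the cache that apollo-link-state already adds.
const contextLink = setContext((request, previousContext) => ({
  ...previousContext,
  isLoggedIn: !!localStorage.getItem('token'),
}));

const link = ApolloLink.from([contextLink, stateLink, new HttpLink()]);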
Async resolvers
apollo-link-state supports asynchronous resolver functions. These functions
can either be
async functions or ordinary functions that return a Promise.
This can be useful for performing side effects like accessing a device API. If
you would like to hit a REST endpoint with your resolver, we recommend checking
out
apollo-link-rest
instead, which is a more complete solution for using your REST endpoints with
Apollo Client.
For React Native and most browser APIs, you should set up a listener in a component lifecycle method and pass in your mutation trigger function as the callback instead of using an async resolver; a sketch of that listener approach follows the camera roll example below. However, there are some cases where it's beneficial to perform the side effect within a resolver:
import { CameraRoll } from 'react-native';

const cameraRoll = {
  Query: {
    cameraRoll: async (_, { assetType }) => {
      try {
        const media = await CameraRoll.getPhotos({
          first: 20,
          assetType,
        });
        return {
          ...media,
          id: assetType,
          __typename: 'CameraRoll',
        };
      } catch (e) {
        console.error(e);
        return null;
      }
    },
  },
};
CameraRoll.getPhotos()
returns a Promise resolving to an object with an
edges property, which is an
array of camera node objects, and a
page_info property, which is an object
with pagination information. This is a great use case for GraphQL, since we can
filter down the return value to only the data that our components consume.
const GET_PHOTOS = gql` query getPhotos($assetType: String!) { cameraRoll(assetType: $assetType) @client { id edges { node { image { uri } location { latitude longitude } } } } } `;
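Returning to the advice above about lifecycle methods: for the network status example from the quick start, the listener approach could look roughly like this sketch. It assumes the updateNetworkStatus mutate prop bound earlier, and uses the browser's online/offline events; on React Native you would listen to NetInfo instead.

import React from 'react';

class NetworkStatusListener extends React.Component {
  constructor(props) {
    super(props);
    this.handleChange = this.handleChange.bind(this);
  }
  handleChange() {
    // updateNetworkStatus is the mutate prop bound in the earlier example
    this.props.updateNetworkStatus(navigator.onLine);
  }
  componentDidMount() {
    window.addEventListener('online', this.handleChange);
    window.addEventListener('offline', this.handleChange);
  }
  componentWillUnmount() {
    window.removeEventListener('online', this.handleChange);
    window.removeEventListener('offline', this.handleChange);
  }
  render() {
    return this.props.children || null;
  }
}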
Organizing your resolvers
For most applications, your map of resolvers will probably be too large to fit in
one file. To organize your resolver map, we recommend splitting it up by
feature, similar to the Redux ducks
pattern. Each feature will have
its own set of queries, mutations, and fields on its resolver map. Then, you can
merge all of your separate resolver maps into one object before you pass it to
withClientState.
import merge from 'lodash.merge';
import { withClientState } from 'apollo-link-state';

import currentUser from './resolvers/user';
import cameraRoll from './resolvers/camera';
import networkStatus from './resolvers/network';

const stateLink = withClientState({
  cache,
  resolvers: merge(currentUser, cameraRoll, networkStatus),
});
You can do the same thing with the
defaults option as well:
const currentUser = {
  defaults: {
    currentUser: null,
  },
  resolvers: { ... },
};

const cameraRoll = { defaults: { ... }, resolvers: { ... } };

const stateLink = withClientState({
  ...merge(currentUser, cameraRoll, networkStatus),
  cache,
});
Updating the cache
When you manage your local data with Apollo Client, your Apollo cache becomes
the single source of truth for all your local and remote data. To update and
read from the cache, you access it via the
context, which is the third
argument passed to your resolver function.
The Apollo cache API has several methods to assist you with updating and retrieving data. Let's walk through each of the methods and some common use cases for each one!
writeData
The easiest way to update the cache is with
cache.writeData, which allows you
to write data directly to the cache without passing in a query. Here's how
you use it in your resolver map for a simple update:
const filter = { Mutation: { updateVisibilityFilter: (_, { visibilityFilter }, { cache }) => { const data = { visibilityFilter, __typename: 'Filter' }; cache.writeData({ data }); }, }, };
cache.writeData also allows you to pass in an optional
id property to write
a fragment to an existing object in the cache. This is useful if you want to add
some client-side fields to an existing object in the cache.
The
id should correspond to the object's cache key. If you're using the
InMemoryCache and not overriding the
dataIdFromObject config property, your
cache key should be
__typename:id.
const user = { Mutation: { updateUserEmail: (_, { id, email }, { cache }) => { const data = { email }; cache.writeData({ id: `User:${id}`, data }); }, }, };
cache.writeData should cover most of your needs; however, there are some cases
where the data you're writing to the cache depends on the data that's already
there. In that scenario, you should use
readQuery or
readFragment, which
allows you to pass in a query or a fragment to read data from the cache. If you'd like to validate the shape of your data that you're writing to the cache, use
writeQuery or
writeFragment. We'll explain some of those use
cases below.
writeQuery and readQuery
Sometimes, the data you're writing to the cache depends on data that's already
in the cache; for example, you're adding an item to a list or setting a property
based on an existing property value. In that case, you should use
cache.readQuery to pass in a query and read a value from the cache before you
write any data. Let's look at an example where we add a todo to a list:
let nextTodoId = 0;
const todos = {
  resolvers: {
    Query: {
      // returning an empty array here provides the initial state that the
      // first cache.readQuery call needs
      todos: () => [],
    },
    Mutation: {
      addTodo: (_, { text }, { cache }) => {
        const query = gql`
          query GetTodos {
            todos @client {
              id
              text
              completed
            }
          }
        `;
        const previous = cache.readQuery({ query });
        const newTodo = {
          id: nextTodoId++,
          text,
          completed: false,
          __typename: 'TodoItem',
        };
        const data = {
          todos: previous.todos.concat([newTodo]),
        };
        // you can also do cache.writeData({ data }) here if you prefer
        cache.writeQuery({ query, data });
        return newTodo;
      },
    },
  },
};
In order to add our todo to the list, we need the todos that are currently in
the cache, which is why we call
cache.readQuery to retrieve them.
cache.readQuery will throw an error if the data isn't in the cache, so we need
to provide an initial state. This is why we're returning an empty array in our
Query.todos resolver.
To write the data to the cache, you can use either
cache.writeQuery or
cache.writeData. The only difference between the two is that
cache.writeQuery requires that you pass in a query to validate that the shape
of the data you're writing to the cache is the same as the shape of the data
required by the query. Under the hood,
cache.writeData automatically
constructs a query from the
data object you pass in and calls
cache.writeQuery.
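For instance, inside a resolver (where the cache is available on the context), the two calls in this sketch write the same thing for a simple top-level field; the filter value is just an example:

const query = gql`
  {
    visibilityFilter
  }
`;

// Explicit form: the query validates the shape of the data being written.
cache.writeQuery({ query, data: { visibilityFilter: 'SHOW_DONE' } });

// Shortcut form: writeData builds an equivalent query for you under the hood.
cache.writeData({ data: { visibilityFilter: 'SHOW_DONE' } });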
writeFragment and readFragment
cache.readFragment is similar to
cache.readQuery except you pass in a
fragment. This allows for greater flexibility because you can read from any
entry in the cache as long as you have its cache key. In contrast,
cache.readQuery only lets you read from the root of your cache.
Let's go back to our previous todo list example and see how
cache.readFragment
can help us toggle one of our todos as completed.
const todos = {
  resolvers: {
    Mutation: {
      toggleTodo: (_, variables, { cache }) => {
        const id = `TodoItem:${variables.id}`;
        const fragment = gql`
          fragment completeTodo on TodoItem {
            completed
          }
        `;
        const todo = cache.readFragment({ fragment, id });
        const data = { ...todo, completed: !todo.completed };
        // you can also do cache.writeData({ data, id }) here if you prefer
        cache.writeFragment({ fragment, id, data });
        return null;
      },
    },
  },
};
In order to toggle our todo, we need the todo and its status from the cache,
which is why we call
cache.readFragment and pass in a fragment to retrieve it.
The
id we're passing into
cache.readFragment refers to its cache key. If
you're using the
InMemoryCache and not overriding the
dataIdFromObject
config property, your cache key should be
__typename:id.
To write the data to the cache, you can use either
cache.writeFragment or
cache.writeData. The only difference between the two is that
cache.writeFragment requires that you pass in a fragment to validate that the
shape of the data you're writing to the cache node is the same as the shape of
the data required by the fragment. Under the hood,
cache.writeData
automatically constructs a fragment from the
data object and
id you pass in
and calls
cache.writeFragment.
@client directive
Adding the
@client directive to a field is how Apollo Link knows to resolve
your data from the Apollo cache instead of making a network request. This
approach is similar to other Apollo Link APIs, such as
apollo-link-rest, which
uses the
@rest directive to specify fields that should be fetched from a REST
endpoint. To clarify, the
@client and
@rest directives never modify the
shape of the result; rather, they specify where the data is coming from.
Combining local and remote data
What's really cool about using a
@client directive to specify client-side only
fields is that you can actually combine local and remote data in one query. In
this example, we're querying our user's name from our GraphQL server and their
cart from our Apollo cache. Both the local and remote data will be merged
together in one result.
const getUser = gql` query getUser($id: String) { user(id: $id) { id name cart @client { product { name id } } } } `;
Thanks to the power of directives and Apollo Link, you'll soon be able to
request
@client data,
@rest data, and data from your GraphQL server all in
one query! 🎉
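A single query combining all three sources could then look roughly like this sketch. The weather field, its REST path, and the Weather type are hypothetical, and the @rest field assumes apollo-link-rest is configured in the link chain:

const GET_DASHBOARD = gql`
  query getDashboard($userId: String!) {
    # local data, resolved from the Apollo cache
    networkStatus @client {
      isConnected
    }
    # data fetched from a REST endpoint via apollo-link-rest
    weather @rest(type: "Weather", path: "/weather") {
      temperature
    }
    # data from the GraphQL server
    user(id: $userId) {
      id
      name
    }
  }
`;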
Example apps
To get you started, here are some example apps:
If you have an example app that you'd like to be featured, please send us a PR!
😊 We'd love to hear how you're using
apollo-link-state.
Roadmap
While
apollo-link-state is ready to use in your Apollo application today,
there are a few enhancements we're looking to implement soon before a v1.0
release.
We want your experience managing local data in Apollo Client to be as seamless as possible, so please get in touch if there's a feature you're looking for that's not on this list. Additionally, if any of these topics interest you, we'd love to have you on board as a contributor!
Type checking
You may have noticed we haven't mentioned a client-side schema yet or any type
validation. That's because we haven't settled on how to approach this piece of
the puzzle yet. It is something we would like to tackle soon in order to enable
schema introspection and autocomplete with GraphiQL in Apollo DevTools, as well
as code generation with
apollo-codegen.
Having the same runtime type checking as a GraphQL server does is problematic
because the necessary modules from
graphql-js are very large. Including the
modules for defining a schema and validating a request against a schema would
significantly increase bundle size, so we'd like to avoid this approach. This is
why we don't send your server's entire schema over to Apollo Client.
Ideally, we'd like to perform type checking at build time to avoid increasing bundle size. This is comparable to the rest of the JavaScript ecosystem---for example, Flow and TypeScript types are both stripped out at build time.
We don't consider this a blocker for using
apollo-link-state, but it is a
feature we'd like to build before the v1.0 release. If you have any ideas on how
to achieve this, please open up an issue for discussion on the
apollo-link-state repo.
Helper components
Our goal for
apollo-link-state is to make your experience managing local data
in Apollo Client as seamless as possible. To accomplish this, we want to
minimize boilerplate as much as possible so you can be productive quickly.
We're nearly there; for example,
cache.writeData was added as a helper method
to reduce the boilerplate of
cache.writeQuery and
cache.writeFragment. We
think we can improve the boilerplate required for binding your query or mutation
to a component. For example, this is a common pattern for performing a
client-side mutation:
const WrappedComponent = graphql( gql` mutation updateStatus($text: String) { status(text: $text) @client } `, )(({ mutate }) => ( <button onClick={() => mutate({ variables: { text: 'yo' } })} /> ));
What if we could shorten it to something like this, so you don't have to write out the mutation details yourself, but it's still implemented as a mutation under the hood?
withClientMutations(({ writeField }) => ( <button onClick={() => writeField({ status: 'yo' })} /> ));
Once we find out how people are using
apollo-link-state, we can start to write
helper components for making common mutation and query patterns even easier.
These components will be separate from React Apollo and will live in another
package in the
apollo-link-state repo. If you'd like to help build them,
please get in touch! | https://www.apollographql.com/docs/link/links/state/?utm_source=khalil&utm_medium=article&utm_campaign=graphql_architectural_advantages | CC-MAIN-2020-10 | en | refinedweb |
This tutorial depends on step-31, step-55.
This program was contributed by Martin Kronbichler, Wolfgang Bangerth, and Timo Heister.
The work discussed here is also presented in the following publication: M. Kronbichler, T. Heister, W. Bangerth: High Accuracy Mantle Convection Simulation through Modern Numerical Methods, Geophysical Journal International, 2012, 191, 12-29. [DOI]
The continuation of development of this program has led to the much larger open source code ASPECT (see) which is much more flexible in solving many kinds of related problems.
This program does pretty much exactly what step-31 already does: it solves the Boussinesq equations that describe the motion of a fluid whose temperature is not in equilibrium. As such, all the equations we have described in step-31 still hold: we solve the same general partial differential equation (with only minor modifications to adjust for more realism in the problem setting), using the same finite element scheme, the same time stepping algorithm, and more or less the same stabilization method for the temperature advection-diffusion equation. As a consequence, you may first want to understand that program — and its implementation — before you work on the current one.
The difference between step-31 and the current program is that here we want to do things in parallel, using both the availability of many machines in a cluster (with parallelization based on MPI) as well as many processor cores within a single machine (with parallelization based on threads). This program's main job is therefore to introduce the changes that are necessary to utilize the availability of these parallel compute resources. In this regard, it builds on the step-40 program that first introduces the necessary classes for much of the parallel functionality, and on step-55 that shows how this is done for a vector-valued problem.
In addition to these changes, we also use a slightly different preconditioner, and we will have to make a number of changes that have to do with the fact that we want to solve a realistic problem here, not a model problem. The latter, in particular, will require that we think about scaling issues as well as what all those parameters and coefficients in the equations under consideration actually mean. We will discuss first the issues that affect changes in the mathematical formulation and solver structure, then how to parallelize things, and finally the actual testcase we will consider.
In step-31, we used the following Stokes model for the velocity and pressure field:
\begin{eqnarray*} -\nabla \cdot (2 \eta \varepsilon ({\mathbf u})) + \nabla p &=& -\rho \; \beta \; T \mathbf{g}, \\ \nabla \cdot {\mathbf u} &=& 0. \end{eqnarray*}
The right hand side of the first equation appears a wee bit unmotivated. Here's how things should really be. We need the external forces that act on the fluid, which we assume are given by gravity only. In the current case, we assume that the fluid does expand slightly for the purposes of this gravity force, but not enough that we need to modify the incompressibility condition (the second equation). What this means is that for the purpose of the right hand side, we can assume that \(\rho=\rho(T)\). An assumption that may not be entirely justified is that we can assume that the changes of density as a function of temperature are small, leading to an expression of the form \(\rho(T) = \rho_{\text{ref}} [1-\beta(T-T_{\text{ref}})]\), i.e., the density equals \(\rho_{\text{ref}}\) at reference temperature and decreases linearly as the temperature increases (as the material expands). The force balance equation then looks properly written like this:
\begin{eqnarray*} -\nabla \cdot (2 \eta \varepsilon ({\mathbf u})) + \nabla p &=& \rho_{\text{ref}} [1-\beta(T-T_{\text{ref}})] \mathbf{g}. \end{eqnarray*}
Now note that the gravity force results from a gravity potential as \(\mathbf g=-\nabla \varphi\), so that we can re-write this as follows:
\begin{eqnarray*} -\nabla \cdot (2 \eta \varepsilon ({\mathbf u})) + \nabla p &=& -\rho_{\text{ref}} \; \beta\; T\; \mathbf{g} -\rho_{\text{ref}} [1+\beta T_{\text{ref}}] \nabla\varphi. \end{eqnarray*}
The second term on the right is time independent, and so we could introduce a new "dynamic" pressure \(p_{\text{dyn}}=p+\rho_{\text{ref}} [1+\beta T_{\text{ref}}] \varphi=p_{\text{total}}-p_{\text{static}}\) with which the Stokes equations would read:
\begin{eqnarray*} -\nabla \cdot (2 \eta \varepsilon ({\mathbf u})) + \nabla p_{\text{dyn}} &=& -\rho_{\text{ref}} \; \beta \; T \; \mathbf{g}, \\ \nabla \cdot {\mathbf u} &=& 0. \end{eqnarray*}
This is exactly the form we used in step-31, and it was appropriate to do so because all changes in the fluid flow are only driven by the dynamic pressure that results from temperature differences. (In other words: Any contribution to the right hand side that results from taking the gradient of a scalar field have no effect on the velocity field.)
On the other hand, we will here use the form of the Stokes equations that considers the total pressure instead:
\begin{eqnarray*} -\nabla \cdot (2 \eta \varepsilon ({\mathbf u})) + \nabla p &=& \rho(T)\; \mathbf{g}, \\ \nabla \cdot {\mathbf u} &=& 0. \end{eqnarray*}
There are several advantages to this:
Remember that we want to solve the following set of equations:
\begin{eqnarray*} -\nabla \cdot (2 \eta \varepsilon ({\mathbf u})) + \nabla p &=& \rho(T) \mathbf{g}, \\ \nabla \cdot {\mathbf u} &=& 0, \\ \frac{\partial T}{\partial t} + {\mathbf u} \cdot \nabla T - \nabla \cdot \kappa \nabla T &=& \gamma, \end{eqnarray*}
augmented by appropriate boundary and initial conditions. As discussed in step-31, we will solve this set of equations by solving for a Stokes problem first in each time step, and then moving the temperature equation forward by one time interval.
The problem under consideration in this current section is with the Stokes problem: if we discretize it as usual, we get a linear system
\begin{eqnarray*} M \; X = \left(\begin{array}{cc} A & B^T \\ B & 0 \end{array}\right) \left(\begin{array}{c} U \\ P \end{array}\right) = \left(\begin{array}{c} F_U \\ 0 \end{array}\right) = F \end{eqnarray*}
which in this program we will solve with a FGMRES solver. This solver iterates until the residual of these linear equations is below a certain tolerance, i.e., until
\[ \left\| \left(\begin{array}{c} F_U - A U^{(k)} - B P^{(k)} \\ B^T U^{(k)} \end{array}\right) \right\| < \text{Tol}. \]
This does not make any sense from the viewpoint of physical units: the quantities involved here have physical units so that the first part of the residual has units \(\frac{\text{Pa}}{\text{m}} \text{m}^{\text{dim}}\) (most easily established by considering the term \((\nabla \cdot \mathbf v, p)_{\Omega}\) and considering that the pressure has units \(\text{Pa}=\frac{\text{kg}}{\text{m}\;\text{s}^2}\) and the integration yields a factor of \(\text{m}^{\text{dim}}\)), whereas the second part of the residual has units \(\frac{\text{m}^{\text{dim}}}{\text{s}}\). Taking the norm of this residual vector would yield a quantity with units \(\text{m}^{\text{dim}-1} \sqrt{\left(\text{Pa}\right)^2 + \left(\frac{\text{m}}{\text{s}}\right)^2}\). This, quite obviously, does not make sense, and we should not be surprised that doing so is eventually going to come back hurting us.
So why is this an issue here, but not in step-31? The reason back there is that everything was nicely balanced: velocities were on the order of one, the pressure likewise, the viscosity was one, and the domain had a diameter of \(\sqrt{2}\). As a result, while nonsensical, nothing bad happened. On the other hand, as we will explain below, things here will not be that simply scaled: \(\eta\) will be around \(10^{21}\), velocities on the order of \(10^{-8}\), pressure around \(10^8\), and the diameter of the domain is \(10^7\). In other words, the order of magnitude for the first equation is going to be \(\eta\text{div}\varepsilon(\mathbf u) \approx 10^{21} \frac{10^{-8}}{(10^7)^2} \approx 10^{-1}\), whereas the second equation will be around \(\text{div}{\mathbf u}\approx \frac{10^{-8}}{10^7} \approx 10^{-15}\). Well, so what this will lead to is this: if the solver wants to make the residual small, it will almost entirely focus on the first set of equations because they are so much bigger, and ignore the divergence equation that describes mass conservation. That's exactly what happens: unless we set the tolerance to extremely small values, the resulting flow field is definitely not divergence free. As an auxiliary problem, it turns out that it is difficult to find a tolerance that always works; in practice, one often ends up with a tolerance that requires 30 or 40 iterations for most time steps, and 10,000 for some others.
So what's a numerical analyst to do in a case like this? The answer is to start at the root and first make sure that everything is mathematically consistent first. In our case, this means that if we want to solve the system of Stokes equations jointly, we have to scale them so that they all have the same physical dimensions. In our case, this means multiplying the second equation by something that has units \(\frac{\text{Pa}\;\text{s}}{\text{m}}\); one choice is to multiply with \(\frac{\eta}{L}\) where \(L\) is a typical lengthscale in our domain (which experiments show is best chosen to be the diameter of plumes — around 10 km — rather than the diameter of the domain). Using these numbers for \(\eta\) and \(L\), this factor is around \(10^{17}\). So, we now get this for the Stokes system:
\begin{eqnarray*} -\nabla \cdot (2 \eta \varepsilon ({\mathbf u})) + \nabla p &=& \rho(T) \; \mathbf{g}, \\ \frac{\eta}{L} \nabla \cdot {\mathbf u} &=& 0. \end{eqnarray*}
The trouble with this is that the result is not symmetric any more (we have \(\frac{\eta}{L} \nabla \cdot\) at the bottom left, but not its transpose operator at the top right). This, however, can be cured by introducing a scaled pressure \(\hat p = \frac{L}{\eta}p\), and we get the scaled equations
\begin{eqnarray*} -\nabla \cdot (2 \eta \varepsilon ({\mathbf u})) + \nabla \left(\frac{\eta}{L} \hat p\right) &=& \rho(T) \; \mathbf{g}, \\ \frac{\eta}{L} \nabla \cdot {\mathbf u} &=& 0. \end{eqnarray*}
This is now symmetric. Obviously, we can easily recover the original pressure \(p\) from the scaled pressure \(\hat p\) that we compute as a result of this procedure.
In the program below, we will introduce a factor
EquationData::pressure_scaling that corresponds to \(\frac{\eta}{L}\), and we will use this factor in the assembly of the system matrix and preconditioner. Because it is annoying and error prone, we will recover the unscaled pressure immediately following the solution of the linear system, i.e., the solution vector's pressure component will immediately be unscaled to retrieve the physical pressure. Since the solver uses the fact that we can use a good initial guess by extrapolating the previous solutions, we also have to scale the pressure immediately before solving.
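To put numbers on this, using the representative values quoted above (a viscosity of about \(10^{21}\) and a plume length scale of about 10 km), the conversion between the scaled and the physical pressure is simply

\begin{eqnarray*} \frac{\eta}{L} \approx \frac{10^{21}\ \text{Pa}\,\text{s}}{10^{4}\ \text{m}} = 10^{17}\ \frac{\text{Pa}\,\text{s}}{\text{m}}, \qquad \hat p = \frac{L}{\eta}\, p, \qquad p = \frac{\eta}{L}\, \hat p. \end{eqnarray*}

In particular, a physical pressure on the order of \(10^8\ \text{Pa}\) corresponds to a scaled pressure \(\hat p\) on the order of \(10^{-9}\ \frac{\text{m}}{\text{s}}\), i.e., a quantity whose unit and magnitude are comparable to the velocities of around \(10^{-8}\ \frac{\text{m}}{\text{s}}\).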
In this tutorial program, we apply a variant of the preconditioner used in step-31. That preconditioner was built to operate on the system matrix \(M\) in block form such that the product matrix
\begin{eqnarray*} P^{-1} M = \left(\begin{array}{cc} A^{-1} & 0 \\ S^{-1} B A^{-1} & -S^{-1} \end{array}\right) \left(\begin{array}{cc} A & B^T \\ B & 0 \end{array}\right) \end{eqnarray*}
is of a form that Krylov-based iterative solvers like GMRES can solve in a few iterations. We then replaced the exact inverse of \(A\) by the action of an AMG preconditioner \(\tilde{A}\) based on a vector Laplace matrix, approximated the Schur complement \(S = B A^{-1} B^T\) by a mass matrix \(M_p\) on the pressure space and wrote an
InverseMatrix class for implementing the action of \(M_p^{-1}\approx S^{-1}\) on vectors. In the InverseMatrix class, we used a CG solve with an incomplete Cholesky (IC) preconditioner for performing the inner solves.
An observation one can make is that we use just the action of a preconditioner for approximating the velocity inverse \(A^{-1}\) (and the outer GMRES iteration takes care of the approximate character of the inverse), whereas we use a more or less exact inverse for \(M_p^{-1}\), realized by a fully converged CG solve. This appears unbalanced, but there's system to this madness: almost all the effort goes into the upper left block to which we apply the AMG preconditioner, whereas even an exact inversion of the pressure mass matrix costs basically nothing. Consequently, if it helps us reduce the overall number of iterations somewhat, then this effort is well spent.
That said, even though the solver worked well for step-31, we have a problem here that is a bit more complicated (cells are deformed, the pressure varies by orders of magnitude, and we want to plan ahead for more complicated physics), and so we'll change a few things slightly:
As a final note, let us remark that in step-31 we computed the Schur complement \(S=B A^{-1} B^T\) by approximating \(-\text{div}(-\eta\Delta)^{-1}\nabla \approx \frac 1{\eta} \mathbf{1}\). Now, however, we have re-scaled the \(B\) and \(B^T\) operators. So \(S\) should now approximate \(-\frac{\eta}{L}\text{div}(-\eta\Delta)^{-1}\nabla \frac{\eta}{L} \approx \left(\frac{\eta}{L}\right)^2 \frac 1{\eta} \mathbf{1}\). We use the discrete form of the right hand side of this as our approximation \(\tilde S\) to \(S\).
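Spelled out in terms of the pressure shape functions \(\varphi_i\) (and assuming, as this program does, a constant viscosity \(\eta\)), the discrete form referred to here is just a scaled pressure mass matrix:

\begin{eqnarray*} \tilde S_{ij} = \left(\frac{\eta}{L}\right)^2 \int_\Omega \frac{1}{\eta}\, \varphi_i \varphi_j \; dx = \frac{\eta}{L^2} \int_\Omega \varphi_i \varphi_j \; dx. \end{eqnarray*}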
Similarly to step-31, we will use an artificial viscosity for stabilization based on a residual of the equation. As a difference to step-31, we will provide two slightly different definitions of the stabilization parameter. For \(\alpha=1\), we use the same definition as in step-31:
\begin{eqnarray*} \nu_\alpha(T)|_K = \nu_1(T)|_K = \beta \|\mathbf{u}\|_{L^\infty(K)} h_K \min\left\{ 1, \frac{\|R_1(T)\|_{L^\infty(K)}}{c(\mathbf{u},T)} \right\} \end{eqnarray*}
where we compute the viscosity from a residual \(\|R_1(T)\|_{L^\infty(K)}\) of the equation, limited by a diffusion proportional to the mesh size \(h_K\) in regions where the residual is large (around steep gradients). This definition has been shown to work well for the given case, \(\alpha = 1\) in step-31, but it is usually less effective as the diffusion for \(\alpha=2\). For that case, we choose a slightly more readable definition of the viscosity,
\begin{eqnarray*} \nu_2(T)|_K = \min (\nu_h^\mathrm{max}|_K,\nu_h^\mathrm{E}|_K) \end{eqnarray*}
where the first term gives again the maximum dissipation (similarly to a first order upwind scheme),
\begin{eqnarray*} \nu^\mathrm{max}_h|_K = \beta h_K \|\mathbf {u}\|_{L^\infty(K)} \end{eqnarray*}
and the entropy viscosity is defined as
\begin{eqnarray*} \nu^\mathrm{E}_h|_K = c_R \frac{h_K^2 \|R_\mathrm{2,E}(T)\|_{L^\infty(K)}} {\|E(T) - \bar{E}(T)\|_{L^\infty(\Omega)} }. \end{eqnarray*}
This formula is described in the article J.-L. Guermond, R. Pasquetti, & B. Popov, 2011. Entropy viscosity method for nonlinear conservation laws, J. Comput. Phys., 230, 4248–4267. Compared to the case \(\alpha = 1\), the residual is computed from the temperature entropy, \(E(T) = \frac12 (T-T_m)^2\) with \(T_m\) an average temperature (we choose the mean between the maximum and minimum temperature in the computation), which gives the following formula
\begin{eqnarray*} R_\mathrm{E}(T) = \frac{\partial E(T)}{\partial t} + (T-T_\mathrm{m}) \left(\mathbf{u} \cdot \nabla T - \kappa \nabla^2 T - \gamma\right). \end{eqnarray*}
The denominator in the formula for \(\nu^\mathrm{E}_h|_K\) is computed as the global deviation of the entropy from the space-averaged entropy \(\bar{E}(T) = \int_\Omega E(T) d\mathbf{x}/\int_\Omega d\mathbf{x}\). As in step-31, we evaluate the artificial viscosity from the temperature and velocity at two previous time levels, in order to avoid a nonlinearity in its definition.
The above definitions of the viscosity are simple, but depend on two parameters, namely \(\beta\) and \(c_R\). For the current program, we want to go about this issue a bit more systematically for both parameters in the case \(\alpha =1\), using the same line of reasoning with which we chose two other parameters in our discretization, \(c_k\) and \(\beta\), in the results section of step-31. In particular, remember that we would like to make the artificial viscosity as small as possible while keeping it as large as necessary. In the following, let us describe the general strategy one may follow. The computations shown here were done with an earlier version of the program and so the actual numerical values you get when running the program may no longer match those shown here; that said, the general approach remains valid and has been used to find the values of the parameters actually used in the program.
To see what is happening, note that below we will impose boundary conditions for the temperature between 973 and 4273 Kelvin, and initial conditions are also chosen in this range; for these considerations, we run the program without internal heat sources or sinks, and consequently the temperature should always be in this range, barring any internal oscillations. If the minimal temperature drops below 973 Kelvin, then we need to add stabilization by either increasing \(\beta\) or decreasing \(c_R\).
As we did in step-31, we first determine an optimal value of \(\beta\) by using the "traditional" formula
\begin{eqnarray*} \nu_\alpha(T)|_K = \beta \|\mathbf{u}\|_{L^\infty(K)} h_K, \end{eqnarray*}
which we know to be stable if only \(\beta\) is large enough. Doing a couple hundred time steps (on a coarser mesh than the one shown in the program, and with a different viscosity that affects transport velocities and therefore time step sizes) in 2d will produce the following graph:
As can be seen, values \(\beta \le 0.05\) are too small whereas \(\beta=0.052\) appears to work, at least to the time horizon shown here. As a remark on the side, there are at least two questions one may wonder about here: First, what happens at the time when the solution becomes unstable? Looking at the graphical output, we can see that with the unreasonably coarse mesh chosen for these experiments, around time \(t=10^{15}\) seconds the plumes of hot material that have been rising towards the cold outer boundary and have then spread sideways are starting to get close to each other, squeezing out the cold material in-between. This creates a layer of cells into which fluid flows from two opposite sides and flows out toward a third, apparently a scenario that then produces these instabilities without sufficient stabilization. Second: In step-31, we used \(\beta=0.015\cdot\text{dim}\); why does this not work here? The answer to this is not entirely clear – stabilization parameters are certainly known to depend on things like the shape of cells, for which we had squares in step-31 but have trapezoids in the current program. Whatever the exact cause, we at least have a value of \(\beta\), namely 0.052 for 2d, that works for the current program. A similar set of experiments can be made in 3d where we find that \(\beta=0.078\) is a good choice — neatly leading to the formula \(\beta=0.026 \cdot \textrm{dim}\).
With this value fixed, we can go back to the original formula for the viscosity \(\nu\) and play with the constant \(c_R\), making it as large as possible in order to make \(\nu\) as small as possible. This gives us a picture like this:
Consequently, \(c_R=0.1\) would appear to be the right value here. While this graph has been obtained for an exponent \(\alpha=1\), in the program we use \(\alpha=2\) instead, and in that case one has to re-tune the parameter (and observe that \(c_R\) appears in the numerator and not in the denominator). It turns out that \(c_R=1\) works with \(\alpha=2\).
The standard Taylor-Hood discretization for Stokes, using the \(Q_{k+1}^d \times Q_k\) element, is globally conservative, i.e. \(\int_{\partial\Omega} \mathbf n \cdot \mathbf u_h = 0\). This can easily be seen: the weak form of the divergence equation reads \((q_h, \textrm{div}\; \mathbf u_h)=0, \forall q_h\in Q_h\). Because the pressure space does contain the function \(q_h=1\), we get
\begin{align*} 0 = (1, \textrm{div}\; \mathbf u_h)_\Omega = \int_\Omega \textrm{div}\; \mathbf u_h = \int_{\partial\Omega} \mathbf n \cdot \mathbf u_h \end{align*}
by the divergence theorem. This property is important: if we want to use the velocity field \(u_h\) to transport along other quantities (such as the temperature in the current equations, but it could also be concentrations of chemical substances or entirely artificial tracer quantities) then the conservation property guarantees that the amount of the quantity advected remains constant.
That said, there are applications where this global property is not enough. Rather, we would like that it holds locally, on every cell. This can be achieved by using the space \(Q_{k+1}^d \times DGP_k\) for discretization, where we have replaced the continuous space of tensor product polynomials of degree \(k\) for the pressure by the discontinuous space of the complete polynomials of the same degree. (Note that tensor product polynomials in 2d contain the functions \(1, x, y, xy\), whereas the complete polynomials only have the functions \(1,x,y\).) This space turns out to be stable for the Stokes equation.
Because the space is discontinuous, we can now in particular choose the test function \(q_h(\mathbf x)=\chi_K(\mathbf x)\), i.e. the characteristic function of cell \(K\). We then get in a similar fashion as above
\begin{align*} 0 = (q_h, \textrm{div}\; \mathbf u_h)_\Omega = (1, \textrm{div}\; \mathbf u_h)_K = \int_K \textrm{div}\; \mathbf u_h = \int_{\partial K} \mathbf n \cdot \mathbf u_h, \end{align*}
showing the conservation property for cell \(K\). This clearly holds for each cell individually.
There are good reasons to use this discretization. As mentioned above, this element guarantees conservation of advected quantities on each cell individually. A second advantage is that the pressure mass matrix we use as a preconditioner in place of the Schur complement becomes block diagonal and consequently very easy to invert. However, there are also downsides. For one, there are now more pressure variables, increasing the overall size of the problem, although this doesn't seem to cause much harm in practice. More importantly, though, the fact that now the divergence integrated over each cell is zero when it wasn't before does not guarantee that the divergence is pointwise smaller. In fact, as one can easily verify, the \(L_2\) norm of the divergence is larger for this than for the standard Taylor-Hood discretization. (However, both converge at the same rate to zero, since it is easy to see that \(\|\textrm{div}\; u_h\|= \|\textrm{div}\; (u-u_h)\|= \|\textrm{trace}\; \nabla (u-u_h)\|\le \|\nabla (u-u_h)\|={\cal O}(h^{k+2})\).) It is therefore not a priori clear that the error is indeed smaller just because we now have more degrees of freedom.
Given these considerations, it remains unclear which discretization one should prefer. Consequently, we leave that up to the user and make it a parameter in the input file which one to use.
In the program, we will use a spherical shell as domain. This means that the inner and outer boundary of the domain are no longer "straight" (by which we usually mean that they are bilinear surfaces that can be represented by the FlatManifold class). Rather, they are curved and it seems prudent to use a curved approximation in the program if we are already using higher order finite elements for the velocity. Consequently, we will introduce a member variable of type MappingQ that denotes such a mapping (step-10 and step-11 introduce such mappings for the first time) and that we will use in all computations on cells that are adjacent to the boundary. Since this only affects a relatively small fraction of cells, the additional effort is not very large and we will take the luxury of using a quartic mapping for these cells.
Running convection codes in 3d with significant Rayleigh numbers requires a lot of computations — in the case of whole earth simulations on the order of one or several hundred million unknowns. This can obviously not be done with a single machine any more (at least not in 2010 when we started writing this code). Consequently, we need to parallelize it. Parallelization of scientific codes across multiple machines in a cluster of computers is almost always done using the Message Passing Interface (MPI). This program is no exception to that, and it follows the general spirit of the step-17 and step-18 programs in this regard, though in practice it borrows more from step-40 in which we first introduced the classes and strategies we use when we want to completely distribute all computations, and step-55 that shows how to do that for vector-valued problems: including, for example, splitting the mesh up into a number of parts so that each processor only stores its own share plus some ghost cells, and using strategies where no processor potentially has enough memory to hold the entries of the combined solution vector locally. The goal is to run this code on hundreds or maybe even thousands of processors, at reasonable scalability.
MPI is a rather awkward interface to program with. It is a semi-object oriented set of functions, and while one uses it to send data around a network, one needs to explicitly describe the data types because the MPI functions insist on getting the address of the data as
void* objects rather than deducing the data type automatically through overloading or templates. We've already seen in step-17 and step-18 how to avoid almost all of MPI by putting all the communication necessary into either the deal.II library or, in those programs, into PETSc. We'll do something similar here: like in step-40 and step-55, deal.II and the underlying p4est library are responsible for all the communication necessary for distributing the mesh, and we will let the Trilinos library (along with the wrappers in namespace TrilinosWrappers) deal with parallelizing the linear algebra components. We have already used Trilinos in step-31, and will do so again here, with the difference that we will use its parallel capabilities.
Trilinos consists of a significant number of packages, implementing basic parallel linear algebra operations (the Epetra package), different solver and preconditioner packages, and on to things that are of less importance to deal.II (e.g., optimization, uncertainty quantification, etc). deal.II's Trilinos interfaces encapsulate many of the things Trilinos offers that are of relevance to PDE solvers, and provides wrapper classes (in namespace TrilinosWrappers) that make the Trilinos matrix, vector, solver and preconditioner classes look very much the same as deal.II's own implementations of this functionality. However, as opposed to deal.II's classes, they can be used in parallel if we give them the necessary information. As a consequence, there are two Trilinos classes that we have to deal with directly (rather than through wrappers), both of which are part of Trilinos' Epetra library of basic linear algebra and tool classes:
The Epetra_Comm class is an abstraction of an MPI "communicator", i.e., it describes how many and which machines can communicate with each other. Each distributed object, such as a sparse matrix or a vector for which we may want to store parts on different machines, needs to have a communicator object to know how many parts there are, where they can be found, and how they can be accessed.
In this program, we only really use one communicator object – based on the MPI variable
MPI_COMM_WORLD – that encompasses all processes that work together. It would be perfectly legitimate to start a process on \(N\) machines but only store vectors on a subset of these by producing a communicator object that only encompasses this subset of machines; there is really no compelling reason to do so here, however.
The IndexSet class is used to describe which elements of a vector or which rows of a matrix should reside on the current machine that is part of a communicator. To create such an object, you need to know (i) the total number of elements or rows, (ii) the indices of the elements you want to store locally. We will set up these
partitioners in the
BoussinesqFlowProblem::setup_dofs function below and then hand it to every parallel object we create.
Unlike PETSc, Trilinos makes no assumption that the elements of a vector need to be partitioned into contiguous chunks. At least in principle, we could store all elements with even indices on one processor and all odd ones on another. That's not very efficient, of course, but it's possible. Furthermore, the elements of these partitionings need not necessarily be mutually exclusive. This is important because when postprocessing solutions, we need access to all locally relevant or at least the locally active degrees of freedom (see the module on Parallel computing with multiple processors using distributed memory for a definition, as well as the discussion in step-40). Which elements the Trilinos vector considers as locally owned is not important to us then. All we care about is that it stores those elements locally that we need.
There are a number of other concepts relevant to distributing the mesh to a number of processors; you may want to take a look at the Parallel computing with multiple processors using distributed memory module and step-40 or step-55 before trying to understand this program. The rest of the program is almost completely agnostic about the fact that we don't store all objects completely locally. There will be a few points where we have to limit loops over all cells to those that are locally owned, or where we need to distinguish between vectors that store only locally owned elements and those that store everything that is locally relevant (see this glossary entry), but by and large the amount of heavy lifting necessary to make the program run in parallel is well hidden in the libraries upon which this program builds. In any case, we will comment on these locations as we get to them in the program code.
The second strategy to parallelize a program is to make use of the fact that most computers today have more than one processor that all have access to the same memory. In other words, in this model, we don't explicitly have to say which pieces of data reside where – all of the data we need is directly accessible and all we have to do is split processing this data between the available processors. We will then couple this with the MPI parallelization outlined above, i.e., we will have all the processors on a machine work together to, for example, assemble the local contributions to the global matrix for the cells that this machine actually "owns" but not for those cells that are owned by other machines. We will use this strategy for four kinds of operations we frequently do in this program: assembly of the Stokes matrix and right hand side, assembly of the matrix that forms the Stokes preconditioner, assembly of the temperature matrices, and assembly of the right hand side of the temperature system.
All of these operations essentially look as follows: we need to loop over all cells for which cell->subdomain_id() equals the index our machine has within the communicator object used for all communication (i.e., MPI_COMM_WORLD, as explained above). The test we are actually going to use for this, and which describes in a concise way why we test this condition, is cell->is_locally_owned(). On each such cell we need to assemble the local contributions to the global matrix or vector, and then we have to copy each cell's contribution into the global matrix or vector. Note that the first part of this (the loop) defines a range of iterators on which something has to happen. The second part, assembly of local contributions, is what takes the majority of CPU time in this sequence of steps, and is a typical example of things that can be done in parallel: each cell's contribution is entirely independent of all other cells' contributions. The third part, copying into the global matrix, must not happen in parallel since we are modifying one object and so several threads can not at the same time read an existing matrix element, add their contribution, and write the sum back into memory without danger of producing a race condition.
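Written out as (serial) code, the three parts just described might look like the following sketch. The function name and arguments are illustrative and the actual computation of the cell matrix is elided; the program itself performs these steps through the WorkStream class introduced next.

```cpp
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>

#include <vector>

using namespace dealii;

// Serial sketch of the assemble-and-copy workflow for locally owned cells.
template <int dim>
void assemble_on_owned_cells(const DoFHandler<dim>           &dof_handler,
                             const AffineConstraints<double> &constraints,
                             TrilinosWrappers::SparseMatrix  &system_matrix)
{
  const unsigned int dofs_per_cell = dof_handler.get_fe().dofs_per_cell;

  FullMatrix<double>                   cell_matrix(dofs_per_cell, dofs_per_cell);
  std::vector<types::global_dof_index> local_dof_indices(dofs_per_cell);

  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned()) // skip ghost and artificial cells
      {
        cell_matrix = 0;
        // ... compute the local contribution of this cell into cell_matrix ...

        cell->get_dof_indices(local_dof_indices);
        constraints.distribute_local_to_global(cell_matrix,
                                               local_dof_indices,
                                               system_matrix);
      }

  // Send contributions to off-processor rows to their owners.
  system_matrix.compress(VectorOperation::add);
}
```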
deal.II has a class that is made for exactly this workflow: WorkStream, first discussed in step-9 and step-13. Its use is also extensively documented in the module on Parallel computing with multiple processors accessing shared memory (in the section on the WorkStream class) and we won't repeat here the rationale and detailed instructions laid out there, though you will want to read through this module to understand the distinction between scratch space and per-cell data. Suffice it to mention that we need the following:
BoussinesqFlowProblem::local_assemble_stokes_system,
BoussinesqFlowProblem::local_assemble_stokes_preconditioner,
BoussinesqFlowProblem::local_assemble_temperature_matrix, and
BoussinesqFlowProblem::local_assemble_temperature_rhs functions in the code below. These four functions can all have several instances running in parallel at the same time.
BoussinesqFlowProblem::copy_local_to_global_stokes_system,
BoussinesqFlowProblem::copy_local_to_global_stokes_preconditioner,
BoussinesqFlowProblem::copy_local_to_global_temperature_matrix, and
BoussinesqFlowProblem::copy_local_to_global_temperature_rhs functions.
We will comment on a few more points in the actual code, but in general their structure should be clear from the discussion in Parallel computing with multiple processors accessing shared memory.
The underlying technology for WorkStream identifies "tasks" that need to be worked on (e.g. assembling local contributions on a cell) and schedules these tasks automatically to available processors. WorkStream creates these tasks automatically, by splitting the iterator range into suitable chunks.
The setup for this program is mildly reminiscent of the problem we wanted to solve in the first place (see the introduction of step-31): convection in the earth mantle. As a consequence, we choose the following data, all of which appears in the program in units of meters and seconds (the SI system) even if we list them here in other units. We do note, however, that these choices are essentially still only exemplary, and not meant to result in a completely realistic description of convection in the earth mantle: for that, more and more difficult physics would have to be implemented, and several other aspects are currently missing from this program as well. We will come back to this issue in the results section again, but state for now that providing a realistic description is a goal of the ASPECT code in development at the time of writing this.
As a reminder, the equations we want to solve are these:
\begin{eqnarray*} -\nabla \cdot (2 \eta \varepsilon ({\mathbf u})) + \nabla \left( \frac{\eta}{L} \hat p\right) &=& \rho(T) \mathbf{g}, \\ \frac{\eta}{L} \nabla \cdot {\mathbf u} &=& 0, \\ \frac{\partial T}{\partial t} + {\mathbf u} \cdot \nabla T - \nabla \cdot \kappa \nabla T &=& \gamma, \end{eqnarray*}
augmented by boundary and initial conditions. We then have to choose data for the following quantities:
The domain is an annulus (in 2d) or a spherical shell (in 3d) with inner and outer radii that match those of the earth: the total radius of the earth is 6371km, with the mantle starting at a depth of around 35km (just under the solid earth crust composed of continental and oceanic plates) and extending to a depth of 2890km (where the outer earth core starts). The radii are therefore \(R_0=(6371-2890)\text{km}, R_1=(6371-35)\text{km}\). This domain is conveniently generated using the GridGenerator::hyper_shell() function.
At the interface between crust and mantle, the temperature is between 500 and 900 degrees Celsius, whereas at the bottom of the mantle it is around 4000 degrees Celsius (see, for example, this Wikipedia entry). In Kelvin, we therefore choose \(T_0=(4000+273)\text{K}\), \(T_1=(500+273)\text{K}\) as boundary conditions at the inner and outer edge.
In addition to this, we also have to specify some initial conditions for the temperature field. The real temperature field of the earth is quite complicated as a consequence of the convection that has been going on for more than four billion years – in fact, it is the properties of this temperature distribution that we want to explore with programs like this. As a consequence, we don't really have anything useful to offer here, but we can hope that if we start with something and let things run for a while that the exact initial conditions don't matter that much any more — as is in fact suggested by looking at the pictures shown in the results section below. The initial temperature field we use here is given in terms of the radius by
\begin{align*} s &= \frac{\|\mathbf x\|-R_0}{R_1-R_0}, \\ \varphi &= \arctan \frac{y}{x}, \\ \tau &= s + \frac 15 s(1-s) \sin(6\varphi) q(z), \\ T(\mathbf x) &= T_0(1-\tau) + T_1\tau, \end{align*}
where
\begin{align*} q(z) = \left\{ \begin{array}{ll} 1 & \text{in 2d} \\ \max\{0, \cos(\pi |z/R_1|)\} & \text{in 3d} \end{array} \right. . \end{align*}
This complicated function is essentially a perturbation of a linear profile between the inner and outer temperatures. In 2d, the function \(\tau=\tau(\mathbf x)\) looks like this (I got the picture from this page):
The point of this profile is that if we had used \(s\) instead of \(\tau\) in the definition of \(T(\mathbf x)\) then it would simply be a linear interpolation. \(\tau\) has the same function values as \(s\) on the inner and outer boundaries (zero and one, respectively), but it stretches the temperature profile a bit depending on the angle and the \(z\) value in 3d, producing an angle-dependent perturbation of the linearly interpolating field. We will see in the results section that this is an entirely unphysical temperature field (though it will make for interesting images) as the equilibrium state for the temperature will be an almost constant temperature with boundary layers at the inner and outer boundary.
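For reference, the formulas above translate into code along the following lines. This is only a sketch of what the corresponding function in the EquationData namespace computes; the constants \(R_0, R_1, T_0, T_1\) are passed in explicitly here rather than taken from that namespace.

```cpp
#include <deal.II/base/numbers.h>
#include <deal.II/base/point.h>

#include <algorithm>
#include <cmath>

using namespace dealii;

// Sketch of the initial temperature profile defined above. R0, R1 are the
// inner/outer radii in meters, T0, T1 the corresponding boundary temperatures
// in Kelvin.
template <int dim>
double initial_temperature(const Point<dim> &p,
                           const double R0, const double R1,
                           const double T0, const double T1)
{
  const double r   = p.norm();
  const double s   = (r - R0) / (R1 - R0);
  const double phi = std::atan2(p[1], p[0]);

  // q(z): no modulation in 2d, cosine cut-off in the z direction in 3d
  const double q =
    (dim == 2) ? 1.0
               : std::max(0.0,
                          std::cos(numbers::PI * std::abs(p[dim - 1] / R1)));

  const double tau = s + 0.2 * s * (1. - s) * std::sin(6. * phi) * q;

  return T0 * (1. - tau) + T1 * tau;
}
```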
The right hand side of the temperature equation contains the rate of internal heating \(\gamma\). The earth does heat naturally through several mechanisms: radioactive decay, chemical separation (heavier elements sink to the bottom, lighter ones rise to the top; the countercurrents dissipate energy equal to the loss of potential energy by this separation process); heat release by crystallization of liquid metal as the solid inner core of the earth grows; and heat dissipation from viscous friction as the fluid moves.
Chemical separation is difficult to model since it requires modeling mantle material as multiple phases; it is also a relatively small effect. Crystallization heat is even more difficult since it is confined to areas where temperature and pressure allow for phase changes, i.e., a discontinuous process. Given the difficulties in modeling these two phenomena, we will neglect them.
The other two are readily handled and, given the way we scaled the temperature equation, lead to the equation
\[ \gamma(\mathbf x) = \frac{\rho q+2\eta \varepsilon(\mathbf u):\varepsilon(\mathbf u)} {\rho c_p}, \]
where \(q\) is the radiogenic heating in \(\frac{W}{kg}\), and the second term in the numerator is viscous friction heating. \(\rho\) is the density and \(c_p\) is the specific heat. The literature provides the following approximate values: \(c_p=1250 \frac{J}{kg\; K}, q=7.4\cdot 10^{-12}\frac{W}{kg}\). The other parameters are discussed elsewhere in this section.
We neglect one internal heat source, namely adiabatic heating here, which will lead to a surprising temperature field. This point is commented on in detail in the results section below.
For the velocity we choose as boundary conditions \(\mathbf{v}=0\) at the inner radius (i.e., the fluid sticks to the earth core) and \(\mathbf{n}\cdot\mathbf{v}=0\) at the outer radius (i.e., the fluid flows tangentially along the bottom of the earth crust). Neither of these is physically overly correct: certainly, on both boundaries, fluids can flow tangentially, but they will incur a shear stress through friction against the medium at the other side of the interface (the metallic core and the crust, respectively). Such a situation could be modeled by a Robin-type boundary condition for the tangential velocity; in either case, the normal (vertical) velocity would be zero, although even that is not entirely correct since continental plates also have vertical motion (see, for example, the phenomenon of post-glacial rebound). To make things worse for the tangential velocity, the medium on the other side is in motion as well, so the shear stress would, in the simplest case, be proportional to the velocity difference, leading to a boundary condition of the form
\begin{align*} \mathbf{n}\cdot [2\eta \varepsilon(\mathbf v)] &= s \mathbf{n} \times [\mathbf v - \mathbf v_0], \\ \mathbf{n} \cdot \mathbf v &= 0, \end{align*}
with a proportionality constant \(s\). Rather than going down this route, however, we go with the choice of zero (stick) and tangential flow boundary conditions.
As a side note of interest, we might also have chosen tangential flow conditions on both inner and outer boundary. That has a significant drawback, however: it leaves the velocity not uniquely defined. The reason is that all velocity fields \(\hat{\mathbf v}\) that correspond to a solid body rotation around the center of the domain satisfy \(\mathrm{div}\; \varepsilon(\hat{\mathbf v})=0, \mathrm{div} \;\hat{\mathbf v} = 0\), and \(\mathbf{n} \cdot \hat{\mathbf v} = 0\). As a consequence, if \(\mathbf v\) satisfies the equations and boundary conditions, then so does \(\mathbf v + \hat{\mathbf v}\). That's certainly a situation we would like to avoid. The traditional way to work around this is to pick an arbitrary point on the boundary and call this your fixed point by choosing the velocity to be zero in all components there. (In 3d one has to choose two points.) Since this program isn't meant to be too realistic to begin with, we avoid this complication by simply fixing the velocity along the entire interior boundary.
To first order, the gravity vector always points downward. The question for a body as big as the earth is just: where is "up". The naive answer of course is "radially inward, towards the center of the earth". So at the surface of the earth, we have
\[ \mathbf g = -9.81 \frac{\text{m}}{\text{s}^2} \frac{\mathbf x}{\|\mathbf x\|}, \]
where \(9.81 \frac{\text{m}}{\text{s}^2}\) happens to be the average gravity acceleration at the earth surface. But in the earth interior, the question becomes a bit more complicated: at the (bary-)center of the earth, for example, you have matter pulling equally hard in all directions, and so \(\mathbf g=0\). In between, the net force is described as follows: let us define the gravity potential by
\[ \varphi(\mathbf x) = \int_{\text{earth}} -G \frac{\rho(\mathbf y)}{\|\mathbf x-\mathbf y\|} \ \text{d}y, \]
then \(\mathbf g(\mathbf x) = -\nabla \varphi(\mathbf x)\). If we assume that the density \(\rho\) is constant throughout the earth, we can produce an analytical expression for the gravity vector (don't try to integrate the above equation directly – it leads to elliptic integrals; a simpler way is to notice that \(-\Delta\varphi(\mathbf x) = -4\pi G \rho \chi_{\text{earth}}(\mathbf x)\) and to solve this partial differential equation in all of \({\mathbb R}^3\) exploiting the radial symmetry):
\[ \mathbf g(\mathbf x) = \left\{ \begin{array}{ll} -\frac{4}{3}\pi G \rho \|\mathbf x\| \frac{\mathbf x}{\|\mathbf x\|} & \text{for} \ \|\mathbf x\|<R_1, \\ -\frac{4}{3}\pi G \rho R^3 \frac{1}{\|\mathbf x\|^2} \frac{\mathbf x}{\|\mathbf x\|} & \text{for} \ \|\mathbf x\|\ge R_1. \end{array} \right. \]
The factor \(-\frac{\mathbf x}{\|\mathbf x\|}\) is the unit vector pointing radially inward. Of course, within this problem, we are only interested in the branch that pertains to within the earth, i.e., \(\|\mathbf x\|<R_1\). We would therefore only consider the expression
\[ \mathbf g(\mathbf x) = -\frac{4}{3}\pi G \rho \|\mathbf x\| \frac{\mathbf x}{\|\mathbf x\|} = -\frac{4}{3}\pi G \rho \mathbf x = - 9.81 \frac{\mathbf x}{R_1} \frac{\text{m}}{\text{s}^2}, \]
where we can infer the last expression because we know Earth's gravity at the surface (where \(\|\mathbf x\|=R_1\)).
One can derive a more general expression by integrating the differential equation for \(\varphi(r)\) in the case that the density distribution is radially symmetric, i.e., \(\rho(\mathbf x)=\rho(\|\mathbf x\|)=\rho(r)\). In that case, one would get
\[ \varphi(r) = 4\pi G \int_0^r \frac 1{s^2} \int_0^s t^2 \rho(t) \; dt \; ds. \]
There are two problems with this, however: (i) The Earth is not homogeneous, i.e., the density \(\rho\) depends on \(\mathbf x\); in fact it is not even a function that only depends on the radius \(r=\|\mathbf x\|\). In reality, gravity therefore does not always decrease as we get deeper: because the earth core is so much denser than the mantle, gravity actually peaks at around \(10.7 \frac{\text{m}}{\text{s}^2}\) at the core mantle boundary (see this article). (ii) The density, and by consequence the gravity vector, is not even constant in time: after all, the problem we want to solve is the time dependent upwelling of hot, less dense material and the downwelling of cold dense material. This leads to a gravity vector that varies with space and time, and does not always point straight down.
In order to not make the situation more complicated than necessary, we could use the approximation that at the inner boundary of the mantle, gravity is \(10.7 \frac{\text{m}}{\text{s}^2}\) and at the outer boundary it is \(9.81 \frac{\text{m}}{\text{s}^2}\), in each case pointing radially inward, and that in between gravity varies linearly with the radial distance from the earth center. That said, it isn't that hard to actually be slightly more realistic and assume (as we do below) that the earth mantle has constant density. In that case, the equation above can be integrated and we get an expression for \(\|\mathbf{g}\|\) where we can fit constants to match the gravity at the top and bottom of the earth mantle to obtain
\[ \|\mathbf{g}\| = 1.245\cdot 10^{-6} \frac{1}{\textrm{s}^2} r + 7.714\cdot 10^{13} \frac{\textrm{m}^3}{\textrm{s}^2}\frac{1}{r^2}. \]
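In code, this fitted gravity model might look as follows; the function name is illustrative, but the two constants are exactly the ones from the formula above.

```cpp
#include <deal.II/base/point.h>
#include <deal.II/base/tensor.h>

using namespace dealii;

// Sketch of the gravity model above: magnitude fitted to the values at the
// bottom and top of the mantle, direction pointing radially inward. Positions
// are in meters, the result is in m/s^2.
template <int dim>
Tensor<1, dim> gravity_vector(const Point<dim> &p)
{
  const double r      = p.norm();
  const double g_norm = 1.245e-6 * r + 7.714e13 / (r * r);
  return -g_norm * p / r;
}
```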
The density of the earth mantle varies spatially, but not by very much. \(\rho_{\text{ref}}=3300 \frac{\text{kg}}{\text{m}^3}\) is a relatively good average value for the density at reference temperature \(T_{\text{ref}}=293\) Kelvin.
The thermal expansion coefficient \(\beta\) also varies with depth (through its dependence on temperature and pressure). Close to the surface, it appears to be on the order of \(\beta=45\cdot 10^{-6} \frac 1{\text{K}}\), whereas at the core mantle boundary, it may be closer to \(\beta=10\cdot 10^{-6} \frac 1{\text{K}}\). As a reasonable value, let us choose \(\beta=2\cdot 10^{-5} \frac 1{\text{K}}\). The density as a function of temperature is then \(\rho(T)=[1-\beta(T-T_{\text{ref}})]\rho_{\text{ref}}\).
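As a small sketch, the density law just stated can be written as follows; the constant names are chosen to be descriptive and need not match those used in the EquationData namespace.

```cpp
// Sketch of the temperature-dependent density just defined.
constexpr double reference_density     = 3300.;  // kg / m^3
constexpr double reference_temperature = 293.;   // K
constexpr double expansion_coefficient = 2e-5;   // 1 / K

double density(const double temperature)
{
  return reference_density *
         (1. - expansion_coefficient * (temperature - reference_temperature));
}
```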
The second to last parameter we need to specify is the viscosity \(\eta\). This is a tough one, because rocks at the temperatures and pressure typical for the earth mantle flow so slowly that the viscosity can not be determined accurately in the laboratory. So how do we know about the viscosity of the mantle? The most commonly used route is to consider that during and after ice ages, ice shields form and disappear on time scales that are shorter than the time scale of flow in the mantle. As a consequence, continents slowly sink into the earth mantle under the added weight of an ice shield, and they rise up again slowly after the ice shield has disappeared again (this is called postglacial rebound). By measuring the speed of this rebound, we can infer the viscosity of the material that flows into the area vacated under the rebounding continental plates.
Using this technique, values around \(\eta=10^{21} \text{Pa}\;\text{s} = 10^{21} \frac{\text{N}\;\text{s}}{\text{m}^2} = 10^{21} \frac{\text{kg}}{\text{m}\;\text{s}}\) have been found as the most likely, though the error bar on this is at least one order of magnitude.
While we will use this value, we again have to caution that there are many physical reasons to assume that this is not the correct value. First, it should really be made dependent on temperature: hotter material is most likely to be less viscous than colder material. In reality, however, the situation is even more complex. Most rocks in the mantle undergo phase changes as temperature and pressure change: depending on temperature and pressure, different crystal configurations are thermodynamically favored over others, even if the chemical composition of the mantle were homogeneous. For example, the common mantle material MgSiO3 exists in its perovskite structure throughout most of the mantle, but in the lower mantle the same substance is stable only as post-perovskite. Clearly, to compute realistic viscosities, we would not only need to know the exact chemical composition of the mantle and the viscosities of all materials, but we would also have to compute the thermodynamically most stable configurations for all materials at each quadrature point. This is at the time of writing this program not a feasible suggestion.
Our last material parameter is the thermal diffusivity \(\kappa\), which is defined as \(\kappa=\frac{k}{\rho c_p}\) where \(k\) is the thermal conductivity, \(\rho\) the density, and \(c_p\) the specific heat. For this, the literature indicates that it increases from around \(0.7 \frac{\text{mm}^2}{\text{s}}\) in the upper mantle to around \(1.7 \frac{\text{mm}^2}{\text{s}}\) in the lower mantle, though the exact value is not really all that important: heat transport through convection is several orders of magnitude more important than through thermal conduction. It may be of interest to know that perovskite, the most abundant material in the earth mantle, appears to become transparent at pressures above around 120 GPa (see, for example, J. Badro et al., Science 305, 383-386 (2004)); in the lower mantle, it may therefore be that heat transport through radiative transfer is more efficient than through thermal conduction.
In view of these considerations, let us choose \(\kappa=1 \frac{\text{mm}^2}{\text{s}} =10^{-6} \frac{\text{m}^2}{\text{s}}\) for the purpose of this program.
All of these pieces of equation data are defined in the program in the EquationData namespace. When run, the program produces long-term maximal velocities around 10-40 centimeters per year (see the results section below), approximately the physically correct order of magnitude. We will set the end time to 1 billion years.
Compared to step-31, this program has a number of noteworthy differences:
The EquationData namespace is significantly larger, reflecting the fact that we now have much more physics to deal with. That said, most of this additional physical detail is rather self-contained in functions in this one namespace, and does not proliferate throughout the rest of the program.
A number of run-time parameters are now read from a step-32.prm parameter file rather than being hard-coded.
This is a tutorial program. That means that at least most of its focus needs to lie on demonstrating ways of using deal.II and associated libraries, and not diluting this teaching lesson by focusing overly much on physical details. Despite the lengthy section above on the choice of physical parameters, the part of the program devoted to this is actually quite short and self contained.
That said, both step-31 and the current step-32 have not come about by chance but are certainly meant as wayposts along the path to a more comprehensive program that will simulate convection in the earth mantle. We call this code ASPECT (short for Advanced Solver for Problems in Earth's ConvecTion); its development is funded by the Computational Infrastructure in Geodynamics initiative with support from the National Science Foundation. More information on ASPECT is available at its homepage.
The first task as usual is to include the functionality of these well-known deal.II library files and some C++ header files.
This is the only include file that is new: It introduces the parallel::distributed::SolutionTransfer equivalent of the SolutionTransfer class to take a solution from one mesh to the next upon mesh refinement, but in the case of parallel distributed triangulations:
The following classes are used in parallel distributed computations and have all already been introduced in step-40:
The next step is like in all previous tutorial programs: We put everything into a namespace of its own and then import the deal.II classes and functions into it:
In the following namespace, we define the various pieces of equation data that describe the problem. This corresponds to the various aspects of making the problem at least slightly realistic and that were exhaustively discussed in the description of the testcase in the introduction.
We start with a few coefficients that have constant values (the comment after the value indicates its physical units):
The next set of definitions are for functions that encode the density as a function of temperature, the gravity vector, and the initial values for the temperature. Again, all of these (along with the values they compute) are discussed in the introduction:
As mentioned in the introduction we need to rescale the pressure to avoid the relative ill-conditioning of the momentum and mass conservation equations. The scaling factor is \(\frac{\eta}{L}\) where \(L\) was a typical length scale. By experimenting it turns out that a good length scale is the diameter of plumes, which is around 10 km:
The final number in this namespace is a constant that denotes the number of seconds per (average, tropical) year. We use this only when generating screen output: internally, all computations of this program happen in SI units (kilogram, meter, seconds) but writing geological times in seconds yields numbers that one can't relate to reality, and so we convert to years using the factor defined here:
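A sketch of such a conversion constant is shown below (a tropical year is about 365.2422 days); the exact name and value used in the program may differ slightly.

```cpp
// Number of seconds in an (average, tropical) year, used only for screen output.
constexpr double year_in_seconds = 60. * 60. * 24. * 365.2422;

// Example: convert a time given in seconds into years for printing.
// pcout << "   t = " << time / year_in_seconds << " years" << std::endl;
```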
This namespace implements the preconditioner. As discussed in the introduction, this preconditioner differs in a number of key portions from the one used in step-31. Specifically, it is a right preconditioner, implementing the matrix
\begin{align*} \left(\begin{array}{cc}A^{-1} & B^T \\0 & S^{-1} \end{array}\right) \end{align*}
where the two inverse matrix operations are approximated by linear solvers or, if the right flag is given to the constructor of this class, by a single AMG V-cycle for the velocity block. The three code blocks of the vmult function implement the multiplications with the three blocks of this preconditioner matrix and should be self explanatory if you have read through step-31 or the discussion of composing solvers in step-20.
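To illustrate the structure (though not the exact implementation used in the program), the following sketch applies the block operator written above to a block vector, with approx_A_inverse and approx_S_inverse standing in for whatever approximations of the two inverses one chooses; the program's actual preconditioner class differs in details such as sign conventions and the use of inner iterative solvers.

```cpp
#include <deal.II/lac/trilinos_block_sparse_matrix.h>
#include <deal.II/lac/trilinos_parallel_block_vector.h>
#include <deal.II/lac/trilinos_vector.h>

using namespace dealii;

// Literal-minded sketch of a vmult for the block operator above:
//   dst_u = A^{-1} src_u + B^T src_p,   dst_p = S^{-1} src_p.
template <typename InvA, typename InvS>
void block_vmult(const TrilinosWrappers::BlockSparseMatrix &stokes_matrix,
                 const InvA                                &approx_A_inverse,
                 const InvS                                &approx_S_inverse,
                 TrilinosWrappers::MPI::BlockVector        &dst,
                 const TrilinosWrappers::MPI::BlockVector  &src)
{
  // pressure block: dst_p = S^{-1} src_p
  approx_S_inverse.vmult(dst.block(1), src.block(1));

  // velocity block: dst_u = A^{-1} src_u + B^T src_p
  approx_A_inverse.vmult(dst.block(0), src.block(0));

  TrilinosWrappers::MPI::Vector tmp(dst.block(0));
  stokes_matrix.block(0, 1).vmult(tmp, src.block(1)); // B^T is the (0,1) block
  dst.block(0) += tmp;
}
```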
As described in the introduction, we will use the WorkStream mechanism discussed in the Parallel computing with multiple processors accessing shared memory module to parallelize operations among the processors of a single machine. The WorkStream class requires that data is passed around in two kinds of data structures, one for scratch data and one to pass data from the assembly function to the function that copies local contributions into global objects.
The following namespace (and the two sub-namespaces) contains a collection of data structures that serve this purpose, one pair for each of the four operations discussed in the introduction that we will want to parallelize. Each assembly routine gets two sets of data: a Scratch array that collects all the classes and arrays that are used for the calculation of the cell contribution, and a CopyData array that keeps local matrices and vectors which will be written into the global matrix. Whereas CopyData is a container for the final data that is written into the global matrices and vector (and, thus, absolutely necessary), the Scratch arrays are merely there for performance reasons — it would be much more expensive to set up a FEValues object on each cell than to create it only once and update some derivative data.
step-31 had four assembly routines: One for the preconditioner matrix of the Stokes system, one for the Stokes matrix and right hand side, one for the temperature matrices and one for the right hand side of the temperature equation. We here organize the scratch arrays and CopyData objects for each of those four assembly components using a struct environment (since we consider these as temporary objects we pass around, rather than classes that implement functionality of their own, though this is a more subjective point of view to distinguish between structs and classes).
Regarding the Scratch objects, each struct is equipped with a constructor that creates an FEValues object using the FiniteElement, Quadrature, Mapping (which describes the interpolation of curved boundaries), and UpdateFlags instances (see the documentation module on the interplay of UpdateFlags, Mapping, and FiniteElement in FEValues). Moreover, we manually implement a copy constructor (since the FEValues class is not copyable by itself), and provide some additional vector fields that are used to hold intermediate data during the computation of local contributions.
Let us start with the scratch arrays and, specifically, the one used for assembly of the Stokes preconditioner:
The next one is the scratch object used for the assembly of the full Stokes system. Observe that we derive the StokesSystem scratch class from the StokesPreconditioner class above. We do this because all the objects that are necessary for the assembly of the preconditioner are also needed for the actual matrix system and right hand side, plus some extra data. This makes the program more compact. Note also that the assembly of the Stokes system and the temperature right hand side further down requires data from temperature and velocity, respectively, so we actually need two FEValues objects for those two cases.
After defining the objects used in the assembly of the Stokes system, we do the same for the assembly of the matrices necessary for the temperature system. The general structure is very similar:
The final scratch object is used in the assembly of the right hand side of the temperature system. This object is significantly larger than the ones above because a lot more quantities enter the computation of the right hand side of the temperature equation. In particular, the temperature values and gradients of the previous two time steps need to be evaluated at the quadrature points, as well as the velocities and the strain rates (i.e. the symmetric gradients of the velocity) that enter the right hand side as friction heating terms. Despite the number of terms, the following should be rather self explanatory:
The CopyData objects are even simpler than the Scratch objects as all they have to do is to store the results of local computations until they can be copied into the global matrix or vector objects. These structures therefore only need to provide a constructor, a copy operation, and some arrays for local matrix, local vectors and the relation between local and global degrees of freedom (a.k.a. local_dof_indices). Again, we have one such structure for each of the four operations we will parallelize using the WorkStream class:
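Conceptually, one of these CopyData structures looks like the following sketch; the struct name is hypothetical, but the actual structs in the code carry the same kind of members.

```cpp
#include <deal.II/fe/fe.h>
#include <deal.II/lac/full_matrix.h>

#include <vector>

using namespace dealii;

// Hypothetical CopyData structure for the temperature matrices: it only holds
// the local results and the local-to-global index map.
template <int dim>
struct TemperatureMatrixCopyData
{
  FullMatrix<double>                   local_mass_matrix;
  FullMatrix<double>                   local_stiffness_matrix;
  std::vector<types::global_dof_index> local_dof_indices;

  explicit TemperatureMatrixCopyData(const FiniteElement<dim> &fe)
    : local_mass_matrix(fe.dofs_per_cell, fe.dofs_per_cell)
    , local_stiffness_matrix(fe.dofs_per_cell, fe.dofs_per_cell)
    , local_dof_indices(fe.dofs_per_cell)
  {}
};
```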
The BoussinesqFlowProblem class template
This is the declaration of the main class. It is very similar to step-31 but there are a number of differences we will comment on below.
The top of the class is essentially the same as in step-31, listing the public methods and a set of private functions that do the heavy lifting. Compared to step-31 there are only two additions to this section: the function get_cfl_number() that computes the maximum CFL number over all cells, from which we then compute the global time step, and the function get_entropy_variation() that is used in the computation of the entropy stabilization. It is akin to the get_extrapolated_temperature_range() we have used in step-31 for this purpose, but works on the entropy instead of the temperature.
The first significant new component is the definition of a struct for the parameters according to the discussion in the introduction. This structure is initialized by reading from a parameter file during construction of this object.
The pcout (for parallel std::cout) object is used to simplify writing output: each MPI process can use this to generate output as usual, but since each of these processes will (hopefully) produce the same output it will just be replicated many times over; with the ConditionalOStream class, only the output generated by one MPI process will actually be printed to screen, whereas the output by all the other processes will simply be forgotten.
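A minimal, self-contained example of this idea looks as follows; in the program, pcout is of course a member variable rather than a local one.

```cpp
#include <deal.II/base/conditional_ostream.h>
#include <deal.II/base/mpi.h>

#include <iostream>

using namespace dealii;

int main(int argc, char *argv[])
{
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

  // Forward to std::cout only on MPI process 0; discard output everywhere else.
  ConditionalOStream pcout(std::cout,
                           Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0);

  pcout << "This line appears exactly once, however many processes run."
        << std::endl;
}
```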
The following member variables will then again be similar to those in step-31 (and to other tutorial programs). As mentioned in the introduction, we fully distribute computations, so we will have to use the parallel::distributed::Triangulation class (see step-40) but the remainder of these variables is rather standard with two exceptions:
The mapping variable is used to denote a higher-order polynomial mapping. As mentioned in the introduction, we use this mapping when forming integrals through quadrature for all cells that are adjacent to either the inner or outer boundaries of our domain where the boundary is curved.
The *_solution vectors are filled immediately after solving their respective linear system in parallel and will always contain values for all locally relevant degrees of freedom; the fully distributed vectors that we obtain from the solution process and that only ever contain the locally owned degrees of freedom are destroyed immediately after the solution process and after we have copied the relevant values into the member variable vectors.
The next member variable, computing_timer, is used to conveniently account for compute time spent in certain "sections" of the code that are repeatedly entered. For example, we will enter (and leave) sections for Stokes matrix assembly and would like to accumulate the run time spent in this section over all time steps. Every so many time steps as well as at the end of the program (through the destructor of the TimerOutput class) we will then produce a nice summary of the times spent in the different sections into which we categorize the run-time of this program.
After these member variables we have a number of auxiliary functions that have been broken out of the ones listed above. Specifically, there are first three functions that we call from setup_dofs and then the ones that do the assembling of linear systems:
Following the task-based parallelization paradigm, we split all the assembly routines into two parts: a first part that can do all the calculations on a certain cell without taking care of other threads, and a second part (which is writing the local data into the global matrices and vectors) which can be entered by only one thread at a time. In order to implement that, we provide functions for each of those two steps for all the four assembly routines that we use in this program. The following eight functions do exactly this:
Finally, we forward declare a member class that we will define later on and that will be used to compute a number of quantities from our solution vectors that we'd like to put into the output files for visualization.
Here comes the definition of the parameters for the Stokes problem. We allow the user to set the end time for the simulation, the levels of refinement (both global and adaptive, which in sum specify what maximum level the cells are allowed to have), and the interval between refinements in the time stepping.
Then, we let the user specify constants for the stabilization parameters (as discussed in the introduction), the polynomial degree for the Stokes velocity space, whether to use the locally conservative discretization based on FE_DGP elements for the pressure or not (FE_Q elements for pressure), and the polynomial degree for the temperature interpolation.
The constructor checks for a valid input file (if not, a file with default parameters for the quantities is written), and eventually parses the parameters.
Next we have a function that declares the parameters that we expect in the input file, together with their data types, default values and a description:
And then we need a function that reads the contents of the ParameterHandler object we get by reading the input file and puts the results into variables that store the values of the parameters we have previously declared:
The constructor of the problem is very similar to the constructor in step-31. What is different is the parallel communication: Trilinos uses a message passing interface (MPI) for data distribution. When entering the BoussinesqFlowProblem class, we have to decide how the parallelization is to be done. We choose a rather simple strategy and let all processors that are running the program work together, specified by the communicator MPI_COMM_WORLD. Next, we create the output stream (as we already did in step-18) that only generates output on the first MPI process and is completely forgetful on all others. The implementation of this idea is to check the process number when pcout gets a true argument, and it uses the std::cout stream for output. If we are processor five, for instance, then we will give a false argument to pcout, which means that the output of that processor will not be printed. With the exception of the mapping object (for which we use polynomials of degree 4) all but the final member variable are exactly the same as in step-31.
This final object, the TimerOutput object, is then told to restrict output to the pcout stream (processor 0), and then we specify that we want to get a summary table at the end of the program which shows us wallclock times (as opposed to CPU times). We will manually also request intermediate summaries every so many time steps in the run() function below.
Except for two small details, the function to compute the global maximum of the velocity is the same as in step-31. The first detail is actually common to all functions that implement loops over all cells in the triangulation: When operating in parallel, each processor can only work on a chunk of cells since each processor only has a certain part of the entire triangulation. This chunk of cells that we want to work on is identified via a so-called subdomain_id, as we also did in step-18. All we need to change is hence to perform the cell-related operations only on cells that are owned by the current process (as opposed to ghost or artificial cells), i.e. for which the subdomain id equals the number of the process ID. Since this is a commonly used operation, there is a shortcut for this operation: we can ask whether the cell is owned by the current processor using cell->is_locally_owned().
The second difference is the way we calculate the maximum value. Before, we could simply have a double variable that we checked against on each quadrature point for each cell. Now, we have to be a bit more careful since each processor only operates on a subset of cells. What we do is to first let each processor calculate the maximum among its cells, and then do a global communication operation Utilities::MPI::max that computes the maximum value among all the maximum values of the individual processors. MPI provides such a call, but it's even simpler to use the respective function in namespace Utilities::MPI using the MPI communicator object since that will do the right thing even if we work without MPI and on a single machine only. The call to Utilities::MPI::max needs two arguments, namely the local maximum (input) and the MPI communicator, which is MPI_COMM_WORLD in this example.
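The pattern is summarized by the following sketch, here reduced to taking the maximum over an array of locally stored values rather than over quadrature points.

```cpp
#include <deal.II/base/mpi.h>

#include <algorithm>
#include <vector>

using namespace dealii;

// Two-stage maximum: reduce over locally stored values first, then let a
// single MPI reduction produce the global result. Works also without MPI.
double global_maximum(const std::vector<double> &locally_owned_values)
{
  double local_max = 0.; // the quantities of interest here are non-negative
  for (const double v : locally_owned_values)
    local_max = std::max(local_max, v);

  return Utilities::MPI::max(local_max, MPI_COMM_WORLD);
}
```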
The next function does something similar, but we now compute the CFL number, i.e., maximal velocity on a cell divided by the cell diameter. This number is necessary to determine the time step size, as we use a semi-explicit time stepping scheme for the temperature equation (see step-31 for a discussion). We compute it in the same way as above: Compute the local maximum over all locally owned cells, then exchange it via MPI to find the global maximum.
Next comes the computation of the global entropy variation \(\|E(T)-\bar{E}(T)\|_\infty\) where the entropy \(E\) is defined as discussed in the introduction. This is needed for the evaluation of the stabilization in the temperature equation as explained in the introduction. The entropy variation is actually only needed if we use \(\alpha=2\) as a power in the residual computation. The infinity norm is computed by the maxima over quadrature points, as usual in discrete computations.
In order to compute this quantity, we first have to find the space-average \(\bar{E}(T)\) and then evaluate the maximum. However, that means that we would need to perform two loops. We can avoid the overhead by noting that \(\|E(T)-\bar{E}(T)\|_\infty = \max\big(E_{\textrm{max}}(T)-\bar{E}(T), \bar{E}(T)-E_{\textrm{min}}(T)\big)\), i.e., the maximum out of the deviation from the average entropy in positive and negative directions. The four quantities we need for the latter formula (maximum entropy, minimum entropy, average entropy, area) can all be evaluated in the same loop over all cells, so we choose this simpler variant.
In the two functions above we computed the maximum of numbers that were all non-negative, so we knew that zero was certainly a lower bound. On the other hand, here we need to find the maximum deviation from the average value, i.e., we will need to know the maximal and minimal values of the entropy for which we don't a priori know the sign.
To compute it, we can therefore start with the largest and smallest possible values we can store in a double precision number: The minimum is initialized with a bigger and the maximum with a smaller number than any one that is going to appear. We are then guaranteed that these numbers will be overwritten in the loop on the first cell or, if this processor does not own any cells, in the communication step at the latest. The following loop then computes the minimum and maximum local entropy as well as keeps track of the area/volume of the part of the domain we locally own and the integral over the entropy on it:
Now we only need to exchange data between processors: we need to sum the two integrals (area, entropy_integrated), and get the extrema for maximum and minimum. We could do this through four different data exchanges, but we can do it with two: Utilities::MPI::sum also exists in a variant that takes an array of values that are all to be summed up. And we can also utilize the Utilities::MPI::max function by realizing that forming the minimum over the minimal entropies equals forming the negative of the maximum over the negative of the minimal entropies; this maximum can then be combined with forming the maximum over the maximal entropies.
Having computed everything this way, we can then compute the average entropy and find the \(L^\infty\) norm by taking the larger of the deviation of the maximum or minimum from the average:
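The following sketch shows these reductions, assuming the per-processor quantities have already been accumulated in the cell loop; the variable and function names are illustrative, not those of the program.

```cpp
#include <deal.II/base/mpi.h>

#include <algorithm>

using namespace dealii;

// Sketch of the data exchange described above.
double entropy_variation(const double local_area,
                         const double local_entropy_integral,
                         const double local_min_entropy,
                         const double local_max_entropy)
{
  // Sum area and entropy integral in a single reduction.
  const double local_sums[2]  = {local_area, local_entropy_integral};
  double       global_sums[2] = {0., 0.};
  Utilities::MPI::sum(local_sums, MPI_COMM_WORLD, global_sums);

  // The global minimum equals minus the maximum of the negated local minima,
  // so two calls to Utilities::MPI::max suffice for both extrema.
  const double max_entropy =  Utilities::MPI::max( local_max_entropy, MPI_COMM_WORLD);
  const double min_entropy = -Utilities::MPI::max(-local_min_entropy, MPI_COMM_WORLD);

  const double average_entropy = global_sums[1] / global_sums[0];
  return std::max(max_entropy - average_entropy,
                  average_entropy - min_entropy);
}
```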
The next function computes the minimal and maximal value of the extrapolated temperature over the entire domain. Again, this is only a slightly modified version of the respective function in step-31. As in the function above, we collect local minima and maxima and then compute the global extrema using the same trick as above.
As already discussed in step-31, the function needs to distinguish between the first and all following time steps because it uses a higher order temperature extrapolation scheme when at least two previous time steps are available.
The function that calculates the viscosity is purely local and so needs no communication at all. It is mostly the same as in step-31 but with an updated formulation of the viscosity if \(\alpha=2\) is chosen:
The following three functions set up the Stokes matrix, the matrix used for the Stokes preconditioner, and the temperature matrix. The code is mostly the same as in step-31, but it has been broken out into three functions of their own for simplicity.
The main functional difference between the code here and that in step-31 is that the matrices we want to set up are distributed across multiple processors. Since we still want to build up the sparsity pattern first for efficiency reasons, we could continue to build the entire sparsity pattern as a BlockDynamicSparsityPattern, as we did in step-31. However, that would be inefficient: every processor would build the same sparsity pattern, but only initialize a small part of the matrix using it. It also violates the principle that every processor should only work on those cells it owns (and, if necessary the layer of ghost cells around it).
Rather, we use an object of type TrilinosWrappers::BlockSparsityPattern, which is (obviously) a wrapper around a sparsity pattern object provided by Trilinos. The advantage is that the Trilinos sparsity pattern class can communicate across multiple processors: if this processor fills in all the nonzero entries that result from the cells it owns, and every other processor does so as well, then at the end after some MPI communication initiated by the compress() call, we will have the globally assembled sparsity pattern available with which the global matrix can be initialized.
There is one important aspect when initializing Trilinos sparsity patterns in parallel: In addition to specifying the locally owned rows and columns of the matrices via the stokes_partitioning index set, we also supply information about all the rows we are possibly going to write into when assembling on a certain processor. The set of locally relevant rows contains all such rows (possibly also a few unnecessary ones, but it is difficult to find the exact row indices before actually getting indices on all cells and resolving constraints). This additional information allows us to exactly determine the structure for the off-processor data found during assembly. While Trilinos matrices are able to collect this information on the fly as well (when initializing them from some other reinit method), it is less efficient and leads to problems when assembling matrices with multiple threads. In this program, we pessimistically assume that only one processor at a time can write into the matrix during assembly (whereas the computation is parallel), which is fine for Trilinos matrices. In practice, one can do better by hinting WorkStream at cells that do not share vertices, allowing for parallelism among those cells (see the graph coloring algorithms and WorkStream with colored iterators argument). However, that only works when only one MPI processor is present because Trilinos' internal data structures for accumulating off-processor data on the fly are not thread safe. With the initialization presented here, there is no such problem and one could safely introduce graph coloring for this algorithm.
The only other change we need to make is to tell the DoFTools::make_sparsity_pattern() function that it is only supposed to work on a subset of cells, namely the ones whose subdomain_id equals the number of the current processor, and to ignore all other cells.
This strategy is replicated across all three of the following functions.
Note that Trilinos matrices store the information contained in the sparsity patterns, so we can safely release the sp variable once the matrix has been given the sparsity structure.
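For a single (non-block) matrix, the sequence just described might look like the sketch below; the program does the analogous thing with a TrilinosWrappers::BlockSparsityPattern and the Stokes/temperature partitioners.

```cpp
#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>
#include <deal.II/lac/trilinos_sparsity_pattern.h>

using namespace dealii;

// Sketch of the parallel sparsity pattern / matrix initialization described
// above, for a single (non-block) matrix.
template <int dim>
void initialize_matrix(const DoFHandler<dim>           &dof_handler,
                       const AffineConstraints<double> &constraints,
                       const IndexSet                  &locally_owned_dofs,
                       const IndexSet                  &locally_relevant_dofs,
                       TrilinosWrappers::SparseMatrix  &matrix)
{
  TrilinosWrappers::SparsityPattern sp(locally_owned_dofs,
                                       locally_owned_dofs,
                                       locally_relevant_dofs,
                                       MPI_COMM_WORLD);

  DoFTools::make_sparsity_pattern(dof_handler, sp, constraints,
                                  /*keep_constrained_dofs =*/false,
                                  Utilities::MPI::this_mpi_process(MPI_COMM_WORLD));

  sp.compress();      // exchange off-processor entries
  matrix.reinit(sp);  // the matrix keeps the pattern; sp may now be released
}
```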
The remainder of the setup function (after splitting out the three functions above) mostly has to deal with the things we need to do for parallelization across processors. Because setting all of this up is a significant compute time expense of the program, we put everything we do here into a timer group so that we can get summary information about the fraction of time spent in this part of the program at its end.
At the top as usual we enumerate degrees of freedom and sort them by component/block, followed by writing their numbers to the screen from processor zero. The DoFHandler::distribute_dofs() function, when applied to a parallel::distributed::Triangulation object, sorts degrees of freedom in such a way that all degrees of freedom associated with subdomain zero come before all those associated with subdomain one, etc. For the Stokes part, this entails, however, that velocities and pressures become intermixed, but this is trivially solved by sorting again by blocks; it is worth noting that this latter operation leaves the relative ordering of all velocities and pressures alone, i.e. within the velocity block we will still have all those associated with subdomain zero before all velocities associated with subdomain one, etc. This is important since we store each of the blocks of this matrix distributed across all processors and want this to be done in such a way that each processor stores that part of the matrix that is roughly equal to the degrees of freedom located on those cells that it will actually work on.
When printing the numbers of degrees of freedom, note that these numbers are going to be large if we use many processors. Consequently, we let the stream put a comma separator in between every three digits. The state of the stream, using the locale, is saved from before to after this operation. While slightly opaque, the code works because the default locale (which we get using the constructor call std::locale("")) implies printing numbers with a comma separator for every third digit (i.e., thousands, millions, billions).
In this function as well as many below, we measure how much time we spend here and collect that in a section called "Setup dof systems" across function invocations. This is done using a TimerOutput::Scope object that gets a timer going in the section with the above name of the computing_timer object upon construction of the local variable; the timer is stopped again when the destructor of the timing_section variable is called. This, of course, happens either at the end of the function, or if we leave the function through a return statement or when an exception is thrown somewhere – in other words, whenever we leave this function in any way. The use of such "scope" objects therefore makes sure that we do not have to manually add code that tells the timer to stop at every location where this function may be left.
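A minimal sketch of such a scoped timer is given below; computing_timer would be the member variable of the same name, here passed in as an argument to keep the example self-contained.

```cpp
#include <deal.II/base/timer.h>

using namespace dealii;

// Everything between construction of timing_section and leaving this function
// (normally, via return, or via an exception) is accounted to this section.
void do_expensive_setup(TimerOutput &computing_timer)
{
  TimerOutput::Scope timing_section(computing_timer, "Setup dof systems");

  // ... enumerate degrees of freedom, build partitioners, set up matrices ...

} // the timer for "Setup dof systems" stops here automatically
```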
After this, we have to set up the various partitioners (of type IndexSet, see the introduction) that describe which parts of each matrix or vector will be stored where, then call the functions that actually set up the matrices, and at the end also resize the various vectors we keep around in this program.
Following this, we can compute constraints for the solution vectors, including hanging node constraints and homogeneous and inhomogeneous boundary values for the Stokes and temperature fields. Note that as for everything else, the constraint objects can not hold all constraints on every processor. Rather, each processor needs to store only those that are actually necessary for correctness given that it only assembles linear systems on cells it owns. As discussed in this paper, the set of constraints we need to know about is exactly the set of constraints on all locally relevant degrees of freedom, so this is what we use to initialize the constraint objects.
All this done, we can then initialize the various matrix and vector objects to their proper sizes. At the end, we also record that all matrices and preconditioners have to be re-computed at the beginning of the next time step. Note how we initialize the vectors for the Stokes and temperature right hand sides: These are writable vectors (last boolean argument set to true) that have the correct one-to-one partitioning of locally owned elements but are still given the relevant partitioning for means of figuring out the vector entries that are going to be set right away. As for matrices, this allows for writing local contributions into the vector with multiple threads (always assuming that the same vector entry is not accessed by multiple threads at the same time). The other vectors only allow for read access of individual elements, including ghosts, but are not suitable for solvers.
Following the discussion in the introduction and in the Parallel computing with multiple processors accessing shared memory module, we split the assembly functions into different parts:
The local calculations of matrices and right hand sides, given a certain cell as input (these functions are named local_assemble_* below). The resulting function is, in other words, essentially the body of the loop over all cells in step-31. Note, however, that these functions store the result from the local calculations in variables of classes from the CopyData namespace.

These objects are then given to the second step which writes the local data into the global data structures (these functions are named copy_local_to_global_* below). These functions are pretty trivial.

These two subfunctions are then used in the respective assembly driver functions (named assemble_* below), where a WorkStream object is set up and runs over all the cells that belong to the processor's subdomain.
Let us start with the functions that build the Stokes preconditioner. The first two of these are pretty trivial, given the discussion above. Note in particular that the main point in using the scratch data object is that we want to avoid allocating any objects on the free store each time we visit a new cell. As a consequence, the assembly function below only has automatic local variables, and everything else is accessed through the scratch data object, which is allocated only once before we start the loop over all cells:
Now for the function that actually puts things together, using the WorkStream functions. WorkStream::run needs a start and end iterator to enumerate the cells it is supposed to work on. Typically, one would use DoFHandler::begin_active() and DoFHandler::end() for that but here we actually only want the subset of cells that in fact are owned by the current processor. This is where the FilteredIterator class comes into play: you give it a range of cells and it provides an iterator that only iterates over that subset of cells that satisfy a certain predicate (a predicate is a function of one argument that either returns true or false). The predicate we use here is IteratorFilters::LocallyOwnedCell, i.e., it returns true exactly if the cell is owned by the current processor. The resulting iterator range is then exactly what we need.
With this obstacle out of the way, we call the WorkStream::run function with this set of cells, scratch and copy objects, and with pointers to two functions: the local assembly and copy-local-to-global function. These functions need to have very specific signatures: three arguments in the first and one argument in the latter case (see the documentation of the WorkStream::run function for the meaning of these arguments). Note how we use lambda functions to create function objects that satisfy this requirement. The lambda for the local assembly function uses function arguments that specify cell, scratch data, and copy data, while the lambda for the copy function expects the data to be written into the global matrix (also see the discussion in step-13's assemble_linear_system() function). On the other hand, the implicit zeroth argument of member functions (namely the this pointer of the object on which that member function is to operate on) is bound to the this pointer of the current function and is captured. The WorkStream::run function, as a consequence, does not need to know anything about the object these functions work on.
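Schematically, the call for the Stokes preconditioner assembly then looks like the fragment below. ScratchData, CopyData, scratch_prototype, copy_prototype, and stokes_dof_handler are placeholder names for the corresponding scratch/copy pair, sample objects, and DoFHandler; the two member functions called inside the lambdas are the ones listed earlier.

```cpp
// Needed at the top of the file:
//   #include <deal.II/base/work_stream.h>
//   #include <deal.II/grid/filtered_iterator.h>

// Inside the Stokes preconditioner assembly driver function (sketch):
using CellFilter =
  FilteredIterator<typename DoFHandler<dim>::active_cell_iterator>;

WorkStream::run(
  CellFilter(IteratorFilters::LocallyOwnedCell(),
             stokes_dof_handler.begin_active()),
  CellFilter(IteratorFilters::LocallyOwnedCell(),
             stokes_dof_handler.end()),
  // local assembly: gets the cell, the scratch data, and the copy data
  [this](const typename DoFHandler<dim>::active_cell_iterator &cell,
         ScratchData &scratch,
         CopyData    &data) {
    this->local_assemble_stokes_preconditioner(cell, scratch, data);
  },
  // copy-local-to-global: gets only the copy data, run by one thread at a time
  [this](const CopyData &data) {
    this->copy_local_to_global_stokes_preconditioner(data);
  },
  scratch_prototype, // sample ScratchData object, copied once per task
  copy_prototype);   // sample CopyData object, copied once per task
```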
When the WorkStream is executed, it will create several local assembly routines of the first kind for several cells and let some available processors work on them. The function that needs to be synchronized, i.e., the write operation into the global matrix, however, is executed by only one thread at a time in the prescribed order. Of course, this only holds for the parallelization on a single MPI process. Different MPI processes will have their own WorkStream objects and do that work completely independently (and in different memory spaces). In a distributed calculation, some data will accumulate at degrees of freedom that are not owned by the respective processor. It would be inefficient to send data around every time we encounter such a dof. What happens instead is that the Trilinos sparse matrix will keep that data and send it to the owner at the end of assembly, by calling the compress() command.
The final function in this block initiates assembly of the Stokes preconditioner matrix and then in fact builds the Stokes preconditioner. It is mostly the same as in the serial case. The only difference to step-31 is that we use a Jacobi preconditioner for the pressure mass matrix instead of IC, as discussed in the introduction.
The next three functions implement the assembly of the Stokes system, again split up into a part performing local calculations, one for writing the local data into the global matrix and vector, and one for actually running the loop over all cells with the help of the WorkStream class. Note that the assembly of the Stokes matrix needs only to be done in case we have changed the mesh. Otherwise, just the (temperature-dependent) right hand side needs to be calculated here. Since we are working with distributed matrices and vectors, we have to call the respective compress() functions at the end of the assembly in order to send non-local data to the owner process.
The task to be performed by the next three functions is to calculate a mass matrix and a Laplace matrix on the temperature system. These will be combined in order to yield the semi-implicit time stepping matrix that consists of the mass matrix plus a time step-dependent weight factor times the Laplace matrix. This function is again essentially the body of the loop over all cells from step-31.
The two following functions perform similar services as the ones above.
This is the last assembly function. It calculates the right hand side of the temperature system, which includes the convection and the stabilization terms. It includes a lot of evaluations of old solutions at the quadrature points (which are necessary for calculating the artificial viscosity of stabilization), but is otherwise similar to the other assembly functions. Notice, once again, how we resolve the dilemma of having inhomogeneous boundary conditions, by just making a right hand side at this point (compare the comments for the
project() function above): We create some matrix columns with exactly the values that would be entered for the temperature stiffness matrix, in case we have inhomogeneously constrained dofs. That will account for the correct balance of the right hand side vector with the matrix system of temperature.
In the function that runs the WorkStream for actually calculating the right hand side, we also generate the final matrix. As mentioned above, it is a sum of the mass matrix and the Laplace matrix, times some time step-dependent weight. This weight is specified by the BDF-2 time integration scheme, see the introduction in step-31. What is new in this tutorial program (in addition to the use of MPI parallelization and the WorkStream class) is that we now precompute the temperature preconditioner as well. The reason is that the setup of the Jacobi preconditioner takes a noticeable time compared to the solver because we usually only need between 10 and 20 iterations for solving the temperature system (this might sound strange, as Jacobi really only consists of a diagonal, but in Trilinos it is derived from a more general framework for point relaxation preconditioners which is a bit inefficient). Hence, it is more efficient to precompute the preconditioner, even though the matrix entries may slightly change because the time step might change. This is not too big a problem because we remesh every few time steps (and regenerate the preconditioner then).
The next part is computing the right hand side vectors. To do so, we first compute the average temperature \(T_m\) that we use for evaluating the artificial viscosity stabilization through the residual \(E(T) = (T-T_m)^2\). We do this by defining the midpoint between maximum and minimum temperature as average temperature in the definition of the entropy viscosity. An alternative would be to use the integral average, but the results are not very sensitive to this choice. The rest then only requires calling WorkStream::run again, binding the arguments to the
local_assemble_temperature_rhs function that are the same in every call to the correct values:
This function solves the linear systems in each time step of the Boussinesq problem. First, we work on the Stokes system and then on the temperature system. In essence, it does the same things as the respective function in step-31. However, there are a few changes here.
The first change is related to the way we store our solution: we keep the vectors with locally owned degrees of freedom plus ghost nodes on each MPI node. When we enter a solver which is supposed to perform matrix-vector products with a distributed matrix, this is not the appropriate form, though. There, we will want to have the solution vector to be distributed in the same way as the matrix, i.e. without any ghosts. So what we do first is to generate a distributed vector called
distributed_stokes_solution and put only the locally owned dofs into that, which is neatly done by the
operator= of the Trilinos vector.
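A sketch of this copying, with the vector names assumed for illustration (a ghosted stokes_solution and a right hand side vector stokes_rhs whose parallel layout matches the matrix):

// Build a vector without ghost entries, partitioned like the matrix ...
TrilinosWrappers::MPI::BlockVector distributed_stokes_solution(stokes_rhs);
// ... and copy only the locally owned entries into it.
distributed_stokes_solution = stokes_solution;

// ...run the solver on distributed_stokes_solution...

// Copying back imports the ghost entries again (this requires communication).
stokes_solution = distributed_stokes_solution;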
Next, we scale the pressure solution (or rather, the initial guess) for the solver so that it matches with the length scales in the matrices, as discussed in the introduction. We also immediately scale the pressure solution back to the correct units after the solution is completed. We also need to set the pressure values at hanging nodes to zero. This we also did in step-31 in order not to disturb the Schur complement by some vector entries that actually are irrelevant during the solve stage. As a difference to step-31, here we do it only for the locally owned pressure dofs. After solving for the Stokes solution, each processor copies the distributed solution back into the solution vector that also includes ghost elements.
The third and most obvious change is that we have two variants for the Stokes solver: A fast solver that sometimes breaks down, and a robust solver that is slower. This is what we already discussed in the introduction. Here is how we realize it: First, we perform 30 iterations with the fast solver, using a simple preconditioner based on the AMG V-cycle instead of an approximate solve (this is indicated by the
false argument to the
LinearSolvers::BlockSchurPreconditioner object). If we converge, everything is fine. If we do not converge, the solver control object will throw an exception of type SolverControl::NoConvergence. Usually, this would abort the program because we don't catch it in our usual
solve() functions. This is certainly not what we want to happen here. Rather, we want to switch to the strong solver and continue the solution process with whatever vector we got so far. Hence, we catch the exception with the C++ try/catch mechanism. We then simply go through the same solver sequence again in the
catch clause, this time passing the
true flag to the preconditioner for the strong solver, signaling an approximate CG solve.
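The control flow can be sketched as follows; the matrix, the vectors, the tolerance solver_tolerance, and the two preconditioner objects (the cheap one created with the false flag, the expensive one with the true flag) are assumed to have been set up already:

try
  {
    // cheap attempt: at most 30 iterations with the AMG-V-cycle-based preconditioner
    SolverControl solver_control_cheap(30, solver_tolerance);
    SolverFGMRES<TrilinosWrappers::MPI::BlockVector> solver(solver_control_cheap);
    solver.solve(stokes_matrix,
                 distributed_stokes_solution,
                 stokes_rhs,
                 preconditioner_cheap);
  }
catch (const SolverControl::NoConvergence &)
  {
    // fall back to the expensive preconditioner, continuing from whatever
    // iterate the cheap solver produced
    SolverControl solver_control_expensive(stokes_matrix.m(), solver_tolerance);
    SolverFGMRES<TrilinosWrappers::MPI::BlockVector> solver(solver_control_expensive);
    solver.solve(stokes_matrix,
                 distributed_stokes_solution,
                 stokes_rhs,
                 preconditioner_expensive);
  }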
Now let's turn to the temperature part: First, we compute the time step size. We found that we need smaller time steps for 3D than for 2D for the shell geometry. This is because the cells are more distorted in that case (it is the smallest edge length that determines the CFL number). Instead of computing the time step from maximum velocity and minimal mesh size as in step-31, we compute local CFL numbers, i.e., on each cell we compute the maximum velocity divided by the mesh size, and take the maximum of these values; the time step is then proportional to the inverse of this maximum. Hence, we need to choose the factor in front of the time step slightly smaller.
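A sketch of this time step computation; get_maximal_velocity_on_cell() is a hypothetical helper standing in for the loop over quadrature points, and cfl_constant stands for the (dimension-dependent) factor mentioned above:

double max_local_cfl = 0;
for (const auto &cell : stokes_dof_handler.active_cell_iterators())
  if (cell->is_locally_owned())
    max_local_cfl = std::max(max_local_cfl,
                             get_maximal_velocity_on_cell(cell) / cell->diameter());

// each processor has only seen its own cells, so take the global maximum
const double cfl_number = Utilities::MPI::max(max_local_cfl, MPI_COMM_WORLD);
const double time_step  = cfl_constant / cfl_number;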
After temperature right hand side assembly, we solve the linear system for temperature (with fully distributed vectors without any ghosts), apply constraints and copy the vector back to one with ghosts.
In the end, we extract the temperature range similarly to step-31 to produce some output (for example in order to help us choose the stabilization constants, as discussed in the introduction). The only difference is that we need to exchange maxima over all processors.
Next comes the function that generates the output. The quantities to output could be introduced manually like we did in step-31. An alternative is to hand this task over to a class PostProcessor that inherits from the class DataPostprocessor, which can be attached to DataOut. This allows us to output derived quantities from the solution, like the friction heating included in this example. It overloads the virtual function DataPostprocessor::evaluate_vector_field(), which is then internally called from DataOut::build_patches(). We have to give it values of the numerical solution, its derivatives, normals to the cell, the actual evaluation points and any additional quantities. This follows the same procedure as discussed in step-29 and other programs.
Here we define the names for the variables we want to output. These are the actual solution values for velocity, pressure, and temperature, as well as the friction heating and to each cell the number of the processor that owns it. This allows us to visualize the partitioning of the domain among the processors. Except for the velocity, which is vector-valued, all other quantities are scalar.
Now we implement the function that computes the derived quantities. As we also did for the output, we rescale the velocity from its SI units to something more readable, namely cm/year. Next, the pressure is scaled to be between 0 and the maximum pressure. This makes it more easily comparable – in essence making all pressure variables positive or zero. Temperature is taken as is, and the friction heating is computed as \(2 \eta \varepsilon(\mathbf{u}) \cdot \varepsilon(\mathbf{u})\).
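As a condensed illustration of this mechanism, the following sketch derives from DataPostprocessorScalar and computes only the friction heating \(2 \eta \varepsilon(\mathbf{u}) \cdot \varepsilon(\mathbf{u})\); it assumes that the first dim components of the joint solution are the velocity. The class used in the program additionally outputs the (rescaled) velocity, pressure, temperature, and the owning processor number.

#include <deal.II/base/symmetric_tensor.h>
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/data_postprocessor.h>

using namespace dealii;

template <int dim>
class FrictionHeating : public DataPostprocessorScalar<dim>
{
public:
  FrictionHeating(const double eta)
    : DataPostprocessorScalar<dim>("friction_heating", update_gradients)
    , eta(eta)
  {}

  virtual void evaluate_vector_field(
    const DataPostprocessorInputs::Vector<dim> &inputs,
    std::vector<Vector<double>> &computed_quantities) const override
  {
    for (unsigned int q = 0; q < inputs.solution_gradients.size(); ++q)
      {
        // symmetrize the velocity gradient to obtain the strain rate eps(u)
        SymmetricTensor<2, dim> strain_rate;
        for (unsigned int d = 0; d < dim; ++d)
          for (unsigned int e = 0; e < dim; ++e)
            strain_rate[d][e] = 0.5 * (inputs.solution_gradients[q][d][e] +
                                       inputs.solution_gradients[q][e][d]);

        // 2 * eta * eps(u) : eps(u)
        computed_quantities[q](0) = 2. * eta * strain_rate * strain_rate;
      }
  }

private:
  const double eta;
};

An object of this kind would then simply be handed to DataOut::add_data_vector() along with the solution vector.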
The quantities we output here are more for illustration, rather than for actual scientific value. We come back to this briefly in the results section of this program and explain what one may in fact be interested in.
The
output_results() function has a similar task to the one in step-31. However, here we are going to demonstrate a different technique of how to merge output from different DoFHandler objects. The way we're going to achieve this recombination is to create a joint DoFHandler that collects both components, the Stokes solution and the temperature solution. This can be nicely done by combining the finite elements from the two systems to form one FESystem, and letting this collective system define a new DoFHandler object. To be sure that everything was done correctly, we perform a sanity check that ensures that we got all the dofs from both Stokes and temperature even in the combined system. We then combine the data vectors. Unfortunately, there is no straight-forward relation that tells us how to sort Stokes and temperature vectors into the joint vector. The way we can get around this trouble is to rely on the information collected in the FESystem. For each dof on a cell, the joint finite element knows to which equation component (velocity component, pressure, or temperature) it belongs – that's the information we need! So we step through all cells (with iterators into all three DoFHandlers moving in sync), and for each joint cell dof, we read out that component using the FiniteElement::system_to_base_index function (see there for a description of what the various parts of its return value contain). We also need to keep track of whether we're on a Stokes dof or a temperature dof, which is contained in joint_fe.system_to_base_index(i).first.first. Eventually, the dof_indices data structures on each of the three systems tell us what the relation between global vector and local dofs looks like on the present cell, which concludes this tedious work. We make sure that each processor only works on the subdomain it owns locally (and not on ghost or artificial cells) when building the joint solution vector. The same will then have to be done in DataOut::build_patches(), but that function does so automatically.
What we end up with is a set of patches that we can write using the functions in DataOutBase in a variety of output formats. Here, we then have to pay attention that what each processor writes is really only its own part of the domain, i.e. we will want to write each processor's contribution into a separate file. This we do by adding an additional number to the filename when we write the solution. This is not really new, we did it similarly in step-40. Note that we write in the compressed format
.vtu instead of plain vtk files, which saves quite some storage.
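A sketch of the file naming, with data_out and timestep_number assumed to exist as in the program:

const unsigned int my_rank = Utilities::MPI::this_mpi_process(MPI_COMM_WORLD);

// one file per processor and time step, e.g. solution-00042.0007.vtu
const std::string filename = "solution-" +
                             Utilities::int_to_string(timestep_number, 5) + "." +
                             Utilities::int_to_string(my_rank, 4) + ".vtu";

std::ofstream output(filename);
data_out.write_vtu(output);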
All the rest of the work is done in the PostProcessor class.
This function isn't really new either. Since the
setup_dofs function that we call in the middle has its own timer section, we split timing this function into two sections. It will also allow us to easily identify which of the two is more expensive.
One thing of note, however, is that we only want to compute error indicators on the locally owned subdomain. In order to achieve this, we pass one additional argument to the KellyErrorEstimator::estimate function. Note that the vector for error estimates is resized to the number of active cells present on the current process, which is less than the total number of active cells on all processors (but more than the number of locally owned active cells); each processor only has a few coarse cells around the locally owned ones, as also explained in step-40.
The local error estimates are then handed to a parallel version of GridRefinement (in namespace parallel::distributed::GridRefinement, see also step-40) which looks at the errors and finds the cells that need refinement by comparing the error values across processors. As in step-31, we want to limit the maximum grid level. So in case some cells have been marked that are already at the finest level, we simply clear the refine flags.
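Put into code, this part of the function looks roughly as follows. This is a sketch: temperature_degree, max_grid_level, and the various member variables are assumed, and the refinement/coarsening fractions shown are merely illustrative.

Vector<float> estimated_error_per_cell(triangulation.n_active_cells());

KellyErrorEstimator<dim>::estimate(
  temperature_dof_handler,
  QGauss<dim - 1>(temperature_degree + 1),
  std::map<types::boundary_id, const Function<dim> *>(),
  temperature_solution,
  estimated_error_per_cell,
  ComponentMask(),
  nullptr,
  0,
  triangulation.locally_owned_subdomain()); // <- restrict to locally owned cells

parallel::distributed::GridRefinement::refine_and_coarsen_fixed_fraction(
  triangulation, estimated_error_per_cell, 0.3, 0.1);

// do not exceed the maximum refinement level
if (triangulation.n_levels() > max_grid_level)
  for (const auto &cell :
       triangulation.active_cell_iterators_on_level(max_grid_level))
    cell->clear_refine_flag();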
With all flags marked as necessary, we can then tell the parallel::distributed::SolutionTransfer objects to get ready to transfer data from one mesh to the next, which they will do when notified by Triangulation as part of the
execute_coarsening_and_refinement() call. The syntax is similar to the non-parallel solution transfer (with the exception that here a pointer to the vector entries is enough). The remainder of the function further down below is then concerned with setting up the data structures again after mesh refinement and restoring the solution vectors on the new mesh.
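In outline, and with the vector names assumed for illustration, the sequence looks like this:

parallel::distributed::SolutionTransfer<dim, TrilinosWrappers::MPI::Vector>
  temperature_trans(temperature_dof_handler);

// register the (ghosted) vectors we want carried over to the new mesh
std::vector<const TrilinosWrappers::MPI::Vector *> x_temperature = {
  &temperature_solution, &old_temperature_solution};
temperature_trans.prepare_for_coarsening_and_refinement(x_temperature);

triangulation.execute_coarsening_and_refinement();
// ...setup_dofs() and re-creation of all matrices and vectors...

// interpolate into fully distributed (non-ghosted) vectors on the new mesh
TrilinosWrappers::MPI::Vector distributed_temp1(temperature_rhs);
TrilinosWrappers::MPI::Vector distributed_temp2(temperature_rhs);
std::vector<TrilinosWrappers::MPI::Vector *> tmp = {&distributed_temp1,
                                                    &distributed_temp2};
temperature_trans.interpolate(tmp);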
enforce constraints to make the interpolated solution conforming on the new mesh:
enforce constraints to make the interpolated solution conforming on the new mesh:
This is the final and controlling function in this class. It, in fact, runs the entire rest of the program and is, once more, very similar to step-31. The only substantial difference is that we use a different mesh now (a GridGenerator::hyper_shell instead of a simple cube geometry).
VectorTools::project supports parallel vector classes with most standard finite elements via deal.II's own native MatrixFree framework: since we use standard Lagrange elements of moderate order this function works well here.
Having so computed the current temperature field, let us set the member variable that holds the temperature nodes. Strictly speaking, we really only need to set
old_temperature_solution since the first thing we will do is to compute the Stokes solution that only requires the previous time step's temperature field. That said, nothing good can come from not initializing the other vectors as well (especially since it's a relatively cheap operation and we only have to do it once at the beginning of the program) if we ever want to extend our numerical method or physical model, and so we initialize
old_temperature_solution and
old_old_temperature_solution as well. The assignment makes sure that the vectors on the left hand side (which were initialized to contain ghost elements as well) also get the correct ghost elements. In other words, the assignment here requires communication between processors:
In order to speed up linear solvers, we extrapolate the solutions from the old time levels to the new one. This gives a very good initial guess, cutting the number of iterations needed in solvers by more than one half. We do not need to extrapolate in the last iteration, so if we reached the final time, we stop here.
As the last thing during a time step (before actually bumping up the number of the time step), we check whether the current time step number is divisible by 100, and if so we let the computing timer print a summary of CPU times spent so far.
Trilinos sadd does not like ghost vectors even as input. Copy into distributed vectors for now:
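A sketch of this extrapolation step, using \(x^\ast = x^n + \frac{\Delta t_n}{\Delta t_{n-1}}\,(x^n - x^{n-1})\) and the copy-to-distributed workaround mentioned in the comment above (vector and time step variable names are assumed):

TrilinosWrappers::MPI::BlockVector distr_solution(stokes_rhs);
distr_solution = stokes_solution;            // x^n, without ghosts
TrilinosWrappers::MPI::BlockVector distr_old_solution(stokes_rhs);
distr_old_solution = old_stokes_solution;    // x^{n-1}, without ghosts

// distr_solution = (1 + dt/dt_old) * distr_solution - dt/dt_old * distr_old_solution
distr_solution.sadd(1. + time_step / old_time_step,
                    -time_step / old_time_step,
                    distr_old_solution);

stokes_solution = distr_solution;            // copy back, updating ghost values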
If we are generating graphical output, do so also for the last time step unless we had just done so before we left the do-while loop
The main() function
The main function is short as usual and very similar to the one in step-31. Since we use a parameter file which is specified as an argument in the command line, we have to read it in here and pass it on to the Parameters class for parsing. If no filename is given in the command line, we simply use the
step-32.prm file which is distributed together with the program.
Because 3d computations are simply very slow unless you throw a lot of processors at them, the program defaults to 2d. You can get the 3d version by changing the constant dimension below to 3.
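A sketch of what such a main() function looks like; the class names follow the step-32 pattern (BoussinesqFlowProblem and its nested Parameters class) but their exact interfaces are assumptions here, and most of the error handling is omitted:

#include <deal.II/base/mpi.h>
#include <deal.II/base/numbers.h>
#include <exception>
#include <iostream>
#include <string>

int main(int argc, char *argv[])
{
  using namespace dealii;

  try
    {
      // initialize MPI (and let each process use all available cores)
      Utilities::MPI::MPI_InitFinalize mpi_initialization(
        argc, argv, numbers::invalid_unsigned_int);

      // use the parameter file given on the command line, or fall back to
      // the step-32.prm file distributed with the program
      const std::string parameter_filename =
        (argc >= 2 ? argv[1] : "step-32.prm");

      const int dim = 2; // change to 3 for the (much more expensive) 3d case
      BoussinesqFlowProblem<dim>::Parameters parameters(parameter_filename);
      BoussinesqFlowProblem<dim>             flow_problem(parameters);
      flow_problem.run();
    }
  catch (std::exception &exc)
    {
      std::cerr << "Exception on processing: " << exc.what() << std::endl;
      return 1;
    }

  return 0;
}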
When run, the program simulates convection in 3d in much the same way as step-31 did, though with an entirely different testcase.
Before we go to this testcase, however, let us show a few results from a slightly earlier version of this program that was solving exactly the testcase we used in step-31, just that we now solve it in parallel and with much higher resolution. We show these results mainly for comparison.
Here are two images that show this higher resolution if we choose a 3d computation in
main() and if we set
initial_refinement=3 and
n_pre_refinement_steps=4. At the time steps shown, the meshes had around 72,000 and 236,000 cells, for a total of 2,680,000 and 8,250,000 degrees of freedom, respectively, more than an order of magnitude more than we had available in step-31:
The computation was done on a subset of 50 processors of the Brazos cluster at Texas A&M University.
Next, we will run step-32 with the parameter file in the directory with one change: we increase the final time to 1e9. Here we are using 16 processors. The command to launch is (note that step-32.prm is the default):
$ mpirun -np 16 ./step-32
Note that running a job on a cluster typically requires going through a job scheduler, which we won't discuss here. The output will look roughly like this:
$ mpirun -np 16 ./step-32
Number of active cells: 12,288 (on 6 levels)
Number of degrees of freedom: 186,624 (99,840+36,864+49,920)
Timestep 0: t=0 years
Rebuilding Stokes preconditioner...
Solving Stokes system... 41 iterations.
Maximal velocity: 60.4935 cm/year
Time step: 18166.9 years
17 CG iterations for temperature
Temperature range: 973 4273.16

Number of active cells: 15,921 (on 7 levels)
Number of degrees of freedom: 252,723 (136,640+47,763+68,320)
Timestep 0: t=0 years
Rebuilding Stokes preconditioner...
Solving Stokes system... 50 iterations.
Maximal velocity: 60.3223 cm/year
Time step: 10557.6 years
19 CG iterations for temperature
Temperature range: 973 4273.16

Number of active cells: 19,926 (on 8 levels)
Number of degrees of freedom: 321,246 (174,312+59,778+87,156)
Timestep 0: t=0 years
Rebuilding Stokes preconditioner...
Solving Stokes system... 50 iterations.
Maximal velocity: 57.8396 cm/year
Time step: 5453.78 years
18 CG iterations for temperature
Temperature range: 973 4273.16
Timestep 1: t=5453.78 years
Solving Stokes system... 49 iterations.
Maximal velocity: 59.0231 cm/year
Time step: 5345.86 years
18 CG iterations for temperature
Temperature range: 973 4273.16
Timestep 2: t=10799.6 years
Solving Stokes system... 24 iterations.
Maximal velocity: 60.2139 cm/year
Time step: 5241.51 years
17 CG iterations for temperature
Temperature range: 973 4273.16
[...]
Timestep 100: t=272151 years
Solving Stokes system... 21 iterations.
Maximal velocity: 161.546 cm/year
Time step: 1672.96 years
17 CG iterations for temperature
Temperature range: 973 4282.57

Number of active cells: 56,085 (on 8 levels)
Number of degrees of freedom: 903,408 (490,102+168,255+245,051)
+---------------------------------------------+------------+------------+
| Total wallclock time elapsed since start    |       115s |            |
|                                             |            |            |
| Section                         | no. calls |  wall time | % of total |
+---------------------------------+-----------+------------+------------+
| Assemble Stokes system          |       103 |      2.82s |       2.5% |
| Assemble temperature matrices   |        12 |     0.452s |      0.39% |
| Assemble temperature rhs        |       103 |      11.5s |        10% |
| Build Stokes preconditioner     |        12 |      2.09s |       1.8% |
| Solve Stokes system             |       103 |      90.4s |        79% |
| Solve temperature system        |       103 |      1.53s |       1.3% |
| Postprocessing                  |         3 |     0.532s |      0.46% |
| Refine mesh structure, part 1   |        12 |      0.93s |      0.81% |
| Refine mesh structure, part 2   |        12 |     0.384s |      0.33% |
| Setup dof systems               |        13 |      2.96s |       2.6% |
+---------------------------------+-----------+------------+------------+
[...]
+---------------------------------------------+------------+------------+
| Total wallclock time elapsed since start    |  9.14e+04s |            |
|                                             |            |            |
| Section                         | no. calls |  wall time | % of total |
+---------------------------------+-----------+------------+------------+
| Assemble Stokes system          |     47045 |  2.05e+03s |       2.2% |
| Assemble temperature matrices   |      4707 |       310s |      0.34% |
| Assemble temperature rhs        |     47045 |   8.7e+03s |       9.5% |
| Build Stokes preconditioner     |      4707 |  1.48e+03s |       1.6% |
| Solve Stokes system             |     47045 |  7.34e+04s |        80% |
| Solve temperature system        |     47045 |  1.46e+03s |       1.6% |
| Postprocessing                  |      1883 |       222s |      0.24% |
| Refine mesh structure, part 1   |      4706 |       641s |       0.7% |
| Refine mesh structure, part 2   |      4706 |       259s |      0.28% |
| Setup dof systems               |      4707 |  1.86e+03s |        2%  |
+---------------------------------+-----------+------------+------------+
The simulation terminates when the time reaches the 1 billion years selected in the input file. You can extrapolate from this how long a simulation would take for a different final time (the time step size ultimately settles on somewhere around 20,000 years, so computing for two billion years will take 100,000 time steps, give or take 20%). As can be seen here, we spend most of the compute time in assembling linear systems and — above all — in solving Stokes systems.
To demonstrate the output we show the output from every 1250th time step here:
The last two images show the grid as well as the partitioning of the mesh for the same computation with 16 subdomains and 16 processors. The full dynamics of this simulation are really only visible by looking at an animation, for example the one shown on this site. This animation is well worth watching due to its artistic quality and entrancing depiction of the evolution of the magma plumes.
If you watch the movie, you'll see that the convection pattern goes through several stages: First, it gets rid of the unstable temperature layering with the hot material overlain by the dense cold material. After this great driver is removed and we have a sort of stable situation, a few blobs start to separate from the hot boundary layer at the inner ring and rise up, with a few cold fingers also dropping down from the outer boundary layer. During this phase, the solution remains mostly symmetric, reflecting the 12-fold symmetry of the original mesh. In a final phase, the fluid enters vigorous chaotic stirring in which all symmetries are lost. This is a pattern that then continues to dominate the flow.
These different phases can also be identified if we look at the maximal velocity as a function of time in the simulation:
Here, the velocity (shown in centimeters per year) becomes very large, on the order of several meters per year, at the beginning when the temperature layering is unstable. It then calms down to relatively small values before picking up again in the chaotic stirring regime. There, it remains in the range of 10-40 centimeters per year, quite within the physically expected region.
3d computations are very expensive computationally. Furthermore, as seen above, interesting behavior only starts after quite a long time, requiring more CPU hours than are available on a typical cluster. Consequently, rather than showing a complete simulation here, let us simply show a couple of pictures we have obtained using the successor to this program, called ASPECT (short for Advanced Solver for Problems in Earth's ConvecTion), that is being developed independently of deal.II and that already incorporates some of the extensions discussed below. The following two pictures show isocontours of the temperature and the partition of the domain (along with the mesh) onto 512 processors:
There are many directions in which this program could be extended. As mentioned at the end of the introduction, most of these are under active development in the ASPECT (short for Advanced Solver for Problems in Earth's ConvecTion) code at the time this tutorial program is being finished. Specifically, the following are certainly topics that one should address to make the program more useful:
Adiabatic heating/cooling: The temperature field we get in our simulations after a while is mostly constant with boundary layers at the inner and outer boundary, and streamers of cold and hot material mixing everything. Yet, this doesn't match our expectation that things closer to the earth core should be hotter than closer to the surface. The reason is that the energy equation we have used does not include a term that describes adiabatic cooling and heating: rock, like gas, heats up as you compress it. Consequently, material that rises up cools adiabatically, and cold material that sinks down heats adiabatically. The correct temperature equation would therefore look somewhat like this:
\begin{eqnarray*} \frac{D T}{Dt} - \nabla \cdot \kappa \nabla T &=& \gamma + \tau\frac{Dp}{Dt}, \end{eqnarray*}
or, expanding the advected derivative \(\frac{D}{Dt} = \frac{\partial}{\partial t} + \mathbf u \cdot \nabla\):
\begin{eqnarray*} \frac{\partial T}{\partial t} + {\mathbf u} \cdot \nabla T - \nabla \cdot \kappa \nabla T &=& \gamma + \tau\left\{\frac{\partial p}{\partial t} + \mathbf u \cdot \nabla p \right\}. \end{eqnarray*}
In other words, as pressure increases in a rock volume ( \(\frac{Dp}{Dt}>0\)) we get an additional heat source, and vice versa.
The time derivative of the pressure is a bit awkward to implement. If necessary, one could approximate using the fact outlined in the introduction that the pressure can be decomposed into a dynamic component due to temperature differences and the resulting flow, and a static component that results solely from the static pressure of the overlying rock. Since the latter is much bigger, one may approximate \(p\approx p_{\text{static}}=-\rho_{\text{ref}} [1+\beta T_{\text{ref}}] \varphi\), and consequently \(\frac{Dp}{Dt} \approx \left\{- \mathbf u \cdot \nabla \rho_{\text{ref}} [1+\beta T_{\text{ref}}]\varphi\right\} = \rho_{\text{ref}} [1+\beta T_{\text{ref}}] \mathbf u \cdot \mathbf g\). In other words, if the fluid is moving in the direction of gravity (downward) it will be compressed and because in that case \(\mathbf u \cdot \mathbf g > 0\) we get a positive heat source. Conversely, the fluid will cool down if it moves against the direction of gravity.
Compressibility: As already hinted at in the temperature model above, mantle rocks are not incompressible. Rather, given the enormous pressures in the earth mantle (at the core-mantle boundary, the pressure is approximately 140 GPa, equivalent to 1,400,000 times atmospheric pressure), rock actually does compress to something around 1.5 times the density it would have at surface pressure. Modeling this presents any number of difficulties. Primarily, the mass conservation equation is no longer \(\textrm{div}\;\mathbf u=0\) but should read \(\textrm{div}(\rho\mathbf u)=0\) where the density \(\rho\) is now no longer spatially constant but depends on temperature and pressure. A consequence is that the model is now no longer linear; a linearized version of the Stokes equation is also no longer symmetric requiring us to rethink preconditioners and, possibly, even the discretization. We won't go into detail here as to how this can be resolved.
Nonlinear material models: As already hinted at in various places, material parameters such as the density, the viscosity, and the various thermal parameters are not constant throughout the earth mantle. Rather, they nonlinearly depend on the pressure and temperature, and in the case of the viscosity on the strain rate \(\varepsilon(\mathbf u)\). For complicated models, the only way to solve such models accurately may be to actually iterate this dependence out in each time step, rather than simply freezing coefficients at values extrapolated from the previous time step(s).
Checkpoint/restart: Running this program in 2d on a number of processors allows solving realistic models in a day or two. However, in 3d, compute times are so large that one runs into two typical problems: (i) On most compute clusters, the queuing system limits run times for individual jobs to 2 or 3 days; (ii) losing the results of a computation due to hardware failures, misconfigurations, or power outages is a shame when running on hundreds of processors for a couple of days. Both of these problems can be addressed by periodically saving the state of the program and, if necessary, restarting the program at this point. This technique is commonly called checkpoint/restart and it requires that the entire state of the program is written to a permanent storage location (e.g. a hard drive). Given the complexity of the data structures of this program, this is not entirely trivial (it may also involve writing gigabytes or more of data), but it can be made easier by realizing that one can save the state between two time steps where it essentially only consists of the mesh and solution vectors; during restart one would then first re-enumerate degrees of freedom in the same way as done before and then re-assemble matrices. Nevertheless, given the distributed nature of the data structures involved here, saving and restoring the state of a program is not trivial. An additional complexity is introduced by the fact that one may want to change the number of processors between runs, for example because one may wish to continue computing on a mesh that is finer than the one used to precompute a starting temperature field at an intermediate time.
Predictive postprocessing: The point of computations like this is not simply to solve the equations. Rather, it is typically the exploration of different physical models and their comparison with things that we can measure at the earth surface, in order to find which models are realistic and which are contradicted by reality. To this end, we need to compute quantities from our solution vectors that are related to what we can observe. Among these are, for example, heatfluxes at the surface of the earth, as well as seismic velocities throughout the mantle as these affect earthquake waves that are recorded by seismographs.
There are many other ways to extend the current program. However, rather than discussing them here, let us point to the much larger open source code ASPECT (see ) that constitutes the further development of step-32 and that already includes many such possible extensions.
Release ns-3.27
ns-3 project
Contents

1 Introduction
   1.1 About ns-3
   1.2 For ns-2 Users
   1.3 Contributing
   1.4 Tutorial Organization
2 Resources
   2.1 The Web
   2.2 Mercurial
   2.3 Waf
   2.4 Development Environment
   2.5 Socket Programming
3 Getting Started
   3.1 Overview
   3.2 Downloading ns-3
   3.3 Building ns-3
   3.4 Testing ns-3
   3.5 Running a Script
4 Conceptual Overview
   4.1 Key Abstractions
   4.2 A First ns-3 Script
   4.3 Ns-3 Source Code
5 Tweaking
   5.1 Using the Logging Module
   5.2 Using Command Line Arguments
   5.3 Using the Tracing System
6 Building Topologies
   6.1 Building a Bus Network Topology
   6.2 Models, Attributes and Reality
   6.3 Building a Wireless Network Topology
7 Tracing
   7.1 Background
   7.2 Overview
   7.3 Real Example
   7.4 Trace Helpers
   7.5 Summary
8 Data Collection
   8.1 Motivation
   8.2 Example Code
   8.3 GnuplotHelper
   8.4 Supported Trace Types
   8.5 FileHelper
   8.6 Summary
9 Conclusion
   9.1 Futures
   9.2 Closing
This is the ns-3 Tutorial. Primary documentation for the ns-3 project is available in five forms:
• ns-3 Doxygen: Documentation of the public APIs of the simulator
• Tutorial (this document), Manual, and Model Library for the latest release and development tree
• ns-3 wiki

This document is written in reStructuredText for Sphinx and is maintained in the doc/tutorial directory of ns-3's source code.
1 Introduction
The ns-3 simulator is a discrete-event network simulator targeted primarily for research and educational use. The ns-3 project, started in 2006, is an open-source project developing ns-3.

The purpose of this tutorial is to introduce new ns-3 users to the system in a structured way. It is sometimes difficult for new users to glean essential information from detailed manuals and to convert this information into working simulations. In this tutorial, we will build several example simulations, introducing and explaining key concepts and features as we go.

As the tutorial unfolds, we will introduce the full ns-3 documentation and provide pointers to source code for those interested in delving deeper into the workings of the system.

A few key points are worth noting at the onset:
• ns-3 is open-source, and the project strives to maintain an open environment for researchers to contribute and share their software.
• ns-3 is not a backwards-compatible extension of ns-2; it is a new simulator.
1.1 About ns-3

ns-3 has been developed to provide an open, extensible network simulation platform, for networking research and education. In brief, ns-3 provides models of how packet data networks work and perform, and provides a simulation engine for users to conduct simulation experiments. Some of the reasons to use ns-3 include to perform studies that are more difficult or not possible to perform with real systems, to study system behavior in a highly controlled, reproducible environment, and to learn about how networks work. Users will note that the available model set in ns-3 focuses on modeling how Internet protocols and networks work, but ns-3 is not limited to Internet systems; several users are using ns-3 to model non-Internet-based systems.

Many simulation tools exist for network simulation studies. Below are a few distinguishing features of ns-3 in contrast to other tools.
• ns-3 is primarily used on Linux systems, although support exists for FreeBSD, Cygwin (for Windows), and native Windows Visual Studio support is in the process of being developed.
• ns-3 is not an officially supported software product of any company. Support for ns-3 is done on a best-effort basis on the ns-3-users mailing list.
1.2 For ns-2 Users

For those familiar with ns-2 (a popular tool that preceded ns-3), the most visible outward change when moving to ns-3 is the choice of scripting language. Programs in ns-2 are scripted in OTcl and results of simulations can be visualized using the Network Animator nam. In ns-3, the simulator is written entirely in C++, and simulation scripts can be written in C++ or in Python. New animators and visualizers are available and under current development. Since ns-3 generates pcap packet trace files, other utilities can be used to analyze traces as well. In this tutorial, we will first concentrate on scripting directly in C++ and interpreting results via trace files.

But there are similarities as well (both, for example, are based on C++ objects, and some code from ns-2 has already been ported to ns-3). We will try to highlight differences between ns-2 and ns-3 as we proceed in this tutorial.

A question that we often hear is "Should I still use ns-2 or move to ns-3?" In this author's opinion, unless the user is somehow vested in ns-2 (either based on existing personal comfort with and knowledge of ns-2, or based on a specific simulation model that is only available in ns-2), a user will be more productive with ns-3 for the following reasons:
• ns-3 is actively maintained with an active, responsive users mailing list, while ns-2 is only lightly maintained and has not seen significant development in its main code tree for over a decade.
• ns-3 provides features not available in ns-2, such as an implementation code execution environment (allowing users to run real implementation code in the simulator)
• ns-3 provides a lower base level of abstraction compared with ns-2, allowing it to align better with how real systems are put together. Some limitations found in ns-2 (such as supporting multiple types of interfaces on nodes correctly) have been remedied in ns-3.

ns-2 has a more diverse set of contributed modules than does ns-3, owing to its long history. However, ns-3 has more detailed models in several popular areas of research (including sophisticated LTE and WiFi models), and its support of implementation code admits a very wide spectrum of high-fidelity models. Users may be surprised to learn that the whole Linux networking stack can be encapsulated in an ns-3 node, using the Direct Code Execution (DCE) framework. ns-2 models can sometimes be ported to ns-3, particularly if they have been implemented in C++.

If in doubt, a good guideline would be to look at both simulators (as well as other simulators), and in particular the models available for your research, but keep in mind that your experience may be better in using the tool that is being actively developed and maintained (ns-3).
1.3 Contributing
ns-3 is a research and educational simulator, by and for the research community. It will rely on the ongoing contributions of the community to develop new models, debug or maintain existing ones, and share results. There are a few policies that we hope will encourage people to contribute to ns-3 like they have for ns-2:
• Open source licensing based on GNU GPLv2 compatibility
• wiki
• Contributed Code page, similar to ns-2's popular Contributed Code page
• Open bug tracker
We realize that if you are reading this document, contributing back to the project is probably not your foremost concern at this point, but we want you to be aware that contributing is in the spirit of the project and that even the act of dropping us a note about your early experience with ns-3 (e.g. "this tutorial section was not clear..."), reports of stale documentation, etc. are much appreciated.
2 Resources
2.1 The Web

There are several important resources of which any ns-3 user must be aware. The main web site is located at and provides access to basic information about the ns-3 system. Detailed documentation is available through the main web site at . You can also find documents relating to the system architecture from this page.

There is a Wiki that complements the main ns-3 web site which you will find at . You will find user and developer FAQs there, as well as troubleshooting guides, third-party contributed code, papers, etc.

The source code may be found and browsed at . There you will find the current development tree in the repository named ns-3-dev. Past releases and experimental repositories of the core developers may also be found there.
2.2 Mercurial
Complex software systems need some way to manage the organization and changes to the underlying code and documentation. There are many ways to perform this feat, and you may have heard of some of the systems that are currently used to do this. The Concurrent Version System (CVS) is probably the most well known.

The ns-3 project uses Mercurial as its source code management system. Although you do not need to know much about Mercurial in order to complete this tutorial, we recommend becoming familiar with Mercurial and using it to access the source code. Mercurial has a web site at , from which you can get binary or source releases of this Software Configuration Management (SCM) system. Selenic (the developer of Mercurial) also provides a tutorial at , and a QuickStart guide at . You can also find vital information about using Mercurial and ns-3 on the main ns-3 web site.
2.3 Waf
Once you have source code downloaded to your local system, you will need to compile that source to produce usable programs. Just as in the case of source code management, there are many tools available to perform this function. Probably the most well known of these tools is make. Along with being the most well known, make is probably the most difficult to use in a very large and highly configurable system. Because of this, many alternatives have been developed. Recently these systems have been developed using the Python language.

The build system Waf is used on the ns-3 project. It is one of the new generation of Python-based build systems. You will not need to understand any Python to build the existing ns-3 system.
For those interested in the gory details of Waf, the main web site can be found at.
2.4 Development Environment

As mentioned above, scripting in ns-3 is done in C++ or Python. Most of the ns-3 API is available in Python, but the models are written in C++ in either case. A working knowledge of C++ and object-oriented concepts is assumed in this document. We will take some time to review some of the more advanced concepts or possibly unfamiliar language features, idioms and design patterns as they appear. We don't want this tutorial to devolve into a C++ tutorial, though, so we do expect a basic command of the language. There are an almost unimaginable number of sources of information on C++ available on the web or in print.

If you are new to C++, you may want to find a tutorial- or cookbook-based book or web site and work through at least the basic features of the language before proceeding. For instance, this tutorial.

The ns-3 system uses several components of the GNU "toolchain" for development. A software toolchain is the set of programming tools available in the given environment. For a quick review of what is included in the GNU toolchain see . ns-3 uses gcc, GNU binutils, and gdb. However, we do not use the GNU build system tools, neither make nor autotools. We use Waf for these functions.

Typically an ns-3 author will work in Linux or a Linux-like environment. For those running under Windows, there do exist environments which simulate the Linux environment to various degrees. The ns-3 project has in the past (but not presently) supported development in the Cygwin environment for these users. See for details on downloading, and visit the ns-3 wiki for more information about Cygwin and ns-3. MinGW is presently not officially supported. Another alternative to Cygwin is to install a virtual machine environment such as VMware server and install a Linux virtual machine.
2.5 Socket Programming

We will assume a basic facility with the Berkeley Sockets API in the examples used in this tutorial. If you are new to sockets, we recommend reviewing the API and some common usage cases. For a good overview of programming TCP/IP sockets we recommend TCP/IP Sockets in C, Donahoo and Calvert. There is an associated web site that includes source for the examples in the book, which you can find at .

If you understand the first four chapters of the book (or for those who do not have access to a copy of the book, the echo clients and servers shown in the website above) you will be in good shape to understand the tutorial. There is a similar book on multicast sockets, Multicast Sockets, Makofske and Almeroth, that covers material you may need to understand if you look at the multicast examples in the distribution.
3 Getting Started
This section is aimed at getting a user to a working state starting with a machine that may never have had ns-3 installed. It covers supported platforms, prerequisites, ways to obtain ns-3, ways to build ns-3, and ways to verify your build and run simple programs.
3.1 Overview
ns-3 is built as a system of software libraries that work together. User programs can be written that link with (or import from) these libraries. User programs are written in either the C++ or Python programming languages.

ns-3 is distributed as source code, meaning that the target system needs to have a software development environment to build the libraries first, then build the user program. ns-3 could in principle be distributed as pre-built libraries for selected systems, and in the future it may be distributed that way, but at present, many users actually do their work by editing ns-3 itself, so having the source code around to rebuild the libraries is useful. If someone would like to undertake the job of making pre-built libraries and packages for operating systems, please contact the ns-developers mailing list.

In the following, we'll look at two ways of downloading and building ns-3. The first is to download and build an official release from the main web site. The second is to fetch and build development copies of ns-3. We'll walk through both.

Before doing so, you will need to ensure that a number of additional libraries are present on your system. ns-3 provides a wiki page that includes pages with many useful hints and tips. One such page is the "Installation" page; the "Prerequisites" section of this wiki page explains which packages are required to support common ns-3 options, and also provides the commands used to install them for common Linux variants. Cygwin users will have to use the Cygwin installer.

3.2 Downloading ns-3

You can download a tarball release at , or you can work with repositories using Mercurial. We recommend
using Mercurial unless there's a good reason not to. See the end of this section for instructions on how to get a tarball release.

The simplest way to get started using Mercurial repositories is to use the ns-3-allinone environment. This is a set of scripts that manages the downloading and building of various subsystems of ns-3 for you. We recommend that you begin your ns-3 work in this environment.

One practice is to create a directory called workspace in one's home directory under which one can keep local Mercurial repositories. Any directory name will do, but we'll assume that workspace is used herein (note: repos may also be used in some documentation as an example directory name).
A tarball is a particular format of software archive where multiple files are bundled together and the archive possibly compressed. ns-3 software releases are provided via a downloadable tarball. The process for downloading ns-3 via tarball is simple; you just have to pick a release, download it and decompress it.

Let's assume that you, as a user, wish to build ns-3 in a local directory called workspace. If you adopt the workspace directory approach, you can get a copy of a release, unpack it there, and list the resulting directory:

$ ls
bake  build  constants.py  ns-3.27  README

Bake can also be used to download development versions of the ns-3 software, and to download and build extensions to the base ns-3 distribution, such as the Direct Code Execution environment, Network Simulation Cradle, ability to create new Python bindings, and others.

In recent ns-3 releases, Bake has been included in the release tarball. The configuration file included in the released version will allow one to download any software that was current at the time of the release. That is, for example, the version of Bake that is distributed with the ns-3.21 release can be used to fetch components for that ns-3 release or earlier, but can't be used to fetch components for later releases (unless the bakeconf.xml file is updated).

You can also get the most recent copy of bake by typing the following into your Linux shell (assuming you have installed Mercurial):

$ cd
$ mkdir workspace
$ cd workspace
$ hg clone
As the hg (Mercurial) command executes, you should see something like the following displayed,
...
destination directory: bake
requesting all changes
adding changesets
adding manifests
adding file changes
added 339 changesets with 796 changes to 63 files
updating to branch default
45 files updated, 0 files merged, 0 files removed, 0 files unresolved
After the clone command completes, you should have a directory called bake, the contents of which should look something like the following:

$ ls
bake  bakeconf.xml  doc  generate-binary.py  TODO
bake.py  examples  test
Notice that you really just downloaded some Python scripts and a Python module called bake. The next step will be to use those scripts to download and build the ns-3 distribution of your choice.

There are a few configuration targets available:

1. ns-3.27: the module corresponding to the release; it will download components similar to the release tarball.
2. ns-3-dev: a similar module but using the development code tree
3. ns-allinone-3.27: the module that includes other optional features such as click routing, openflow for ns-3, and the Network Simulation Cradle
4. ns-3-allinone: similar to the released version of the allinone module, but for development code.

The current development snapshot (unreleased) of ns-3 may be found at . The developers attempt to keep these repositories in consistent, working states, but they are in a development area with unreleased code present, so you may want to consider staying with an official release if you do not need newly-introduced features.

A word about running bake: bake works by downloading source packages into a source directory, and installing libraries into a build directory. bake can be run by referencing the binary, but if one chooses to run bake from outside of the directory it was downloaded into, it is advisable to put the bake directory on PATH and PYTHONPATH so that other programs can find the executables and libraries created by bake. Although several bake use cases do not require setting PATH and PYTHONPATH as above, full builds typically do.
In particular, download tools such as Mercurial, CVS, GIT, and Bazaar are our principal concerns at this point, since they allow us to fetch the code. Please install missing tools at this stage, in the usual way for your system (if you are able to), or contact your system administrator as needed to install these tools.

Next, try to download the software:

$ ./bake.py download
The above suggests that seven sources have been downloaded. Check the source directory now and type ls; one should see:

$ ls
BRITE          netanim-3.108  openflow-ns-3.25  pygccxml-1.9.1
castxml        ns-3.27        pybindgen         v1.9.1.tar.gz
click-ns-3.25  nsc-0.5.3      pygccxml
When working from a released tarball, the first time you build the ns-3 project you can build using a convenience program found in the allinone directory. This program is called build.py. This program will get the project configured for you in the most commonly useful way. However, please note that more advanced configuration and work with ns-3 will typically involve using the native ns-3 build system, Waf, to be introduced later in this tutorial.

If you downloaded using a tarball you should have a directory called something like ns-allinone-3.27 under your ~/workspace directory. Type the following:

$ ./build.py --enable-examples --enable-tests
Because we are working with examples and tests in this tutorial, and because they are not built by default in ns-3, the arguments for build.py tell it to build them for us. The program also defaults to building all available modules. Later, you can build ns-3 without examples and tests, or eliminate the modules that are not necessary for your work, if you wish.

You will see lots of typical compiler output messages displayed as the build script builds the various pieces you downloaded. Eventually you should see the following:

Waf: Leaving directory `/path/to/workspace/ns-allinone-3.27/ns-3.27/build'
'build' finished successfully (6m25.032s)
Modules built:
antenna                  aodv                     applications
bridge                   buildings                config-store
core                     csma                     csma-layout
dsdv                     dsr                      energy
fd-net-device            flow-monitor             internet
internet-apps            lr-wpan                  lte
mesh                     mobility                 mpi
netanim (no Python)      network                  nix-vector-routing
olsr                     openflow (no Python)     point-to-point
point-to-point-layout    propagation              sixlowpan
spectrum                 stats                    tap-bridge
test (no Python)         topology-read            traffic-control
uan                      virtual-net-device       visualizer
wave                     wifi                     wimax
This just means that some ns-3 modules that have dependencies on outside libraries may not have been built, or thatthe configuration specifically asked not to build them. It does not mean that the simulator did not build successfully orthat it will provide wrong results for the modules listed as being built.
If you used bake above to fetch source code from project repositories, you may continue to use it to build ns-3. Type

$ ./bake.py build
Hint: you can also perform both steps, download and build, by calling bake.py deploy.

There may be failures to build all components, but the build will proceed anyway if the component is optional. For example, a common issue at the moment is that castxml may not build via the bake build tool on all platforms; in this case, the affected optional component is simply skipped. Most users will not need to do anything about this (or to do so until they are more involved with ns-3 changes), so such warnings might be safely ignored for now.

If there happens to be a failure, please have a look at what the following command tells you; it may give a hint as to a missing dependency:

$ ./bake.py show
This will list out the various dependencies of the packages you are trying to build.
Up to this point, we have used either the build.py script, or the bake tool, to get started with building ns-3. These tools are useful for building ns-3 and supporting libraries, and they call into the ns-3 directory to call the Waf build tool to do the actual building. Most users quickly transition to using Waf directly to configure and build ns-3; at this point we will make some changes to the configuration of the project. Probably the most useful configuration change you can make will be to build the optimized version of the code. By default you have configured your project to build the debug version. Let's tell the project to make an optimized build. To explain to Waf that it should do optimized builds that include the examples and tests, you will need to execute the following commands:

$ ./waf clean
$ ./waf configure --build-profile=optimized --enable-examples --enable-tests
This runs Waf out of the local directory (which is provided as a convenience for you). The first command to clean out the previous build is not typically strictly necessary but is good practice (but see Build Profiles, below); it will remove the previously built libraries and object files found in directory build/. When the project is reconfigured and the build system checks for various dependencies, you should see output that looks similar to the following:
Setting top to                           : .
Setting out to                           : build
Checking for 'gcc' (c compiler)          : /usr/bin/gcc
Checking for cc version                  : 4.2.1
Checking for 'g++' (c++ compiler)        : /usr/bin/g++
Checking boost includes                  : 1_46_1
Checking boost libs                      : ok
Checking for boost linkage               : ok
Checking for click location              : not found
Checking for program pkg-config          : /sw/bin/pkg-config
Checking for 'gtk+-2.0' >= 2.12          : yes
Checking for 'libxml-2.0' >= 2.7         : yes
Checking for type uint128_t              : not found
Checking for type __uint128_t            : yes
Checking high precision implementation   : 128-bit integer (default)
Checking for header stdint.h             : yes
Checking for header inttypes.h           : yes
Checking for header sys/inttypes.h       : not found
Checking for header sys/types.h          : yes
Checking for header sys/stat.h           : yes
Checking for header dirent.h             : yes
Checking for header stdlib.h             : yes
Checking for header signal.h             : yes
Checking for header pthread.h            : yes
Checking for header stdint.h             : yes
Checking for header inttypes.h           : yes
Checking for header sys/inttypes.h       : not found
Checking for library rt                  : not found
Checking for header netpacket/packet.h   : not found
Checking for header sys/ioctl.h          : yes
Checking for header net/if.h             : not found
Checking for header net/ethernet.h       : yes
Checking for header linux/if_tun.h       : not found
Checking for header netpacket/packet.h   : not found
Checking for NSC location                : not found
Checking for 'mpic++'                    : yes
Checking for 'sqlite3'                   : yes
Checking for header linux/if_tun.h       : not found
Checking for program sudo                : /usr/bin/sudo
Checking for program valgrind            : /sw/bin/valgrind
Checking for 'gsl'                       : yes
Checking for compilation flag -Wno-error=deprecated-d... support : ok
Checking for compilation flag -Wno-error=deprecated-d... support : ok
Checking for compilation flag -fstrict-aliasing... support       : ok
Checking for compilation flag -fstrict-aliasing... support       : ok
Checking for compilation flag -Wstrict-aliasing... support       : ok
Checking for compilation flag -Wstrict-aliasing... support       : ok
Checking for program doxygen             : /usr/local/bin/doxygen
---- Summary of optional NS-3 features:
Build profile                            : debug
BRITE Integration                        : not enabled (BRITE not enabled (see option --with-brite))
Build directory                          : build
Build examples                           : enabled
Build tests                              : enabled
Emulated Net Device                      : enabled (<netpacket/packet.h> include not detected)
Emulation FdNetDevice                    : not enabled (needs netpacket/packet.h)
File descriptor NetDevice                : enabled
GNU Scientific Library (GSL)             : enabled
GtkConfigStore                           : enabled
MPI Support                              : enabled
NS-3 Click Integration                   : not enabled (nsclick not enabled (see option --with-nsclick))
NS-3 OpenFlow Integration                : not enabled (Required boost libraries not found, missing: system, sig
Network Simulation Cradle                : not enabled (NSC not found (see option --with-nsc))
PlanetLab FdNetDevice                    : not enabled (PlanetLab operating system not detected (see option --fo
PyViz visualizer                         : enabled
Python Bindings                          : enabled
Real Time Simulator                      : enabled (librt is not available)
SQlite stats data output                 : enabled
Tap Bridge                               : not enabled (<linux/if_tun.h> include not detected)
Tap FdNetDevice                          : not enabled (needs linux/if_tun.h)
Threading Primitives                     : enabled
Use sudo to set suid bit                 : not enabled (option --enable-sudo not selected)
XmlIo                                    : enabled
'configure' finished successfully (1.944s)
Note the last part of the above output. Some ns-3 options are not enabled by default or require support from the underlying system to work properly. For instance, to enable XmlIo, the library libxml-2.0 must be found on the system. If this library were not found, the corresponding ns-3 feature would not be enabled and a message would be displayed. Note further that there is a feature to use the program sudo to set the suid bit of certain programs. This is not enabled by default and so this feature is reported as "not enabled."
A command exists for checking which profile is currently active for an already configured project:
$ ./waf --check-profile
Waf: Entering directory `/path/to/ns-3-allinone/ns-3.27/build'
Build profile: debug
The build.py script discussed above also supports the --enable-examples and --enable-tests arguments but, in general, does not directly support other waf options; for example, this will not work:
$ ./build.py --disable-python
and will result in
build.py: error: no such option: --disable-python
However, the special operator -- can be used to pass additional options through to waf, so instead of the above, the following will work:
$ ./build.py -- --disable-python
Some Waf commands are only meaningful during the configure phase and some commands are valid in the build phase. For example, if you wanted to use the emulation features of ns-3, you might want to enable setting the suid bit using sudo as described above. This turns out to be a configuration-time command, and so you could reconfigure using ./waf configure with the --enable-sudo option added to your other configuration options.
Build Profiles
We already saw how you can configure Waf for debug or optimized builds:
$ ./waf --build-profile=debug
(Table not reproduced here: the build profile controls logging, assertions, and compiler optimization; the debug profile, for example, defines NS3_BUILD_PROFILE_DEBUG and leaves the NS_LOG... and NS_ASSERT... macros enabled, while the optimized profile disables them.)
This allows you to work with multiple builds rather than always overwriting the last build. When you switch, Waf will only compile what it has to, instead of recompiling everything.
When you do switch build profiles like this, you have to be careful to give the same configuration parameters each time. It may be convenient to define some environment variables to help you avoid mistakes:
$ export NS3CONFIG="--enable-examples --enable-tests"
$ export NS3DEBUG="--build-profile=debug --out=build/debug"
$ export NS3OPT="--build-profile=optimized --out=build/optimized"
In the examples above, Waf uses the GCC C++ compiler, g++, for building ns-3. However, it's possible to change the C++ compiler that is used.
Install
Waf may be used to install libraries in various places on the system. The default location where libraries and executables are built is in the build directory, and because Waf knows the location of these libraries and executables, it is not necessary to install the libraries elsewhere.
If users choose to install things outside of the build directory, users may issue the ./waf install command. By default, the prefix for installation is /usr/local, so ./waf install will install programs into /usr/local/bin, libraries into /usr/local/lib, and headers into /usr/local/include. Superuser privileges are typically needed to install to the default prefix, so the typical command would be sudo ./waf install. When running programs with Waf, Waf will first prefer to use shared libraries in the build directory, then will look for libraries in the library path configured in the local environment. So when installing libraries to the system, be aware of which copies are actually being used; the project can also be configured to install at a different prefix.
In summary, it is not necessary to call ./waf install to use ns-3. Most users will not need this command since Waf will pick up the current libraries from the build directory, but some users may find it useful if their use case involves working with programs outside of the ns-3 directory.
One Waf
There is only one Waf script, at the top level of the ns-3 source tree. As you work, you may find yourself spending a lot ...
$ waff build
$ cd scratch
$ waff build
It might be tempting in a module directory to add a trivial waf script along the lines of exec ../../waf. Please don't. It's confusing to newcomers, and when done poorly it leads to subtle build errors. The solutions above are the way to go.
You can run the unit tests of the ns-3 distribution by running the ./test.py -c core script:
$ ./test.py -c core
These tests are run in parallel by Waf. You should eventually see a report saying that
92 of 92 tests passed (92 passed, 0 failed, 0 crashed, 0 valgrind errors)
Modules built:
aodv              applications             bridge
click             config-store             core
csma              csma-layout              dsdv
emu               energy                   flow-monitor
internet          lte                      mesh
mobility          mpi                      netanim
network           nix-vector-routing       ns3tcp
ns3wifi           olsr                     openflow
point-to-point    point-to-point-layout    propagation
spectrum          stats                    tap-bridge
template          test                     tools
topology-read     uan                      virtual-net-device
visualizer        wifi                     wimax
...
This command is typically run by users to quickly verify that an ns-3 distribution has built correctly. (Note that the order of the PASS: ... lines can vary, which is okay. What's important is that the summary line at the end reports that all tests passed; none failed or crashed.)
We typically run scripts under the control of Waf. This allows the build system to ensure that the shared library paths are set correctly and that the libraries are available at run time. To run a program, simply use the --run option in Waf. Let's run the ns-3 equivalent of the ubiquitous hello world program by typing the following:
$ ./waf --run hello-simulator
Waf first checks to make sure that the program is built correctly and executes a build if required. Waf then executes the program, which produces the following output.
Hello Simulator
If you do not see the Hello Simulator output, you may have switched to an optimized build; reconfigure with the debug build profile (as shown earlier) to tell Waf to build the debug versions of the ns-3 programs that include the examples and tests. You must still build the actual debug version of the code by typing
$ ./waf
Now, if you run the hello-simulator program, you should see the expected output.
Substitute your program name for <ns3-program>, and the arguments for <args>. The --command-template argument to Waf is basically a recipe for constructing the actual command line Waf should use to execute the program. Waf checks that the build is complete, sets the shared library paths, then invokes the executable using the provided command line template, inserting the program name for the %s placeholder. (I admit this is a bit awkward, but that's the way it is. Patches welcome!)
Another particularly useful example is to run a test suite by itself. Let's assume that a mytest test suite exists (it doesn't). Above, we used the ./test.py script to run a whole slew of tests in parallel, by repeatedly invoking the real testing program, test-runner. To invoke test-runner directly for a single test:
$ ./waf --run test-runner --command-template="%s --suite=mytest --verbose"
This passes the arguments to the test-runner program. Since mytest does not exist, an error message will be generated. To print the available test-runner options:
$ ./waf --run test-runner --command-template="%s --help"
3.5.2 Debugging
... in the --command-template argument. The --args tells gdb that the remainder of the command line belongs to the "inferior" program. (Some gdb's don't understand the --args feature. In this case, omit the program arguments from the --command-template ...)
... will be written. But what if you want to keep those out of the ns-3 source tree? Use the --cwd argument:
$ ./waf --cwd=...
It may be more convenient to start with your working directory where you want the output files, in which case a little indirection can help:
$ function waff {
    CWD="$PWD"
    cd $NS3DIR >/dev/null
    ./waf --cwd="$CWD" $*
    cd - >/dev/null
  }
This embellishment of the previous version saves the current working directory, cd's to the Waf directory, then instructs Waf to change the working directory back to the saved current working directory before running the program.
FOUR
CONCEPTUAL OVERVIEW
The first thing we need to do before actually starting to look at or write ns-3 code is to explain a few core concepts and abstractions in the system. Much of this may appear transparently obvious to some, but we recommend taking the time to read through this section just to ensure you are starting on a firm foundation.
In this section, we’ll review some terms that are commonly used in networking, but have a specific meaning in ns-3.
4.1.1 Node
In Internet jargon, a computing device that connects to a network is called a host or sometimes an end system. Because ns-3 is a network simulator, not specifically an Internet simulator, we intentionally do not use the term host since it is closely associated with the Internet and its protocols. Instead, we use a more generic term also used by other simulators that originates in Graph Theory — the node.
In ns-3 the basic computing device abstraction is called the node. This abstraction is represented in C++ by the class Node. The Node class provides methods for managing the representations of computing devices in simulations.
You should think of a Node as a computer to which you will add functionality. One adds things like applications, protocol stacks and peripheral cards with their associated drivers to enable the computer to do useful work. We use the same basic model in ns-3.
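As a small illustration (this hypothetical snippet is not part of the tutorial scripts, which instead use the NodeContainer helper introduced later), a bare Node can be created directly with ns-3's object factory:

#include "ns3/core-module.h"
#include "ns3/network-module.h"

using namespace ns3;

int
main (int argc, char *argv[])
{
  // Create one simulated computing device; applications, protocol stacks
  // and net devices would be added to it to make it do useful work.
  Ptr<Node> node = CreateObject<Node> ();
  return 0;
}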
4.1.2 Application
Typically, computer software is divided into two broad classes. System Software organizes various computer resources such as memory, processor cycles, disk, network, etc., according to some computing model. System software usually does not use those resources to complete tasks that directly benefit a user. A user would typically run an application that acquires and uses the resources controlled by the system software to accomplish some goal.
Often, the line of separation between system and application software is made at the privilege level change that happens in operating system traps. In ns-3 there is no real concept of operating system and especially no concept of privilege levels or system calls. We do, however, have the idea of an application. Just as software applications run on computers to perform tasks in the "real world," ns-3 applications run on ns-3 Nodes to drive simulations in the simulated world.
In ns-3 the basic abstraction for a user program that generates some activity to be simulated is the application. This abstraction is represented in C++ by the class Application. The Application class provides methods for managing the representations of our version of user-level applications in simulations. Developers are expected to specialize the Application class in the object-oriented programming sense to create new applications. In this tutorial, we will use specializations of class Application called UdpEchoClientApplication and
UdpEchoServerApplication. As you might expect, these applications compose a client/server application set used to generate and echo simulated network packets.
4.1.3 Channel
In the real world, one can connect a computer to a network. Often the media over which data flows in these networks are called channels. When you connect your Ethernet cable to the plug in the wall, you are connecting your computer to an Ethernet communication channel. In the simulated world of ns-3, one connects a Node to an object representing a communication channel. Here the basic communication subnetwork abstraction is called the channel and is represented in C++ by the class Channel.
The Channel class provides methods for managing communication subnetwork objects and connecting nodes to them. Channels may also be specialized by developers in the object oriented programming sense. A Channel specialization may model something as simple as a wire. The specialized Channel can also model things as complicated as a large Ethernet switch, or three-dimensional space full of obstructions in the case of wireless networks.
We will use specialized versions of the Channel called CsmaChannel, PointToPointChannel and WifiChannel in this tutorial. The CsmaChannel, for example, models a version of a communication subnetwork that implements a carrier sense multiple access communication medium. This gives us Ethernet-like functionality.
It used to be the case that if you wanted to connect a computer to a network, you had to buy a specific kind of network cable and a hardware device called (in PC terminology) a peripheral card that needed to be installed in your computer. If the peripheral card implemented some networking function, they were called Network Interface Cards, or NICs. Today most computers come with the network interface hardware built in and users don't see these building blocks.
A NIC will not work without a software driver to control the hardware. In Unix (or Linux), a piece of peripheral hardware is classified as a device. Devices are controlled using device drivers, and network devices (NICs) are controlled using network device drivers collectively known as net devices. In Unix and Linux you refer to these net devices by names such as eth0.
In ns-3 the net device abstraction covers both the software driver and the simulated hardware. A net device is "installed" in a Node in order to enable the Node to communicate with other Nodes in the simulation via Channels. Just as in a real computer, a Node may be connected to more than one Channel via multiple NetDevices.
The net device abstraction is represented in C++ by the class NetDevice. The NetDevice class provides methods for managing connections to Node and Channel objects; and may be specialized by developers in the object-oriented programming sense. We will use the several specialized versions of the NetDevice called CsmaNetDevice, PointToPointNetDevice, and WifiNetDevice in this tutorial. Just as an Ethernet NIC is designed to work with an Ethernet network, the CsmaNetDevice is designed to work with a CsmaChannel; the PointToPointNetDevice is designed to work with a PointToPointChannel and a WifiNetDevice is designed to work with a WifiChannel.
In a real network, you will find host computers with added (or built-in) NICs. In ns-3 we would say that you will find Nodes with attached NetDevices. In a large simulated network you will need to arrange many connections between Nodes, NetDevices and Channels.
Since connecting NetDevices to Nodes, NetDevices to Channels, assigning IP addresses, etc., are such common tasks in ns-3, we provide what we call topology helpers to make this as easy as possible. For example, it may take many distinct ns-3 core operations to create a NetDevice, add a MAC address, install that net device on a Node, configure the node's protocol stack, and then connect the NetDevice to a Channel. Even more operations would
be required to connect multiple devices onto multipoint channels and then to connect individual networks together into internetworks. We provide topology helper objects that combine those many distinct operations into an easy to use model for your convenience.
If you downloaded the system as was suggested above, you will have a release of ns-3 in a directory called repos under your home directory. Change into that release directory, and you should find a directory structure something like the following:
AUTHORS          examples         scratch          utils            waf.bat*
bindings         LICENSE          src              utils.py         waf-tools
build            ns3              test.py*         utils.pyc        wscript
CHANGES.html     README           testpy-output    VERSION          wutils.py
doc              RELEASE_NOTES    testpy.supp      waf*             wutils.pyc
Change into the examples/tutorial directory. You should see a file named first.cc located there. This is a script that will create a simple point-to-point link between two nodes and echo a single packet between the nodes. Let's take a look at that script line by line, so go ahead and open first.cc in your favorite editor.
4.2.1 Boilerplate
The first line in the file is an emacs mode line. This tells emacs about the formatting conventions (coding style) we use in our source code.
/* -*- Mode:C++; c-file-style:"gnu"; indent-tabs-mode:nil; -*- */
This is always a somewhat controversial subject, so we might as well get it out of the way immediately. The ns-3 project, like most large projects, has adopted a coding style to which all contributed code must adhere. If you want to contribute your code to the project, you will eventually have to conform to the ns-3 coding standard as described in the file doc/codingstd.txt or shown on the project web page here.
We recommend that you, well, just get used to the look and feel of ns-3 code and adopt this standard whenever you are working with our code. All of the development team and contributors have done so with various amounts of grumbling. The emacs mode line above makes it easier to get the formatting correct if you use the emacs editor.
The ns-3 simulator is licensed using the GNU General Public License. You will see the appropriate GNU legalese at the head of every file in the ns-3 distribution. Often you will see a copyright notice for one of the institutions involved in the ns-3 project above the GPL text, and an author listed below it.
The script proper then begins with a set of module include statements. To help our high-level script users deal with the large number of include files present in the system, we group includes according to relatively large modules. We provide a single include file that will recursively load all of the include files used in each module. Rather than having to look up exactly what header you need, and possibly have to get a number of dependencies right, we give you the ability to load a group of files at a large granularity. This is not the most efficient approach but it certainly makes writing scripts much easier.
Each of the ns-3 include files is placed in a directory called ns3 (under the build directory) during the build process to help avoid include file name collisions. The ns3/core-module.h file corresponds to the ns-3 module you will find in the directory src/core in your downloaded release distribution. If you list this directory you will find a large number of header files. When you do a build, Waf will place public header files in an ns3 directory under the appropriate build/debug or build/optimized directory depending on your configuration. Waf will also automatically generate a module include file to load all of the public header files.
Since you are, of course, following this tutorial religiously, you will already have done a
$ ./waf -d debug --enable-examples --enable-tests configure
in order to configure the project to perform debug builds that include examples and tests. You will also have done a
$ ./waf
to build the project. So now if you look in the directory ../../build/debug/ns3 you will find the four module include files shown above. You can take a look at the contents of these files and find that they do include all of the public include files in their respective modules.
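For reference, the module includes used by first.cc are along the following lines (treat the exact set as illustrative; it can vary between releases), followed by the using declaration discussed next:

// One aggregated include per ns-3 module used by the script, for example:
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/internet-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/applications-module.h"

using namespace ns3;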
The ns-3 project is implemented in a C++ namespace called ns3. This groups all ns-3-related declarations in a scope outside the global namespace, which we hope will help with integration with other code. The C++ using statement introduces the ns-3 namespace into the current (global) declarative region. This is a fancy way of saying that after this declaration, you will not have to type the ns3:: scope resolution operator before all of the ns-3 code in order to use it. If you are unfamiliar with namespaces, please consult almost any C++ tutorial and compare the ns3 namespace and usage here with instances of the std namespace and the using namespace std; statements you will often find in discussions of cout and streams.
4.2.4 Logging
We will use this statement as a convenient place to talk about our Doxygen documentation system. If you look at the project web site, ns-3 project, you will find a link to "Documentation" in the navigation bar. If you select this link, you will be taken to our documentation page. There is a link to "Latest Release" that will take you to the documentation for the latest stable release of ns-3. If you select the "API Documentation" link, you will be taken to the ns-3 API documentation page.
Along the left side, you will find a graphical representation of the structure of the documentation. A good place to start is the NS-3 Modules "book" in the ns-3 navigation tree. If you expand Modules you will see a list of ns-3 module documentation. The concept of module here ties directly into the module include files discussed above. The ns-3 logging subsystem is discussed in the Using the Logging Module section, so we'll get to it later in this tutorial, but you can find out about the above statement by looking at the Core module, then expanding the Debugging tools book, and then selecting the Logging page. Click on Logging.
You should now be looking at the Doxygen documentation for the Logging module. In the list of Macros at the top of the page you will see the entry for NS_LOG_COMPONENT_DEFINE. Before jumping in, it would probably be good to look for the "Detailed Description" of the logging module to get a feel for the overall operation. You can either scroll down or select the "More..." link under the collaboration diagram to do this.
Once you have a general idea of what is going on, go ahead and take a look at the specific NS_LOG_COMPONENT_DEFINE documentation. I won't duplicate the documentation here, but to summarize, this line declares a logging component called FirstScriptExample that allows you to enable and disable console message logging by reference to the name.
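For reference, the statement being described, together with the main declaration discussed next, is essentially the following skeleton (the body of main is elided; the same logging statement is listed again in the Tweaking chapter):

NS_LOG_COMPONENT_DEFINE ("FirstScriptExample");

int
main (int argc, char *argv[])
{
  ...
}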
This is just the declaration of the main function of your program (script). Just as in any C++ program, you need to define a main function that will be the first function run. There is nothing at all special here. Your ns-3 script is just a C++ program.
The next line sets the time resolution to one nanosecond, which happens to be the default value:
Time::SetResolution (Time::NS);
The resolution is the smallest time value that can be represented (as well as the smallest representable difference between two time values). You can change the resolution exactly once. The mechanism enabling this flexibility is somewhat memory hungry, so once the resolution has been set explicitly we release the memory, preventing further updates. (If you don't set the resolution explicitly, it will default to one nanosecond, and the memory will be released when the simulation starts.)
The next two lines of the script are used to enable two logging components that are built into the Echo Client and Echo Server applications:
LogComponentEnable("UdpEchoClientApplication", LOG_LEVEL_INFO);
LogComponentEnable("UdpEchoServerApplication", LOG_LEVEL_INFO);
If you have read over the Logging component documentation you will have seen that there are a number of levels of logging verbosity/detail that you can enable on each component. These two lines of code enable debug logging at the INFO level for echo clients and servers. This will result in the application printing out messages as packets are sent and received during the simulation.
Now we will get directly to the business of creating a topology and running a simulation. We use the topology helper objects to make this job as easy as possible.
NodeContainer
The next two lines of code in our script will actually create the ns-3 Node objects that will represent the computers in the simulation.
NodeContainer nodes;
nodes.Create (2);
Let's find the documentation for the NodeContainer class before we continue. Another way to get into the documentation for a given class is via the Classes tab in the Doxygen pages. If you still have the Doxygen handy, just scroll up to the top of the page and select the Classes tab. You should see a new set of tabs appear, one of which is Class List. Under that tab you will see a list of all of the ns-3 classes. Scroll down, looking for ns3::NodeContainer. When you find the class, go ahead and select it to go to the documentation for the class.
You may recall that one of our key abstractions is the Node. This represents a computer to which we are going to add things like protocol stacks, applications and peripheral cards. The NodeContainer topology helper provides a convenient way to create, manage and access any Node objects that we create in order to run a simulation. The first line above just declares a NodeContainer which we call nodes. The second line calls the Create method on the nodes object and asks the container to create two nodes. As described in the Doxygen, the container calls down into the ns-3 system proper to create two Node objects and stores pointers to those objects internally.
The nodes as they stand in the script do nothing. The next step in constructing a topology is to connect our nodes together into a network. The simplest form of network we support is a single point-to-point link between two nodes. We'll construct one of those links here.
PointToPointHelper
We are constructing a point to point link, and, in a pattern which will become quite familiar to you, we use a topology helper object to do the low-level work required to put the link together. Recall that two of our key abstractions are the NetDevice and the Channel. In the real world, these terms correspond roughly to peripheral cards and network cables. Typically these two things are intimately tied together and one cannot expect to interchange, for example, Ethernet devices and wireless channels. Our Topology Helpers follow this intimate coupling and therefore you will use a single PointToPointHelper to configure and connect ns-3 PointToPointNetDevice and PointToPointChannel objects in this script.
The next three lines in the script are,
PointToPointHelper pointToPoint;
pointToPoint.SetDeviceAttribute ("DataRate", StringValue ("5Mbps"));
pointToPoint.SetChannelAttribute ("Delay", StringValue ("2ms"));
The first of these lines instantiates a PointToPointHelper object on the stack. From a high-level perspective the next line,
pointToPoint.SetDeviceAttribute ("DataRate", StringValue ("5Mbps"));
tells the PointToPointHelper object to use the value "5Mbps" (five megabits per second) as the "DataRate" when it creates a PointToPointNetDevice object.
From a more detailed perspective, the string "DataRate" corresponds to what we call an Attribute of the PointToPointNetDevice. If you look at the Doxygen for class ns3::PointToPointNetDevice and find the documentation for the GetTypeId method, you will find a list of Attributes defined for the device. Among these is the "DataRate" Attribute. Most user-visible ns-3 objects have similar lists of Attributes. We use this mechanism to easily configure simulations without recompiling as you will see in a following section.
Similar to the "DataRate" on the PointToPointNetDevice you will find a "Delay" Attribute associated with the PointToPointChannel. The final line,
pointToPoint.SetChannelAttribute ("Delay", StringValue ("2ms"));
tells the PointToPointHelper to use the value "2ms" (two milliseconds) as the value of the propagation delay of every point to point channel it subsequently creates.
NetDeviceContainer
At this point in the script, we have a NodeContainer that contains two nodes. We have a PointToPointHelper that is primed and ready to make PointToPointNetDevices and wire PointToPointChannel objects between them. Just as we used the NodeContainer topology helper object to create the Nodes for our simulation, we will ask the PointToPointHelper to do the work involved in creating, configuring and installing our devices for us. We will need to have a list of all of the NetDevice objects that are created, so we use a NetDeviceContainer to hold them just as we used a NodeContainer to hold the nodes we created. The following two lines of code,
NetDeviceContainer devices;
devices = pointToPoint.Install (nodes);
will finish configuring the devices and channel. The first line declares the device container mentioned above and the second does the heavy lifting. The Install method of the PointToPointHelper takes a NodeContainer as a parameter. Internally, a NetDeviceContainer is created. For each node in the NodeContainer (there must be exactly two for a point-to-point link) a PointToPointNetDevice is created and saved in the device container. A PointToPointChannel is created and the two PointToPointNetDevices are attached. When objects are created by the PointToPointHelper, the Attributes previously set in the helper are used to initialize the corresponding Attributes in the created objects.
After executing the pointToPoint.Install (nodes) call we will have two nodes, each with an installed point-to-point net device and a single point-to-point channel between them. Both devices will be configured to transmit data at five megabits per second over the channel which has a two millisecond transmission delay.
InternetStackHelper
We now have nodes and devices configured, but we don't have any protocol stacks installed on our nodes. The next two lines of code will take care of that.
InternetStackHelper stack;
stack.Install (nodes);
The InternetStackHelper is a topology helper that is to internet stacks what the PointToPointHelper is to point-to-point net devices. The Install method takes a NodeContainer as a parameter. When it is executed, it will install an Internet Stack (TCP, UDP, IP, etc.) on each of the nodes in the node container.
Ipv4AddressHelper
Next we need to associate the devices on our nodes with IP addresses. We provide a topology helper to manage the allocation of IP addresses. The only user-visible API is to set the base IP address and network mask to use when performing the actual address allocation (which is done at a lower level inside the helper).
The next two lines of code in our example script, first.cc,
Ipv4AddressHelper address;
address.SetBase ("10.1.1.0", "255.255.255.0");
declare an address helper object and tell it that it should begin allocating IP addresses from the network 10.1.1.0 using the mask 255.255.255.0 to define the allocatable bits. By default the addresses allocated will start at one and increase monotonically, so the first address allocated from this base will be 10.1.1.1, followed by 10.1.1.2, etc. The low level ns-3 system actually remembers all of the IP addresses allocated and will generate a fatal error if you accidentally cause the same address to be generated twice (which is a very hard to debug error, by the way).
The next line of code,
Ipv4InterfaceContainer interfaces = address.Assign (devices);
performs the actual address assignment. In ns-3 we make the association between an IP address and a device using an Ipv4Interface object. Just as we sometimes need a list of net devices created by a helper for future reference we sometimes need a list of Ipv4Interface objects. The Ipv4InterfaceContainer provides this functionality.
Now we have a point-to-point network built, with stacks installed and IP addresses assigned. What we need at this point are applications to generate traffic.
4.2.7 Applications
Another one of the core abstractions of the ns-3 system is the Application. In this script we use two specializations of the core ns-3 class Application called UdpEchoServerApplication and UdpEchoClientApplication. Just as we have in our previous explanations, we use helper objects to help configure and manage the underlying objects. Here, we use UdpEchoServerHelper and UdpEchoClientHelper objects to make our lives easier.
UdpEchoServerHelper
The following lines of code in our example script, first.cc, are used to set up a UDP echo server application on one of the nodes we have previously created.
UdpEchoServerHelper echoServer (9);
The first line of code in the above snippet declares the UdpEchoServerHelper. As usual, this isn't the application itself, it is an object used to help us create the actual applications. One of our conventions is to place required Attributes in the helper constructor. In this case, the helper can't do anything useful unless it is provided with a port number that the client also knows about. Rather than just picking one and hoping it all works out, we require the port number as a parameter to the constructor. The constructor, in turn, simply does a SetAttribute with the passed value. If you want, you can set the "Port" Attribute to another value later using SetAttribute.
Similar to many other helper objects, the UdpEchoServerHelper object has an Install method. It is the execution of this method that actually causes the underlying echo server application to be instantiated and attached to a node. Interestingly, the Install method takes a NodeContainer as a parameter just as the other Install methods we have seen. This is actually what is passed to the method even though it doesn't look so in this case. There is a C++ implicit conversion at work here that takes the result of nodes.Get (1) (which returns a smart pointer to a node object — Ptr<Node>) and uses that in a constructor for an unnamed NodeContainer that is then passed to Install. If you are ever at a loss to find a particular method signature in C++ code that compiles and runs just fine, look for these kinds of implicit conversions.
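For reference, the Install call being described here, which produces the serverApps container used below, is essentially:

ApplicationContainer serverApps = echoServer.Install (nodes.Get (1));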
We now see that echoServer.Install is going to install a UdpEchoServerApplication on the node found at index number one of the NodeContainer we used to manage our nodes. Install will return a container that holds pointers to all of the applications (one in this case since we passed a NodeContainer containing one node) created by the helper.
Applications require a time to "start" generating traffic and may take an optional time to "stop". We provide both. These times are set using the ApplicationContainer methods Start and Stop. These methods take Time parameters. In this case, we use an explicit C++ conversion sequence to take the C++ double 1.0 and convert it to an ns-3 Time object using a Seconds cast. Be aware that the conversion rules may be controlled by the model author, and C++ has its own rules, so you can't always just assume that parameters will be happily converted for you. The two lines,
serverApps.Start (Seconds (1.0));
serverApps.Stop (Seconds (10.0));
will cause the echo server application to Start (enable itself) at one second into the simulation and to Stop (disable itself) at ten seconds into the simulation. By virtue of the fact that we have declared a simulation event (the application stop event) to be executed at ten seconds, the simulation will last at least ten seconds.
UdpEchoClientHelper
The echo client application is set up in a method substantially similar to that for the server. There is an underlying UdpEchoClientApplication that is managed by a UdpEchoClientHelper.
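For reference, and consistent with the five Attributes discussed next, the client helper is created and configured essentially as follows:

UdpEchoClientHelper echoClient (interfaces.GetAddress (1), 9);
echoClient.SetAttribute ("MaxPackets", UintegerValue (1));
echoClient.SetAttribute ("Interval", TimeValue (Seconds (1.0)));
echoClient.SetAttribute ("PacketSize", UintegerValue (1024));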
For the echo client, however, we need to set five different Attributes. The first two Attributes are set during construction of the UdpEchoClientHelper. We pass parameters that are used (internally to the helper) to set the "RemoteAddress" and "RemotePort" Attributes in accordance with our convention to make required Attributes parameters in the helper constructors.
Recall that we used an Ipv4InterfaceContainer to keep track of the IP addresses we assigned to our devices. The zeroth interface in the interfaces container is going to correspond to the IP address of the zeroth node in the nodes container. The first interface in the interfaces container corresponds to the IP address of the first node in the nodes container. So, in the first line of code (from above), we are creating the helper and telling it to set the remote address of the client to be the IP address assigned to the node on which the server resides. We also tell it to arrange to send packets to port nine.
The "MaxPackets" Attribute tells the client the maximum number of packets we allow it to send during the simulation. The "Interval" Attribute tells the client how long to wait between packets, and the "PacketSize" Attribute tells the client how large its packet payloads should be. With this particular combination of Attributes, we are telling the client to send one 1024-byte packet.
Just as in the case of the echo server, we tell the echo client to Start and Stop, but here we start the client one second after the server is enabled (at two seconds into the simulation).
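The corresponding installation and scheduling calls are essentially:

ApplicationContainer clientApps = echoClient.Install (nodes.Get (0));
clientApps.Start (Seconds (2.0));
clientApps.Stop (Seconds (10.0));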
4.2.8 Simulator
What we need to do at this point is to actually run the simulation. This is done using the global function Simulator::Run.
Simulator::Run ();
When we previously called the Start and Stop methods on the server and client application containers, we actually scheduled events in the simulator at 1.0 seconds, 2.0 seconds and two events at 10.0 seconds. When Simulator::Run is called, the system will begin looking through the list of scheduled events and executing them. First it will run the event at 1.0 seconds, which will enable the echo server application (this event may, in turn, schedule many other events). Then it will run the event scheduled for t=2.0 seconds which will start the echo client application. Again, this event may schedule many more events. The start event implementation in the echo client application will begin the data transfer phase of the simulation by sending a packet to the server.
The act of sending the packet to the server will trigger a chain of events that will be automatically scheduled behind the scenes and which will perform the mechanics of the packet echo according to the various timing parameters that we have set in the script.
Eventually, since we only send one packet (recall the MaxPackets Attribute was set to one), the chain of events triggered by that single client echo request will taper off and the simulation will go idle. Once this happens, the remaining events will be the Stop events for the server and the client. When these events are executed, there are no further events to process and Simulator::Run returns. The simulation is then complete.
All that remains is to clean up. This is done by calling the global function Simulator::Destroy. As the helper functions (or low level ns-3 code) executed, they arranged it so that hooks were inserted in the simulator to destroy all of the objects that were created. You did not have to keep track of any of these objects yourself — all you had to do was to call Simulator::Destroy and exit. The ns-3 system took care of the hard part for you. The remaining lines of our first ns-3 script, first.cc, do just that:
  Simulator::Destroy ();
  return 0;
}
ns-3 is a Discrete Event (DE) simulator. In such a simulator, each event is associated with its execution time, and the simulation proceeds by executing events in the temporal order of simulation time. Events may cause future events to be scheduled (for example, a timer may reschedule itself to expire at the next interval).
The initial events are usually triggered by each object; e.g., IPv6 will schedule Router Advertisements, Neighbor Solicitations, etc., an Application schedules the first packet sending event, etc.
When an event is processed, it may generate zero, one or more events. As a simulation executes, events are consumed, but more events may (or may not) be generated. The simulation will stop automatically when no further events are in the event queue, or when a special Stop event is found. The Stop event is created through the Simulator::Stop (stopTime); function.
There is a typical case where Simulator::Stop is absolutely necessary to stop the simulation: when there is a self-sustaining event. Self-sustaining (or recurring) events are events that always reschedule themselves. As a consequence, they always keep the event queue non-empty.
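As a concrete, hypothetical illustration (this function is not part of the tutorial scripts), a self-sustaining event might look like the following; once it is scheduled, the event queue never empties on its own and only Simulator::Stop can end the run:

void
Tick (void)
{
  NS_LOG_UNCOND ("Tick at " << Simulator::Now ().GetSeconds () << "s");
  // Reschedule ourselves one second from now: a recurring event.
  Simulator::Schedule (Seconds (1.0), &Tick);
}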
There are many protocols and modules containing recurring events, e.g.:
• FlowMonitor - periodic check for lost packets
• RIPng - periodic broadcast of routing tables update
• etc.
In these cases, Simulator::Stop is necessary to gracefully stop the simulation. In addition, when ns-3 is in emulation mode, the RealtimeSimulator is used to keep the simulation clock aligned with the machine clock, and Simulator::Stop is necessary to stop the process.
Many of the simulation programs in the tutorial do not explicitly call Simulator::Stop, since the event queue will automatically run out of events. However, these programs will also accept a call to Simulator::Stop. For example, the following additional statement in the first example program will schedule an explicit stop at 11 seconds:
+ Simulator::Stop (Seconds (11.0));
  Simulator::Run ();
  Simulator::Destroy ();
  return 0;
}
The above will not actually change the behavior of this program, since this particular simulation naturally ends after 10 seconds. But if you were to change the stop time in the above statement from 11 seconds to 1 second, you would notice that the simulation stops before any output is printed to the screen (since the output occurs around time 2 seconds of simulation time).
It is important to call Simulator::Stop before calling Simulator::Run; otherwise, Simulator::Run may never return control to the main program to execute the stop!
We have made it trivial to build your simple scripts. All you have to do is to drop your script into the scratch directory and it will automatically be built if you run Waf. Let's try it. Copy examples/tutorial/first.cc into the scratch directory after changing back into the top level directory.
$ cd ../..
$ cp examples/tutorial/first.cc scratch/myfirst.cc
You should see messages reporting that your myfirst example was built successfully.
Waf: Entering directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
[614/708] cxx: scratch/myfirst.cc -> build/debug/scratch/myfirst_3.o
[706/708] cxx_link: build/debug/scratch/myfirst_3.o -> build/debug/scratch/myfirst
Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
'build' finished successfully (2.357s)
You can now run the example (note that if you build your program in the scratch directory you must run it out of the scratch directory):
$ ./waf --run scratch/myfirst
Here you see that the build system checks to make sure that the file has been built and then runs it. You see the logging component on the echo client indicate that it has sent one 1024 byte packet to the Echo Server on 10.1.1.2. You also see the logging component on the echo server say that it has received the 1024 bytes from 10.1.1.1. The echo server silently echoes the packet and you see the echo client log that it has received its packet back from the server.
Now that you have used some of the ns-3 helpers you may want to have a look at some of the source code that implements that functionality. The most recent code can be browsed on our web server. There, you will see the Mercurial summary page for our ns-3 development tree.
At the top of the page, you will see a number of links,
summary | shortlog | changelog | graph | tags | files
Go ahead and select the files link. This is what the top-level of most of our repositories will look like:
drwxr-xr-x                             [up]
drwxr-xr-x                             bindings python   files
drwxr-xr-x                             doc               files
drwxr-xr-x                             examples          files
drwxr-xr-x                             ns3               files
drwxr-xr-x                             scratch           files
drwxr-xr-x                             src               files
drwxr-xr-x                             utils             files
-rw-r--r-- 2009-07-01 12:47 +0200    560 .hgignore       file | revisions | annotate
-rw-r--r-- 2009-07-01 12:47 +0200   1886 .hgtags         file | revisions | annotate
-rw-r--r-- 2009-07-01 12:47 +0200   1276 AUTHORS         file | revisions | annotate
-rw-r--r-- 2009-07-01 12:47 +0200  30961 CHANGES.html    file | revisions | annotate
-rw-r--r-- 2009-07-01 12:47 +0200  17987 LICENSE         file | revisions | annotate
-rw-r--r-- 2009-07-01 12:47 +0200   3742 README          file | revisions | annotate
-rw-r--r-- 2009-07-01 12:47 +0200  16171 RELEASE_NOTES   file | revisions | annotate
-rw-r--r-- 2009-07-01 12:47 +0200      6 VERSION         file | revisions | annotate
-rwxr-xr-x 2009-07-01 12:47 +0200  88110 waf             file | revisions | annotate
-rwxr-xr-x 2009-07-01 12:47 +0200     28 waf.bat         file | revisions | annotate
-rw-r--r-- 2009-07-01 12:47 +0200  35395 wscript         file | revisions | annotate
-rw-r--r-- 2009-07-01 12:47 +0200   7673 wutils.py       file | revisions | annotate
Our example scripts are in the examples directory. If you click on examples you will see a list of subdirectories. One of the files in the tutorial subdirectory is first.cc. If you click on first.cc you will find the code you just walked through.
The source code is mainly in the src directory. You can view source code either by clicking on the directory name or by clicking on the files link to the right of the directory name. If you click on the src directory, you will be taken to the listing of the src subdirectories. If you then click on the core subdirectory, you will find a list of files. The first file you will find (as of this writing) is abort.h. If you click on the abort.h link, you will be sent to the source file for abort.h which contains useful macros for exiting scripts if abnormal conditions are detected.
The source code for the helpers we have used in this chapter can be found in the src/applications/helper directory. Feel free to poke around in the directory tree to get a feel for what is there and the style of ns-3 programs.
FIVE
TWEAKING
We have already taken a brief look at the ns-3 logging module while going over the first.cc script. We will now take a closer look and see what kind of use-cases the logging subsystem was designed to cover.
Many large systems support some kind of message logging facility, and ns-3 is not an exception. In some cases, only error messages are logged to the "operator console" (which is typically stderr in Unix-based systems). In other systems, warning messages may be output as well as more detailed informational messages. In some cases, logging facilities are used to output debug messages which can quickly turn the output into a blur.
ns-3 takes the view that all of these verbosity levels are useful and we provide a selectable, multi-level approach to message logging. Logging can be disabled completely, enabled on a component-by-component basis, or enabled globally; and it provides selectable verbosity levels. The ns-3 log module provides a straightforward, relatively easy to use way to get useful information out of your simulation.
You should understand that we do provide a general purpose mechanism — tracing — to get data out of your models which should be preferred for simulation output (see the tutorial section Using the Tracing System for more details on our tracing system). Logging should be preferred for debugging information, warnings, error messages, or any time you want to easily get a quick message out of your scripts or models.
There are currently seven levels of log messages of increasing verbosity defined in the system.
• LOG_ERROR — Log error messages (associated macro: NS_LOG_ERROR);
• LOG_WARN — Log warning messages (associated macro: NS_LOG_WARN);
• LOG_DEBUG — Log relatively rare, ad-hoc debugging messages (associated macro: NS_LOG_DEBUG);
• LOG_INFO — Log informational messages about program progress (associated macro: NS_LOG_INFO);
• LOG_FUNCTION — Log a message describing each function called (two associated macros: NS_LOG_FUNCTION, used for member functions, and NS_LOG_FUNCTION_NOARGS, used for static functions);
• LOG_LOGIC — Log messages describing logical flow within a function (associated macro: NS_LOG_LOGIC);
• LOG_ALL — Log everything mentioned above (no associated macro).
For each LOG_TYPE there is also LOG_LEVEL_TYPE that, if used, enables logging of all the levels above it in addition to its level. (As a consequence of this, LOG_ERROR and LOG_LEVEL_ERROR, and also LOG_ALL and LOG_LEVEL_ALL, are functionally equivalent.) For example, enabling LOG_INFO will only enable messages provided by the NS_LOG_INFO macro, while enabling LOG_LEVEL_INFO will also enable messages provided by the NS_LOG_DEBUG, NS_LOG_WARN and NS_LOG_ERROR macros.
We also provide an unconditional logging macro that is always displayed, irrespective of logging levels or component selection.
• NS_LOG_UNCOND — Log the associated message unconditionally (no associated log level).
Each level can be requested singly or cumulatively; and logging can be set up using a shell environment variable (NS_LOG) or by a logging system function call. As was seen earlier in the tutorial, the logging system has Doxygen documentation and now would be a good time to peruse the Logging Module documentation if you have not done so.
Now that you have read the documentation in great detail, let's use some of that knowledge to get some interesting information out of the scratch/myfirst.cc example script you have already built.
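As a small, hypothetical illustration of how these macros are typically used inside a model (MyModel is not an ns-3 class, and a real source file would also need an NS_LOG_COMPONENT_DEFINE for its logging component):

void
MyModel::Transmit (uint32_t size)
{
  NS_LOG_FUNCTION (this << size);                     // LOG_FUNCTION: record the call and its arguments
  NS_LOG_INFO ("transmitting " << size << " bytes");  // LOG_INFO: program progress
  if (size == 0)
    {
      NS_LOG_WARN ("asked to transmit an empty packet");  // LOG_WARN: warning message
    }
}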
Let's use the NS_LOG environment variable to turn on some more logging, but first, just to get our bearings, go ahead and run the last script just as you did previously,
$ ./waf --run scratch/myfirst
You should see the now familiar output of the first ns-3 example program
Waf: Entering directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
'build' finished successfully (0.413s)
Sent 1024 bytes to 10.1.1.2
Received 1024 bytes from 10.1.1.1
Received 1024 bytes from 10.1.1.2
It turns out that the "Sent" and "Received" messages you see above are actually logging messages from the UdpEchoClientApplication and UdpEchoServerApplication. We can ask the client application, for example, to print more information by setting its logging level via the NS_LOG environment variable.
I am going to assume from here on that you are using an sh-like shell that uses the "VARIABLE=value" syntax. If you are using a csh-like shell, then you will have to convert my examples to the "setenv VARIABLE value" syntax required by those shells.
Right now, the UDP echo client application is responding to the following line of code in scratch/myfirst.cc,
LogComponentEnable("UdpEchoClientApplication", LOG_LEVEL_INFO);
This line of code enables the LOG_LEVEL_INFO level of logging. When we pass a logging level flag, we are actually enabling the given level and all lower levels. In this case, we have enabled NS_LOG_INFO, NS_LOG_DEBUG, NS_LOG_WARN and NS_LOG_ERROR. We can increase the logging level and get more information without changing the script and recompiling by setting the NS_LOG environment variable like this:
$ export NS_LOG=UdpEchoClientApplication=level_all
The left hand side of the assignment is the name of the logging component we want to set, and the right hand side is the flag we want to use. In this case, we are going to turn on all of the debugging levels for the application. If you run the script with NS_LOG set this way, the ns-3 logging system will pick up the change and you should see the following output:
Waf: Entering directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
'build' finished successfully (0.404s)
Sent 1024 bytes to 10.1.1.2
Received 1024 bytes from 10.1.1.1
UdpEchoClientApplication:HandleRead(0x6241e0, 0x624a20)
Received 1024 bytes from 10.1.1.2
UdpEchoClientApplication:StopApplication()
UdpEchoClientApplication:DoDispose()
UdpEchoClientApplication:~UdpEchoClient()
The additional debug information provided by the application is from the NS_LOG_FUNCTION level. This shows every time a function in the application is called during script execution. Generally, use of (at least) NS_LOG_FUNCTION(this) in member functions is preferred. Use NS_LOG_FUNCTION_NOARGS() only in static functions. Note, however, that there are no requirements in the ns-3 system that models must support any particular logging functionality. The decision regarding how much information is logged is left to the individual model developer. In the case of the echo applications, a good deal of log output is available.
You can now see a log of the function calls that were made to the application. If you look closely you will notice a single colon between the string UdpEchoClientApplication and the method name where you might have expected a C++ scope operator (::). This is intentional.
The name is not actually a class name, it is a logging component name. When there is a one-to-one correspondence between a source file and a class, this will generally be the class name but you should understand that it is not actually a class name, and there is a single colon there instead of a double colon to remind you in a relatively subtle way to conceptually separate the logging component name from the class name.
It turns out that in some cases, it can be hard to determine which method actually generates a log message. If you look in the text above, you may wonder where the string "Received 1024 bytes from 10.1.1.2" comes from. You can resolve this by OR'ing the prefix_func level into the NS_LOG environment variable. Try doing the following,
$ export 'NS_LOG=UdpEchoClientApplication=level_all|prefix_func'
Note that the quotes are required since the vertical bar we use to indicate an OR operation is also a Unix pipe connector.
Now, if you run the script you will see that the logging system makes sure that every message from the given log component is prefixed with the component name.
Waf: Entering directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
'build' finished successfully (0.417s)
UdpEchoClientApplication:Send(): Sent 1024 bytes to 10.1.1.2
Received 1024 bytes from 10.1.1.1
UdpEchoClientApplication:HandleRead(0x6241e0, 0x624a20)
UdpEchoClientApplication:HandleRead(): Received 1024 bytes from 10.1.1.2
UdpEchoClientApplication:StopApplication()
UdpEchoClientApplication:DoDispose()
UdpEchoClientApplication:~UdpEchoClient()
You can now see all of the messages coming from the UDP echo client application are identified as such. The message "Received 1024 bytes from 10.1.1.2" is now clearly identified as coming from the echo client application.
The remaining message must be coming from the UDP echo server application. We can enable that component by entering a colon separated list of components in the NS_LOG environment variable.
$ export 'NS_LOG=UdpEchoClientApplication=level_all|prefix_func:
UdpEchoServerApplication=level_all|prefix_func'
Warning: You will need to remove the newline after the : in the example text above which is only there for document formatting purposes.
Now, if you run the script you will see all of the log messages from both the echo client and server applications. You may see that this can be very useful in debugging problems.
Waf: Entering directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
'build' finished successfully
...
UdpEchoServerApplication:HandleRead(): Received 1024 bytes from 10.1.1.1
UdpEchoServerApplication:HandleRead(): Echoing packet
UdpEchoClientApplication:HandleRead(0x624920, 0x625160)
UdpEchoClientApplication:HandleRead(): Received 1024 bytes from 10.1.1.2
UdpEchoServerApplication:StopApplication()
UdpEchoClientApplication:StopApplication()
UdpEchoClientApplication:DoDispose()
UdpEchoServerApplication:DoDispose()
UdpEchoClientApplication:~UdpEchoClient()
UdpEchoServerApplication:~UdpEchoServer()
It is also sometimes useful to be able to see the simulation time at which a log message is generated. You can do this by ORing in the prefix_time bit.
$ export 'NS_LOG=UdpEchoClientApplication=level_all|prefix_func|prefix_time:
UdpEchoServerApplication=level_all|prefix_func|prefix_time'
Again, you will have to remove the newline above. If you run the script now, you should see the following output:
Waf: Entering directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
'build' finished successfully
...
2.00369s UdpEchoServerApplication:HandleRead(): Received 1024 bytes from 10.1.1.1
2.00369s UdpEchoServerApplication:HandleRead(): Echoing packet
2.00737s UdpEchoClientApplication:HandleRead(0x624290, 0x624ad0)
2.00737s UdpEchoClientApplication:HandleRead(): Received 1024 bytes from 10.1.1.2
10s UdpEchoServerApplication:StopApplication()
10s UdpEchoClientApplication:StopApplication()
UdpEchoClientApplication:DoDispose()
UdpEchoServerApplication:DoDispose()
UdpEchoClientApplication:~UdpEchoClient()
UdpEchoServerApplication:~UdpEchoServer()
You can see that the constructor for the UdpEchoServer was called at a simulation time of 0 seconds. This is actually happening before the simulation starts, but the time is displayed as zero seconds. The same is true for the UdpEchoClient constructor message.
Recall that the scratch/first.cc script started the echo server application at one second into the simulation. You can now see that the StartApplication method of the server is, in fact, called at one second. You can also see that the echo client application is started at a simulation time of two seconds as we requested in the script.
You can now follow the progress of the simulation from the ScheduleTransmit call in the client that calls Send to the HandleRead callback in the echo server application. Note that the elapsed time for the packet to be sent across the point-to-point link is 3.69 milliseconds. You see the echo server logging a message telling you that it has echoed the packet and then, after another channel delay, you see the echo client receive the echoed packet in its HandleRead method.
There is a lot that is happening under the covers in this simulation that you are not seeing as well. You can very easily follow the entire process by turning on all of the logging components in the system. Try setting the NS_LOG variable to the following,
$ export 'NS_LOG=*=level_all|prefix_func|prefix_time'
The asterisk above is the logging component wildcard. This will turn on all of the logging in all of the components used in the simulation. I won't reproduce the output here (as of this writing it produces 1265 lines of output for the single packet echo) but you can redirect this information into a file and look through it with your favorite editor if you like,

$ ./waf --run scratch/myfirst > log.out 2>&1
I personally use this extremely verbose version of logging when I am presented with a problem and I have no idea where things are going wrong. I can follow the progress of the code quite easily without having to set breakpoints and step through code in a debugger. I can just edit up the output in my favorite editor and search around for things I expect, and see things happening that I don't expect. When I have a general idea about what is going wrong, I transition into a debugger for a fine-grained examination of the problem. This kind of output can be especially useful when your script does something completely unexpected. If you are stepping using a debugger you may miss an unexpected excursion completely. Logging the excursion makes it quickly visible.
You can add new logging to your simulations by making calls to the log component via several macros. Let's do so in the myfirst.cc script we have in the scratch directory.

Recall that we have defined a logging component in that script:

NS_LOG_COMPONENT_DEFINE ("FirstScriptExample");
You now know that you can enable all of the logging for this component by setting the NS_LOG environment variable to the various levels. Let's go ahead and add some logging to the script. The macro used to add an informational level log message is NS_LOG_INFO. Go ahead and add one (just before we start creating the nodes) that tells you that the script is "Creating Topology." Open scratch/myfirst.cc in your favorite editor and add the line,

NS_LOG_INFO ("Creating Topology");
Now build the script using waf and clear the NS_LOG variable to turn off the torrent of logging we previously enabled:

$ ./waf
$ export NS_LOG=
If you now run the script, you will not see your new message since its associated logging component (FirstScriptExample) has not been enabled. In order to see your message you will have to enable the FirstScriptExample logging component with a level greater than or equal to NS_LOG_INFO. If you just want to see this particular level of logging, you can enable it by,

$ export NS_LOG=FirstScriptExample=info
If you now run the script you will see your new "Creating Topology" log message,

Waf: Entering directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
'build' finished successfully (0.404s)
Creating Topology
Sent 1024 bytes to 10.1.1.2
Received 1024 bytes from 10.1.1.1
Received 1024 bytes from 10.1.1.2
Another way you can change how ns-3 scripts behave without editing and building is via command line arguments. We provide a mechanism to parse command line arguments and automatically set local and global variables based on those arguments.

The first step in using the command line argument system is to declare the command line parser. This is done quite simply (in your main program) as in the following code,

int
main (int argc, char *argv[])
{
  ...

  CommandLine cmd;
  cmd.Parse (argc, argv);

  ...
}
This simple two line snippet is actually very useful by itself. It opens the door to the ns-3 global variable and Attribute systems. Go ahead and add those two lines of code to the scratch/myfirst.cc script at the start of main. Go ahead and build the script and run it, but ask the script for help in the following way,

$ ./waf --run "scratch/myfirst --PrintHelp"
This will ask Waf to run the scratch/myfirst script and pass the command line argument --PrintHelp to the script. The quotes are required to sort out which program gets which argument. The command line parser will now see the --PrintHelp argument and respond with,

Waf: Entering directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
'build' finished successfully (0.413s)
TcpL4Protocol:TcpStateMachine()
CommandLine:HandleArgument(): Handle arg name=PrintHelp value=
...
Let's focus on the --PrintAttributes option. We have already hinted at the ns-3 Attribute system while walking through the first.cc script. We looked at the following lines of code,

PointToPointHelper pointToPoint;
pointToPoint.SetDeviceAttribute ("DataRate", StringValue ("5Mbps"));
pointToPoint.SetChannelAttribute ("Delay", StringValue ("2ms"));
and mentioned that DataRate was actually an Attribute of the PointToPointNetDevice. Let's use the command line argument parser to take a look at the Attributes of the PointToPointNetDevice. The help listing says that we should provide a TypeId. This corresponds to the class name of the class to which the Attributes belong. In this case it will be ns3::PointToPointNetDevice. Let's go ahead and type in,

$ ./waf --run "scratch/myfirst --PrintAttributes=ns3::PointToPointNetDevice"
The system will print out all of the Attributes of this kind of net device. Among the Attributes you will see listed is,

--ns3::PointToPointNetDevice::DataRate=[32768bps]: The default data rate for point to point links
This is the default value that will be used when a PointToPointNetDevice is created in the system. We overrode this default with the Attribute setting in the PointToPointHelper above. Let's use the default values for the point-to-point devices and channels by deleting the SetDeviceAttribute call and the SetChannelAttribute call from the myfirst.cc we have in the scratch directory.

Your script should now just declare the PointToPointHelper and not do any set operations as in the following example,

PointToPointHelper pointToPoint;

Go ahead and build the new script, and then let's go back and enable some logging from the UDP echo server application and turn on the time prefix.

$ export 'NS_LOG=UdpEchoServerApplication=level_all|prefix_time'
If you run the script, you should now see the following output,

Waf: Entering directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
'build' finished successfully (0.405s)
0s UdpEchoServerApplication:UdpEchoServer()
1s UdpEchoServerApplication:StartApplication()
Sent 1024 bytes to 10.1.1.2
2.25732s Received 1024 bytes from 10.1.1.1
2.25732s Echoing packet
Received 1024 bytes from 10.1.1.2
10s UdpEchoServerApplication:StopApplication()

Recall that the last time we looked, the echo server received the packet at 2.00369 seconds.

2.00369s UdpEchoServerApplication:HandleRead(): Received 1024 bytes from 10.1.1.1
Now it is receiving the packet at 2.25732 seconds. This is because we just dropped the data rate of the PointToPointNetDevice down to its default of 32768 bits per second from five megabits per second.

If we were to provide a new DataRate using the command line, we could speed our simulation up again. We do this by overriding the default value of the Attribute, for example,

$ ./waf --run "scratch/myfirst --ns3::PointToPointNetDevice::DataRate=5Mbps"

Are you surprised by the result? It turns out that in order to get the original behavior of the script back, we will have to set the speed-of-light delay of the channel as well. We can ask the command line system to print out the Attributes of the channel just like we did for the net device,

$ ./waf --run "scratch/myfirst --PrintAttributes=ns3::PointToPointChannel"

and then override the channel's Delay Attribute on the command line as well. With both the DataRate and the Delay set back to their original values, running the script produces,

Waf: Entering directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
'build' finished successfully (0.417s)
0s UdpEchoServerApplication:UdpEchoServer()
1s UdpEchoServerApplication:StartApplication()
Note that the packet is again received by the server at 2.00369 seconds. We could actually set any of the Attributes used in the script in this way. In particular we could set the UdpEchoClient Attribute MaxPackets to some other value than one.

How would you go about that? Give it a try. Remember you have to comment out the place we override the default Attribute and explicitly set MaxPackets in the script. Then you have to rebuild the script. You will also have to find the syntax for actually setting the new default attribute value using the command line help facility. Once you have this figured out you should be able to control the number of packets echoed from the command line. Since we're nice folks, we'll give you a hint that your command line should end up looking something like,

$ ./waf --run "scratch/myfirst --ns3::UdpEchoClient::MaxPackets=2"
A natural question to arise at this point is how to learn about the existence of all of these attributes. Again, the command line help facility has a feature for this. If we ask for command line help we should see:

$ ./waf --run "scratch/myfirst --PrintHelp"

myfirst [Program Arguments] [General Arguments]
...
If you select the "PrintGroups" argument, you should see a list of all registered TypeId groups. The group names are aligned with the module names in the source directory (although with a leading capital letter). Printing out all of the information at once would be too much, so a further filter is available to print information on a per-group basis. So, focusing again on the point-to-point module:

$ ./waf --run "scratch/myfirst --PrintGroup=PointToPoint"

TypeIds in group PointToPoint:
  ns3::PointToPointChannel
  ns3::PointToPointNetDevice
  ns3::PointToPointRemoteChannel
  ns3::PppHeader
and from here, one can find the possible TypeId names to search for attributes, such as in the --PrintAttributes=ns3::PointToPointChannel example shown above.

Another way to find out about attributes is through the ns-3 Doxygen; there is a page that lists out all of the registered attributes in the simulator.
You can also add your own hooks to the command line system. This is done quite simply by using the AddValue method of the command line parser.

Let's use this facility to specify the number of packets to echo in a completely different way. Let's add a local variable called nPackets to the main function. We'll initialize it to one to match our previous default behavior. To allow the command line parser to change this value, we need to hook the value into the parser. We do this by adding a call to AddValue. Go ahead and change the scratch/myfirst.cc script to start with the following code,

int
main (int argc, char *argv[])
{
  uint32_t nPackets = 1;

  CommandLine cmd;
  cmd.AddValue ("nPackets", "Number of packets to echo", nPackets);
  cmd.Parse (argc, argv);

  ...

Then scroll down to the point in the script where we set the MaxPackets Attribute and change it so that it is set to the variable nPackets instead of the constant 1 as is shown below.

echoClient.SetAttribute ("MaxPackets", UintegerValue (nPackets));
Now if you run the script and provide the --PrintHelp argument, you should see your new User Argument listed in the help display. Try,

$ ./waf --run "scratch/myfirst --PrintHelp"
If you want to specify the number of packets to echo, you can now do so by setting the --nPackets argument in the command line,

$ ./waf --run "scratch/myfirst --nPackets=2"
You have now echoed two packets. Pretty easy, isn't it?

You can see that if you are an ns-3 user, you can use the command line argument system to control global values and Attributes. If you are a model author, you can add new Attributes to your Objects and they will automatically be available for setting by your users through the command line system. If you are a script author, you can add new variables to your scripts and hook them into the command line system quite painlessly.
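To make the model-author case concrete, here is a minimal sketch of how an Attribute is registered on an Object; the class MyModel and its m_verbose member are hypothetical and not part of ns-3, but the GetTypeId/AddAttribute pattern is the one that makes an Attribute show up in --PrintAttributes and become settable from the command line:

// Hypothetical model class; not part of the tutorial scripts.
#include "ns3/object.h"
#include "ns3/boolean.h"

using namespace ns3;

class MyModel : public Object
{
public:
  static TypeId GetTypeId (void)
  {
    static TypeId tid = TypeId ("ns3::MyModel")
      .SetParent<Object> ()
      .AddConstructor<MyModel> ()
      // Registering the Attribute is all that is needed for it to appear in
      // --PrintAttributes and to be overridable on the command line.
      .AddAttribute ("Verbose",
                     "Whether this model logs verbosely",
                     BooleanValue (false),
                     MakeBooleanAccessor (&MyModel::m_verbose),
                     MakeBooleanChecker ());
    return tid;
  }

private:
  bool m_verbose;
};

NS_OBJECT_ENSURE_REGISTERED (MyModel);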
The whole point of simulation is to generate output for further study, and the ns-3 tracing system is a primary mechanism for this. Since ns-3 is a C++ program, standard facilities for generating output from C++ programs could be used:

#include <iostream>
...
int main ()
{
  ...
  std::cout << "The value of x is " << x << std::endl;
  ...
}
You could even use the logging module to add a little structure to your solution. There are many well-known problems generated by such approaches and so we have provided a generic event tracing subsystem to address the issues we thought were important.

The basic goals of the ns-3 tracing system are:
• For basic tasks, the tracing system should allow the user to generate standard tracing for popular tracing sources, and to customize which objects generate the tracing;
• Intermediate users must be able to extend the tracing system to modify the output format generated, or to insert new tracing sources, without modifying the core of the simulator;
• Advanced users can modify the simulator core to add new tracing sources and sinks.

The ns-3 tracing system is built on the concepts of independent tracing sources and tracing sinks, and a uniform mechanism for connecting sources to sinks. Trace sources are entities that can signal events that happen in a simulation and provide access to interesting underlying data. For example, a trace source could indicate when a packet is received by a net device and provide access to the packet contents for interested trace sinks.

Trace sources are not useful by themselves; they must be "connected" to other pieces of code that actually do something useful with the information provided by the source. Trace sinks are consumers of the events and data provided by the trace sources. For example, one could create a trace sink that would (when connected to the trace source of the previous example) print out interesting parts of the received packet.

The rationale for this explicit division is to allow users to attach new types of sinks to existing tracing sources, without requiring editing and recompilation of the core of the simulator. Thus, in the example above, a user could define a new
tracing sink in her script and attach it to an existing tracing source defined in the simulation core by editing only the user script.

In this tutorial, we will walk through some pre-defined sources and sinks and show how they may be customized with little user effort. See the ns-3 manual or how-to sections for information on advanced tracing configuration including extending the tracing namespace and creating new tracing sources.
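To make the source/sink division concrete, here is a minimal sketch of connecting a user-defined sink to an existing trace source. It assumes the usual tutorial includes (core, network and point-to-point modules) and reuses the MacRx trace source path that appears later in this chapter; the function name MacRxSink is our own:

// A custom trace sink; its signature matches the MacRx trace source,
// which hands over the received packet.
void
MacRxSink (Ptr<const Packet> packet)
{
  // Called each time the connected trace source signals a received packet.
  std::cout << "Sink saw a packet of " << packet->GetSize () << " bytes" << std::endl;
}

// ... then, after the devices have been created (for example just before
// Simulator::Run ()), connect the sink to the source by its namespace path:
Config::ConnectWithoutContext (
  "/NodeList/1/DeviceList/0/$ns3::PointToPointNetDevice/MacRx",
  MakeCallback (&MacRxSink));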
ns-3 provides helper functionality that wraps the low-level tracing system to help you with the details involved in configuring some easily understood packet traces. If you enable this functionality, you will see output in ASCII files — thus the name. For those familiar with ns-2 output, this type of trace is analogous to the out.tr generated by many scripts.

Let's just jump right in and add some ASCII tracing output to our scratch/myfirst.cc script. Right before the call to Simulator::Run (), add the following lines of code:

AsciiTraceHelper ascii;
pointToPoint.EnableAsciiAll (ascii.CreateFileStream ("myfirst.tr"));
Like in many other ns-3 idioms, this code uses a helper object to help create ASCII traces. The second line contains two nested method calls. The "inside" method, CreateFileStream(), uses an unnamed object idiom to create a file stream object on the stack (without an object name) and pass it down to the called method. We'll go into this more in the future, but all you have to know at this point is that you are creating an object representing a file named "myfirst.tr" and passing it into ns-3. You are telling ns-3 to deal with the lifetime issues of the created object and also to deal with problems caused by a little-known (intentional) limitation of C++ ofstream objects relating to copy constructors.

The outside call, to EnableAsciiAll(), tells the helper that you want to enable ASCII tracing on all point-to-point devices in your simulation; and you want the (provided) trace sinks to write out information about packet movement in ASCII format.

For those familiar with ns-2, the traced events are equivalent to the popular trace points that log "+", "-", "d", and "r" events.

You can now build the script and run it from the command line:

$ ./waf --run scratch/myfirst
Just as you have seen many times before, you will see some messages from Waf and then "'build' finished successfully" with some number of messages from the running program.

When it ran, the program will have created a file named myfirst.tr. Because of the way that Waf works, the file is not created in the local directory; it is created at the top-level directory of the repository by default. If you want to control where the traces are saved you can use the --cwd option of Waf to specify this. We have not done so, thus we need to change into the top level directory of our repo and take a look at the ASCII trace file myfirst.tr in your favorite editor.
There's a lot of information there in a pretty dense form, but the first thing to notice is that there are a number of distinct lines in this file. It may be difficult to see this clearly unless you widen your window considerably.

Each line in the file corresponds to a trace event. In this case we are tracing events on the transmit queue present in every point-to-point net device in the simulation. The transmit queue is a queue through which every packet destined for a point-to-point channel must pass. Note that each line in the trace file begins with a lone character (has a space after it). This character will have the following meaning:
• +: An enqueue operation occurred on the device queue;
• -: A dequeue operation occurred on the device queue;
• d: A packet was dropped, typically because the queue was full;
• r: A packet was received by the net device.
The first section of this expanded trace event (reference number 0) is the operation. We have a + character, so this corresponds to an enqueue operation on the transmit queue. The second section (reference 1) is the simulation time expressed in seconds. You may recall that we asked the UdpEchoClientApplication to start sending packets at two seconds. Here we see confirmation that this is, indeed, happening.

The next section of the example trace (reference 2) tells us which trace source originated this event (expressed in the tracing namespace). You can think of the tracing namespace somewhat like you would a filesystem namespace. The root of the namespace is the NodeList. This corresponds to a container managed in the ns-3 core code that contains all of the nodes that are created in a script. Just as a filesystem may have directories under the root, we may have node numbers in the NodeList. The string /NodeList/0 therefore refers to the zeroth node in the NodeList, which we typically think of as "node 0". In each node there is a list of devices that have been installed. This list appears next in the namespace. You can see that this trace event comes from DeviceList/0, which is the zeroth device installed in the node. The next string, $ns3::PointToPointNetDevice, tells you what kind of device is in the zeroth position of the device list for node zero. Recall that the operation + found at reference 0 meant that an enqueue operation happened on the transmit queue of the device. This is reflected in the final segments of the "trace path", which are TxQueue/Enqueue.

The remaining sections in the trace should be fairly intuitive. References 3-4 indicate that the packet is encapsulated in the point-to-point protocol. References 5-7 show that the packet has an IP version four header and has originated from IP address 10.1.1.1 and is destined for 10.1.1.2. References 8-9 show that this packet has a UDP header and, finally, reference 10 shows that the payload is the expected 1024 bytes.

The next line in the trace file shows the same packet being dequeued from the transmit queue on the same node. The third line in the trace file shows the packet being received by the net device on the node with the echo server. I have reproduced that event below.

0 r
1 2.25732
2 /NodeList/1/DeviceList/0/$ns3::PointToPointNetDevice/MacRx
3 ns3::Ipv4Header (
4   tos 0x0 ttl 64 id 0 protocol 17 offset 0 flags [none]
5   length: 1052 10.1.1.1 > 10.1.1.2)
6 ns3::UdpHeader (
7   length: 1032 49153 > 9)
8 Payload (size=1024)
Notice that the trace operation is now r and the simulation time has increased to 2.25732 seconds. If you have been following the tutorial steps closely this means that you have left the DataRate of the net devices and the channel Delay set to their default values. This time should be familiar as you have seen it before in a previous section.

The trace source namespace entry (reference 2) has changed to reflect that this event is coming from node 1 (/NodeList/1) and the packet reception trace source (/MacRx). It should be quite easy for you to follow the progress of the packet through the topology by looking at the rest of the traces in the file.
The ns-3 device helpers can also be used to create trace files in the .pcap format. The acronym pcap (usually written in lower case) stands for packet capture, and is actually an API that includes the definition of a .pcap file format. The most popular program that can read and display this format is Wireshark (formerly called Ethereal). However, there are many traffic trace analyzers that use this packet format. We encourage users to exploit the many tools available for analyzing pcap traces. In this tutorial, we concentrate on viewing pcap traces with tcpdump.

The code used to enable pcap tracing is a one-liner.

pointToPoint.EnablePcapAll ("myfirst");
Go ahead and insert this line of code after the ASCII tracing code we just added to scratch/myfirst.cc. Notice that we only passed the string "myfirst," and not "myfirst.pcap" or something similar. This is because the parameter is a prefix, not a complete file name. The helper will actually create a trace file for every point-to-point device in the simulation. The file names will be built using the prefix, the node number, the device number and a ".pcap" suffix. In our example script, we will eventually see files named "myfirst-0-0.pcap" and "myfirst-1-0.pcap" which are the pcap traces for node 0-device 0 and node 1-device 0, respectively.

Once you have added the line of code to enable pcap tracing, you can run the script in the usual way:

$ ./waf --run scratch/myfirst
If you look at the top level directory of your distribution, you should now see three log files: myfirst.tr is the ASCII trace file we have previously examined. myfirst-0-0.pcap and myfirst-1-0.pcap are the new pcap files we just generated.
The easiest thing to do at this point will be to use tcpdump to look at the pcap files.

$ tcpdump -nn -tt -r myfirst-0-0.pcap
reading from file myfirst-0-0.pcap, link-type PPP (PPP)
2.000000 IP 10.1.1.1.49153 > 10.1.1.2.9: UDP, length 1024
2.514648 IP 10.1.1.2.9 > 10.1.1.1.49153: UDP, length 1024
You can see in the dump of myfirst-0-0.pcap (the client device) that the echo packet is sent at 2 seconds into the simulation. If you look at the second dump (myfirst-1-0.pcap) you can see that packet being received at 2.257324 seconds. You see the packet being echoed back at 2.257324 seconds in the second dump, and finally, you see the packet being received back at the client in the first dump at 2.514648 seconds.
Wireshark is a graphical user interface which can be used for displaying these trace files. If you are unfamiliar with Wireshark, there is a web site available from which you can download programs and documentation. If you have Wireshark available, you can open each of the trace files and display the contents as if you had captured the packets using a packet sniffer.
CHAPTER
SIX
BUILDING TOPOLOGIES
In this section we are going to expand our mastery of ns-3 network devices and channels to cover an example of a bus network. ns-3 provides a net device and channel we call CSMA (Carrier Sense Multiple Access).

The ns-3 CSMA device models a simple network in the spirit of Ethernet. A real Ethernet uses a CSMA/CD (Carrier Sense Multiple Access with Collision Detection) scheme with exponentially increasing backoff to contend for the shared transmission medium. The ns-3 CSMA device and channel models only a subset of this.

Just as we have seen point-to-point topology helper objects when constructing point-to-point topologies, we will see equivalent CSMA topology helpers in this section. The appearance and operation of these helpers should look quite familiar to you.

We provide an example script in our examples/tutorial directory. This script builds on the first.cc script and adds a CSMA network to the point-to-point simulation we've already considered. Go ahead and open examples/tutorial/second.cc in your favorite editor. You will have already seen enough ns-3 code to understand most of what is going on in this example, but we will go over the entire script and examine some of the output.

Just as in the first.cc example (and in all ns-3 examples) the file begins with an emacs mode line and some GPL boilerplate.

The actual code begins by loading module include files just as was done in the first.cc example.

#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/csma-module.h"
#include "ns3/internet-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/applications-module.h"
#include "ns3/ipv4-global-routing-helper.h"
One thing that can be surprisingly useful is a small bit of ASCII art that shows a cartoon of the network topology constructed in the example. You will find a similar "drawing" in most of our examples.

In this case, you can see that we are going to extend our point-to-point example (the link between the nodes n0 and n1 below) by hanging a bus network off of the right side. Notice that this is the default network topology since you can actually vary the number of nodes created on the LAN. If you set nCsma to one, there will be a total of two nodes on the LAN (CSMA channel) — one required node and one "extra" node. By default there are three "extra" nodes as seen below:

// Default Network Topology
//
//       10.1.1.0
// n0 -------------- n1   n2   n3   n4
//    point-to-point  |    |    |    |
//                    ================
//                      LAN 10.1.2.0
Then the ns-3 namespace is used and a logging component is defined. This is all just as it was in first.cc, so there is nothing new yet.

using namespace ns3;
NS_LOG_COMPONENT_DEFINE ("SecondScriptExample");
The main program begins with a slightly different twist. We use a verbose flag to determine whether or not the UdpEchoClientApplication and UdpEchoServerApplication logging components are enabled. This flag defaults to true (the logging components are enabled) but allows us to turn off logging during regression testing of this example.

You will see some familiar code that will allow you to change the number of devices on the CSMA network via command line argument. We did something similar when we allowed the number of packets sent to be changed in the section on command line arguments. The last line makes sure you have at least one "extra" node.

The code consists of variations of previously covered API so you should be entirely comfortable with the following code at this point in the tutorial.

bool verbose = true;
uint32_t nCsma = 3;
CommandLine cmd;
cmd.AddValue ("nCsma", "Number of \"extra\" CSMA nodes/devices", nCsma);
cmd.AddValue ("verbose", "Tell echo applications to log if true", verbose);
if (verbose)
  {
    LogComponentEnable ("UdpEchoClientApplication", LOG_LEVEL_INFO);
    LogComponentEnable ("UdpEchoServerApplication", LOG_LEVEL_INFO);
  }
The next step is to create two nodes that we will connect via the point-to-point link. The NodeContainer is used to do this just as was done in first.cc.

NodeContainer p2pNodes;
p2pNodes.Create (2);
Next, we declare another NodeContainer to hold the nodes that will be part of the bus (CSMA) network. First, we just instantiate the container object itself. We then Add the second of the point-to-point nodes to it and Create the "extra" CSMA nodes:

NodeContainer csmaNodes;
csmaNodes.Add (p2pNodes.Get (1));
csmaNodes.Create (nCsma);

Since we already have one node in the CSMA network – the one that will have both a point-to-point and CSMA net device – the number of "extra" nodes means the number of nodes you desire in the CSMA section minus one.
The next bit of code should be quite familiar by now. We instantiate a PointToPointHelper and set the associated default Attributes so that we create a five megabit per second transmitter on devices created using the helper and a two millisecond delay on channels created by the helper.

PointToPointHelper pointToPoint;
pointToPoint.SetDeviceAttribute ("DataRate", StringValue ("5Mbps"));
pointToPoint.SetChannelAttribute ("Delay", StringValue ("2ms"));
NetDeviceContainer p2pDevices;
p2pDevices = pointToPoint.Install (p2pNodes);
We then instantiate a NetDeviceContainer to keep track of the point-to-point net devices and we Install devices on the point-to-point nodes.

We mentioned above that you were going to see a helper for CSMA devices and channels, and the next lines introduce them. The CsmaHelper works just like a PointToPointHelper, but it creates and connects CSMA devices and channels. In the case of a CSMA device and channel pair, notice that the data rate is specified by a channel Attribute instead of a device Attribute. This is because a real CSMA network does not allow one to mix, for example, 10Base-T and 100Base-T devices on a given channel. We first set the data rate to 100 megabits per second, and then set the speed-of-light delay of the channel to 6560 nanoseconds (arbitrarily chosen as 1 nanosecond per foot over a 2000 meter segment). Notice that you can set an Attribute using its native data type.

CsmaHelper csma;
csma.SetChannelAttribute ("DataRate", StringValue ("100Mbps"));
csma.SetChannelAttribute ("Delay", TimeValue (NanoSeconds (6560)));
NetDeviceContainer csmaDevices;
csmaDevices = csma.Install (csmaNodes);
Just as we created a NetDeviceContainer to hold the devices created by the PointToPointHelper, we create a NetDeviceContainer to hold the devices created by our CsmaHelper. We call the Install method of the CsmaHelper to install the devices into the nodes of the csmaNodes NodeContainer.

We now have our nodes, devices and channels created, but we have no protocol stacks present. Just as in the first.cc script, we will use the InternetStackHelper to install these stacks.

InternetStackHelper stack;
stack.Install (p2pNodes.Get (0));
stack.Install (csmaNodes);
Recall that we took one of the nodes from the p2pNodes container and added it to the csmaNodes container. Thus we only need to install the stacks on the remaining p2pNodes node, and all of the nodes in the csmaNodes container, to cover all of the nodes in the simulation.

Just as in the first.cc example script, we are going to use the Ipv4AddressHelper to assign IP addresses to our device interfaces. First we use the network 10.1.1.0 to create the two addresses needed for our two point-to-point devices.

Ipv4AddressHelper address;
address.SetBase ("10.1.1.0", "255.255.255.0");
Ipv4InterfaceContainer p2pInterfaces;
p2pInterfaces = address.Assign (p2pDevices);
Recall that we save the created interfaces in a container to make it easy to pull out addressing information later for use in setting up the applications.

We now need to assign IP addresses to our CSMA device interfaces. The operation works just as it did for the point-to-point case, except we now are performing the operation on a container that has a variable number of CSMA devices — remember we made the number of CSMA devices changeable by command line argument. The CSMA devices will be associated with IP addresses from network number 10.1.2.0 in this case, as seen below.
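A sketch of that assignment, following the same pattern as the point-to-point case (the container name csmaInterfaces is an assumption here, carried through the rest of this walkthrough):

// Assign 10.1.2.x addresses to the CSMA devices and keep the interfaces
// so the server address can be looked up later.
address.SetBase ("10.1.2.0", "255.255.255.0");
Ipv4InterfaceContainer csmaInterfaces;
csmaInterfaces = address.Assign (csmaDevices);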
Now we have a topology built, but we need applications. This section is going to be fundamentally similar to the applications section of first.cc but we are going to instantiate the server on one of the nodes that has a CSMA device and the client on the node having only a point-to-point device.

First, we set up the echo server. We create a UdpEchoServerHelper and provide a required Attribute value to the constructor which is the server port number. Recall that this port can be changed later using the SetAttribute method if desired, but we require it to be provided to the constructor.

UdpEchoServerHelper echoServer (9);
Recall that the csmaNodes NodeContainer contains one of the nodes created for the point-to-point network and nCsma "extra" nodes. What we want to get at is the last of the "extra" nodes. The zeroth entry of the csmaNodes container will be the point-to-point node. The easy way to think of this, then, is if we create one "extra" CSMA node, then it will be at index one of the csmaNodes container. By induction, if we create nCsma "extra" nodes the last one will be at index nCsma. You see this exhibited in the Get of the first line of the installation code sketched below.

The client application is set up exactly as we did in the first.cc example script. Again, we provide required Attributes to the UdpEchoClientHelper in the constructor (in this case the remote address and port). We tell the client to send packets to the server we just installed on the last of the "extra" CSMA nodes. We install the client on the leftmost point-to-point node seen in the topology illustration.
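A sketch of the server and client installation consistent with the description above (the start/stop times and the client Attribute values are assumptions, mirroring first.cc; csmaInterfaces is the interface container sketched earlier):

// Install the echo server on the last "extra" CSMA node.
ApplicationContainer serverApps = echoServer.Install (csmaNodes.Get (nCsma));
serverApps.Start (Seconds (1.0));
serverApps.Stop (Seconds (10.0));

// Point the client at the server's CSMA address and install it on the
// leftmost point-to-point node.
UdpEchoClientHelper echoClient (csmaInterfaces.GetAddress (nCsma), 9);
echoClient.SetAttribute ("MaxPackets", UintegerValue (1));
echoClient.SetAttribute ("Interval", TimeValue (Seconds (1.0)));
echoClient.SetAttribute ("PacketSize", UintegerValue (1024));

ApplicationContainer clientApps = echoClient.Install (p2pNodes.Get (0));
clientApps.Start (Seconds (2.0));
clientApps.Stop (Seconds (10.0));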
Since we have actually built an internetwork here, we need some form of internetwork routing. ns-3 provides what we call global routing to help you out. Global routing takes advantage of the fact that the entire internetwork is accessible in the simulation and runs through all of the nodes created for the simulation — it does the hard work of setting up routing for you without having to configure routers.

Basically, what happens is that each node behaves as if it were an OSPF router that communicates instantly and magically with all other routers behind the scenes. Each node generates link advertisements and communicates them directly to a global route manager which uses this global information to construct the routing tables for each node. Setting up this form of routing is a one-liner:

Ipv4GlobalRoutingHelper::PopulateRoutingTables ();
Next we enable pcap tracing. The first line of code to enable pcap tracing in the point-to-point helper should be familiar to you by now. The second line enables pcap tracing in the CSMA helper and there is an extra parameter you haven't encountered yet.

pointToPoint.EnablePcapAll ("second");
csma.EnablePcap ("second", csmaDevices.Get (1), true);
The CSMA network is a multi-point-to-point network. This means that there can be (and are in this case) multiple endpoints on a shared medium. Each of these endpoints has a net device associated with it. There are two basic
alternatives to gathering trace information from such a network. One way is to create a trace file for each net device and store only the packets that are emitted or consumed by that net device. Another way is to pick one of the devices and place it in promiscuous mode. That single device then "sniffs" the network for all packets and stores them in a single pcap file. This is how tcpdump, for example, works. That final parameter tells the CSMA helper whether or not to arrange to capture packets in promiscuous mode.

In this example, we are going to select one of the devices on the CSMA network and ask it to perform a promiscuous sniff of the network, thereby emulating what tcpdump would do. If you were on a Linux machine you might do something like tcpdump -i eth0 to get the trace. In this case, we specify the device using csmaDevices.Get (1), which selects the second device in the container (the CSMA device on the first "extra" CSMA node). Setting the final parameter to true enables promiscuous captures.

The last section of code just runs and cleans up the simulation just like the first.cc example.

  Simulator::Run ();
  Simulator::Destroy ();
  return 0;
}
In order to run this example, copy the second.cc example script into the scratch directory and use waf to build just as you did with the first.cc example. If you are in the top-level directory of the repository you just type,

$ cp examples/tutorial/second.cc scratch/mysecond.cc
$ ./waf
Warning: We use the file second.cc as one of our regression tests to verify that it works exactly as we think it should in order to make your tutorial experience a positive one. This means that an executable named second already exists in the project. To avoid any confusion about what you are executing, please do the renaming to mysecond.cc suggested above.

If you are following the tutorial religiously (you are, aren't you) you will still have the NS_LOG variable set, so go ahead and clear that variable and run the program.

$ export NS_LOG=
$ ./waf --run scratch/mysecond
Since we have set up the UDP echo applications to log just as we did in first.cc, you will see similar output when you run the script.

Waf: Entering directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
'build' finished successfully (0.415s)
Sent 1024 bytes to 10.1.2.4
Received 1024 bytes from 10.1.1.1
Received 1024 bytes from 10.1.2.4
Recall that the first message, "Sent 1024 bytes to 10.1.2.4," is the UDP echo client sending a packet to the server. In this case, the server is on a different network (10.1.2.0). The second message, "Received 1024 bytes from 10.1.1.1," is from the UDP echo server, generated when it receives the echo packet, and the final message, "Received 1024 bytes from 10.1.2.4," is the echo client receiving its echo back from the server.

If you now go and look in the top-level directory, you will find three trace files:

second-0-0.pcap second-1-0.pcap second-2-0.pcap
Let's take a moment to look at the naming of these files. They all have the same form, <name>-<node>-<device>.pcap. For example, the first file in the listing is second-0-0.pcap which is the pcap trace from node zero, device zero. This is the point-to-point net device on node zero. The file second-1-0.pcap is the pcap trace for device zero on node one, also a point-to-point net device; and the file second-2-0.pcap is the pcap trace for device zero on node two.
If you refer back to the topology illustration at the start of the section, you will see that node zero is the leftmost node of the point-to-point link and node one is the node that has both a point-to-point device and a CSMA device. You will see that node two is the first "extra" node on the CSMA network and its device zero was selected as the device to capture the promiscuous-mode trace.

Now, let's follow the echo packet through the internetwork. First, do a tcpdump of the trace file for the leftmost point-to-point node — node zero.

$ tcpdump -nn -tt -r second-0-0.pcap
The first line of the dump indicates that the link type is PPP (point-to-point) which we expect. You then see the echo packet leaving node zero via the device associated with IP address 10.1.1.1 headed for IP address 10.1.2.4 (the rightmost CSMA node). This packet will move over the point-to-point link and be received by the point-to-point net device on node one. Let's take a look:

$ tcpdump -nn -tt -r second-1-0.pcap
You should now see the pcap trace output of the other side of the point-to-point link.
Here we see that the link type is also PPP as we would expect. You see the packet from IP address 10.1.1.1 (that was sent at 2.000000 seconds) headed toward IP address 10.1.2.4 appear on this interface. Now, internally to this node, the packet will be forwarded to the CSMA interface and we should see it pop out on that device headed for its ultimate destination.

Remember that we selected node 2 as the promiscuous sniffer node for the CSMA network so let's then look at second-2-0.pcap and see if it's there.

$ tcpdump -nn -tt -r second-2-0.pcap
You should now see the promiscuous dump of node two, device zero:

reading from file second-2-0.pcap, link-type EN10MB (Ethernet)
2.007698 ARP, Request who-has 10.1.2.4 (ff:ff:ff:ff:ff:ff) tell 10.1.2.1, length 50
2.007710 ARP, Reply 10.1.2.4 is-at 00:00:00:00:00:06, length 50
2.007803 IP 10.1.1.1.49153 > 10.1.2.4.9: UDP, length 1024
2.013815 ARP, Request who-has 10.1.2.1 (ff:ff:ff:ff:ff:ff) tell 10.1.2.4, length 50
2.013828 ARP, Reply 10.1.2.1 is-at 00:00:00:00:00:03, length 50
2.013921 IP 10.1.2.4.9 > 10.1.1.1.49153: UDP, length 1024
As you can see, the link type is now "Ethernet". Something new has appeared, though. The bus network needs ARP, the Address Resolution Protocol. Node one knows it needs to send the packet to IP address 10.1.2.4, but it doesn't know the MAC address of the corresponding node. It broadcasts on the CSMA network (ff:ff:ff:ff:ff:ff) asking for the device that has IP address 10.1.2.4. In this case, the rightmost node replies saying it is at MAC address 00:00:00:00:00:06. Note that node two is not directly involved in this exchange, but is sniffing the network and reporting all of the traffic it sees.

This exchange is seen in the following lines,

2.007698 ARP, Request who-has 10.1.2.4 (ff:ff:ff:ff:ff:ff) tell 10.1.2.1, length 50
2.007710 ARP, Reply 10.1.2.4 is-at 00:00:00:00:00:06, length 50
Then node one, device one goes ahead and sends the echo packet to the UDP echo server at IP address 10.1.2.4.

2.007803 IP 10.1.1.1.49153 > 10.1.2.4.9: UDP, length 1024
The server receives the echo request and turns the packet around trying to send it back to the source. The server knows that this address is on another network that it reaches via IP address 10.1.2.1. This is because we initialized global routing and it has figured all of this out for us. But, the echo server node doesn't know the MAC address of the first CSMA node, so it has to ARP for it just like the first CSMA node had to do.

2.013815 ARP, Request who-has 10.1.2.1 (ff:ff:ff:ff:ff:ff) tell 10.1.2.4, length 50
2.013828 ARP, Reply 10.1.2.1 is-at 00:00:00:00:00:03, length 50
The server then sends the echo back to the forwarding node.

2.013921 IP 10.1.2.4.9 > 10.1.1.1.49153: UDP, length 1024
You can now see the echoed packet coming back onto the point-to-point link as the last line of the trace dump.
Lastly, you can look back at the node that originated the echo

$ tcpdump -nn -tt -r second-0-0.pcap
and see that the echoed packet arrives back at the source at 2.017607 seconds,

reading from file second-0-0.pcap, link-type PPP (PPP)
2.000000 IP 10.1.1.1.49153 > 10.1.2.4.9: UDP, length 1024
2.017607 IP 10.1.2.4.9 > 10.1.1.1.49153: UDP, length 1024
Finally, recall that we added the ability to control the number of CSMA devices in the simulation by command line argument. You can change this argument in the same way as when we looked at changing the number of packets echoed in the first.cc example. Try running the program with the number of "extra" devices set to four:

$ ./waf --run "scratch/mysecond --nCsma=4"
Notice that the echo server has now been relocated to the last of the CSMA nodes, which is 10.1.2.5 instead of the default case, 10.1.2.4.

It is possible that you may not be satisfied with a trace file generated by a bystander in the CSMA network. You may really want to get a trace from a single device and you may not be interested in any other traffic on the network. You can do this fairly easily.

Let's take a look at scratch/mysecond.cc and add that code enabling us to be more specific. ns-3 helpers provide methods that take a node number and device number as parameters. Go ahead and replace the EnablePcap calls with the calls below.
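A sketch of the replacement calls consistent with the discussion that follows (the choice of nodes, device number zero and the non-promiscuous flag come from the surrounding text; the exact form of the calls is an assumption based on the EnablePcap overload that accepts a node number and a device number):

// Trace the point-to-point source node, the echo server node and the
// next-to-last CSMA node; the trailing false requests non-promiscuous traces.
pointToPoint.EnablePcap ("second", p2pNodes.Get (0)->GetId (), 0);
csma.EnablePcap ("second", csmaNodes.Get (nCsma)->GetId (), 0, false);
csma.EnablePcap ("second", csmaNodes.Get (nCsma - 1)->GetId (), 0, false);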
We know that we want to create a pcap file with the base name "second" and we also know that the device of interest in both cases is going to be zero, so those parameters are not really interesting.

In order to get the node number, you have two choices: first, nodes are numbered in a monotonically increasing fashion starting from zero in the order in which you created them. One way to get a node number is to figure this number out "manually" by contemplating the order of node creation. If you take a look at the network topology illustration at the beginning of the file, we did this for you and you can see that the last CSMA node is going to be node number nCsma + 1. This approach can become annoyingly difficult in larger simulations.

An alternate way, which we use here, is to realize that the NodeContainers contain pointers to ns-3 Node Objects. The Node Object has a method called GetId which will return that node's ID, which is the node number we seek. Let's go take a look at the Doxygen for the Node and locate that method, which is further down in the ns-3 core code than we've seen so far; but sometimes you have to search diligently for useful things.

Go to the Doxygen documentation for your release (recall that you can find it on the project web site). You can get to the Node documentation by looking through the "Classes" tab and scrolling down the "Class List" until you find ns3::Node. Select ns3::Node and you will be taken to the documentation for the Node class. If you now scroll down to the GetId method and select it, you will be taken to the detailed documentation for the method. Using the GetId method can make determining node numbers much easier in complex topologies.

Let's clear the old trace files out of the top-level directory to avoid confusion about what is going on,

$ rm *.pcap
$ rm *.tr
If you build the new script and run the simulation setting nCsma to 100,

$ ./waf --run "scratch/mysecond --nCsma=100"
Note that the echo server is now located at 10.1.2.101 which corresponds to having 100 "extra" CSMA nodes with the echo server on the last one. If you list the pcap files in the top level directory you will see,

second-0-0.pcap second-100-0.pcap second-101-0.pcap
The trace file second-0-0.pcap is the "leftmost" point-to-point device which is the echo packet source. The file second-101-0.pcap corresponds to the rightmost CSMA device which is where the echo server resides. You may have noticed that the final parameter on the call to enable pcap tracing on the echo server node was false. This means that the trace gathered on that node was in non-promiscuous mode.

To illustrate the difference between promiscuous and non-promiscuous traces, we also requested a non-promiscuous trace for the next-to-last node. Go ahead and take a look at the tcpdump for second-100-0.pcap.

$ tcpdump -nn -tt -r second-100-0.pcap
You can now see that node 100 is really a bystander in the echo exchange. The only packets that it receives are the ARP requests which are broadcast to the entire CSMA network.
You can now see that node 101 is really the participant in the echo exchange.

reading from file second-101-0.pcap, link-type EN10MB (Ethernet)
2.006698 ARP, Request who-has 10.1.2.101 (ff:ff:ff:ff:ff:ff) tell 10.1.2.1, length 50
2.006698 ARP, Reply 10.1.2.101 is-at 00:00:00:00:00:67, length 50
2.006803 IP 10.1.1.1.49153 > 10.1.2.101.9: UDP, length 1024
2.013803 ARP, Request who-has 10.1.2.1 (ff:ff:ff:ff:ff:ff) tell 10.1.2.101, length 50
2.013828 ARP, Reply 10.1.2.1 is-at 00:00:00:00:00:03, length 50
2.013828 IP 10.1.2.101.9 > 10.1.1.1.49153: UDP, length 1024
This is a convenient place to make a small excursion and make an important point. It may or may not be obvious to you, but whenever one is using a simulation, it is important to understand exactly what is being modeled and what is not. It is tempting, for example, to think of the CSMA devices and channels used in the previous section as if they were real Ethernet devices; and to expect a simulation result to directly reflect what will happen in a real Ethernet. This is not the case.

A model is, by definition, an abstraction of reality. It is ultimately the responsibility of the simulation script author to determine the so-called "range of accuracy" and "domain of applicability" of the simulation as a whole, and therefore its constituent parts.

In some cases, like Csma, it can be fairly easy to determine what is not modeled. By reading the model description (csma.h) you can find that there is no collision detection in the CSMA model and decide on how applicable its use will be in your simulation or what caveats you may want to include with your results. In other cases, it can be quite easy to configure behaviors that might not agree with any reality you can go out and buy. It will prove worthwhile to spend some time investigating a few such instances, and how easily you can swerve outside the bounds of reality in your simulations.

As you have seen, ns-3 provides Attributes which a user can easily set to change model behavior. Consider two of the Attributes of the CsmaNetDevice: Mtu and EncapsulationMode. The Mtu attribute indicates the Maximum Transmission Unit to the device. This is the size of the largest Protocol Data Unit (PDU) that the device can send.

The MTU defaults to 1500 bytes in the CsmaNetDevice. This default corresponds to a number found in RFC 894, "A Standard for the Transmission of IP Datagrams over Ethernet Networks." The number is actually derived from the maximum packet size for 10Base5 (full-spec Ethernet) networks – 1518 bytes. If you subtract the DIX encapsulation overhead for Ethernet packets (18 bytes) you will end up with a maximum possible data size (MTU) of 1500 bytes.

One can also find that the MTU for IEEE 802.3 networks is 1492 bytes. This is because LLC/SNAP encapsulation adds an extra eight bytes of overhead to the packet. In both cases, the underlying hardware can only send 1518 bytes, but the data size is different.

In order to set the encapsulation mode, the CsmaNetDevice provides an Attribute called EncapsulationMode which can take on the values Dix or Llc. These correspond to Ethernet and LLC/SNAP framing respectively.

If one leaves the Mtu at 1500 bytes and changes the encapsulation mode to Llc, the result will be a network that encapsulates 1500 byte PDUs with LLC/SNAP framing resulting in packets of 1526 bytes, which would be illegal
in many networks, since they can transmit a maximum of 1518 bytes per packet. This would most likely result in a simulation that quite subtly does not reflect the reality you might be expecting.

Just to complicate the picture, there exist jumbo frames (1500 < MTU <= 9000 bytes) and super-jumbo (MTU > 9000 bytes) frames that are not officially sanctioned by IEEE but are available in some high-speed (Gigabit) networks and NICs. One could leave the encapsulation mode set to Dix, and set the Mtu Attribute on a CsmaNetDevice to 64000 bytes – even though an associated CsmaChannel DataRate was set at 10 megabits per second. This would essentially model an Ethernet switch made out of vampire-tapped 1980s-style 10Base5 networks that support super-jumbo datagrams. This is certainly not something that was ever made, nor is likely to ever be made, but it is quite easy for you to configure.

In the previous example, you used the command line to create a simulation that had 100 Csma nodes. You could have just as easily created a simulation with 500 nodes. If you were actually modeling that 10Base5 vampire-tap network, the maximum length of a full-spec Ethernet cable is 500 meters, with a minimum tap spacing of 2.5 meters. That means there could only be 200 taps on a real network. You could have quite easily built an illegal network in that way as well. This may or may not result in a meaningful simulation depending on what you are trying to model.

Similar situations can occur in many places in ns-3 and in any simulator. For example, you may be able to position nodes in such a way that they occupy the same space at the same time, or you may be able to configure amplifiers or noise levels that violate the basic laws of physics.

ns-3 generally favors flexibility, and many models will allow freely setting Attributes without trying to enforce any arbitrary consistency or particular underlying spec.

The thing to take home from this is that ns-3 is going to provide a super-flexible base for you to experiment with. It is up to you to understand what you are asking the system to do and to make sure that the simulations you create have some meaning and some connection with a reality defined by you.
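Returning to the Mtu and EncapsulationMode example above, here is a sketch of how those two Attributes could be configured through the CSMA helper; the values are the deliberately unrealistic ones from the discussion, not something you would normally want in a script:

// Configure LLC/SNAP framing and a super-jumbo MTU on devices created by
// this helper; both are legal to configure even though the combination
// does not correspond to real hardware.
csma.SetDeviceAttribute ("EncapsulationMode", StringValue ("Llc"));
csma.SetDeviceAttribute ("Mtu", UintegerValue (64000));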
In this section we are going to further expand our knowledge of ns-3 network devices and channels to cover an example of a wireless network. ns-3 provides a set of 802.11 models that attempt to provide an accurate MAC-level implementation of the 802.11 specification and a "not-so-slow" PHY-level model of the 802.11a specification.

Just as we have seen both point-to-point and CSMA topology helper objects when constructing point-to-point topologies, we will see equivalent Wifi topology helpers in this section. The appearance and operation of these helpers should look quite familiar to you.

We provide an example script in our examples/tutorial directory. This script builds on the second.cc script and adds a Wi-Fi network. Go ahead and open examples/tutorial/third.cc in your favorite editor. You will have already seen enough ns-3 code to understand most of what is going on in this example, but there are a few new things, so we will go over the entire script and examine some of the output.

Just as in the second.cc example (and in all ns-3 examples) the file begins with an emacs mode line and some GPL boilerplate.

Take a look at the ASCII art (reproduced below) that shows the default network topology constructed in the example. You can see that we are going to further extend our example by hanging a wireless network off of the left side. Notice that this is a default network topology since you can actually vary the number of nodes created on the wired and wireless networks. Just as in the second.cc script case, if you change nCsma, it will give you a number of "extra" CSMA nodes. Similarly, you can set nWifi to control how many STA (station) nodes are created in the simulation. There will always be one AP (access point) node on the wireless network. By default there are three "extra" CSMA nodes and three wireless STA nodes.

The code begins by loading module include files just as was done in the second.cc example. There are a couple of new includes corresponding to the wifi module and the mobility module which we will discuss below.
"
You can see that we are adding a new network device to the node on the left side of the point-to-point link that becomes the access point for the wireless network. A number of wireless STA nodes are created to fill out the new 10.1.3.0 network as shown on the left side of the illustration.

After the illustration, the ns-3 namespace is used and a logging component is defined. This should all be quite familiar by now.

using namespace ns3;
NS_LOG_COMPONENT_DEFINE ("ThirdScriptExample");
The main program begins just like second.cc by adding some command line parameters for enabling or disabling logging components and for changing the number of devices created.

bool verbose = true;
uint32_t nCsma = 3;
uint32_t nWifi = 3;
CommandLine cmd;
cmd.AddValue ("nCsma", "Number of \"extra\" CSMA nodes/devices", nCsma);
cmd.AddValue ("nWifi", "Number of wifi STA devices", nWifi);
cmd.AddValue ("verbose", "Tell echo applications to log if true", verbose);
cmd.Parse (argc,argv);
Just as in all of the previous examples, the next step is to create two nodes that we will connect via the point-to-point link.

NodeContainer p2pNodes;
p2pNodes.Create (2);
Next, we see an old friend. We instantiate a PointToPointHelper and set the associated default Attributes
so that we create a five megabit per second transmitter on devices created using the helper and a two millisecond delay on channels created by the helper. We then Install the devices on the nodes and the channel between them.

PointToPointHelper pointToPoint;
pointToPoint.SetDeviceAttribute ("DataRate", StringValue ("5Mbps"));
pointToPoint.SetChannelAttribute ("Delay", StringValue ("2ms"));
Next, we declare another NodeContainer to hold the nodes that will be part of the bus (CSMA) network. We then instantiate a CsmaHelper and set its Attributes as we did in the previous example. We create a NetDeviceContainer to keep track of the created CSMA net devices and then we Install CSMA devices on the selected nodes.

CsmaHelper csma;
csma.SetChannelAttribute ("DataRate", StringValue ("100Mbps"));
csma.SetChannelAttribute ("Delay", TimeValue (NanoSeconds (6560)));
Next, we are going to create the nodes that will be part of the Wi-Fi network. We are going to create a number of "station" nodes as specified by the command line argument, and we are going to use the "leftmost" node of the point-to-point link as the node for the access point.

NodeContainer wifiStaNodes;
wifiStaNodes.Create (nWifi);
NodeContainer wifiApNode = p2pNodes.Get (0);
The next bit of code constructs the wifi devices and the interconnection channel between these wifi nodes. First, we configure the PHY and channel helpers:

YansWifiChannelHelper channel = YansWifiChannelHelper::Default ();
YansWifiPhyHelper phy = YansWifiPhyHelper::Default ();
For simplicity, this code uses the default PHY layer configuration and channel models which are documented in the API doxygen documentation for the YansWifiChannelHelper::Default and YansWifiPhyHelper::Default methods. Once these objects are created, we create a channel object and associate it to our PHY layer object manager to make sure that all the PHY layer objects created by the YansWifiPhyHelper share the same underlying channel, that is, they share the same wireless medium and can communicate and interfere:

phy.SetChannel (channel.Create ());
Once the PHY helper is configured, we can focus on the MAC layer. Here we choose to work with non-QoS MACs. The WifiMacHelper object is used to set MAC parameters.
WifiHelper wifi;
wifi.SetRemoteStationManager ("ns3::AarfWifiManager");
WifiMacHelper mac;
The SetRemoteStationManager method tells the helper the type of rate control algorithm to use. Here, it is asking the helper to use the AARF algorithm — details are, of course, available in Doxygen.

Next, we configure the type of MAC, the SSID of the infrastructure network we want to setup and make sure that our stations don't perform active probing:

Ssid ssid = Ssid ("ns-3-ssid");
mac.SetType ("ns3::StaWifiMac",
             "Ssid", SsidValue (ssid),
             "ActiveProbing", BooleanValue (false));
This code first creates an 802.11 service set identifier (SSID) object that will be used to set the value of the "Ssid" Attribute of the MAC layer implementation. The particular kind of MAC layer that will be created by the helper is specified by Attribute as being of the "ns3::StaWifiMac" type. The "QosSupported" Attribute is set to false by default for WifiMacHelper objects. The combination of these two configurations means that the MAC instance next created will be a non-QoS non-AP station (STA) in an infrastructure BSS (i.e., a BSS with an AP). Finally, the "ActiveProbing" Attribute is set to false. This means that probe requests will not be sent by MACs created by this helper.

Once all the station-specific parameters are fully configured, both at the MAC and PHY layers, we can invoke our now-familiar Install method to create the Wi-Fi devices of these stations:

NetDeviceContainer staDevices;
staDevices = wifi.Install (phy, mac, wifiStaNodes);
We have configured Wi-Fi for all of our STA nodes, and now we need to configure the AP (access point) node. We begin this process by changing the default Attributes of the WifiMacHelper to reflect the requirements of the AP.

mac.SetType ("ns3::ApWifiMac",
             "Ssid", SsidValue (ssid));
In this case, the WifiMacHelper is going to create MAC layers of the "ns3::ApWifiMac" type, the latter specifying that a MAC instance configured as an AP should be created. We do not change the default setting of the "QosSupported" Attribute, so it remains false, disabling 802.11e/WMM-style QoS support at created APs.

The next lines create the single AP which shares the same set of PHY-level Attributes (and channel) as the stations:

NetDeviceContainer apDevices;
apDevices = wifi.Install (phy, mac, wifiApNode);
Now, we are going to add mobility models. We want the STA nodes to be mobile, wandering around inside a bounding box, and we want to make the AP node stationary. We use the MobilityHelper to make this easy for us. First, we instantiate a MobilityHelper object and set some Attributes controlling the "position allocator" functionality.

MobilityHelper mobility;

mobility.SetPositionAllocator ("ns3::GridPositionAllocator",
                               "MinX", DoubleValue (0.0),
                               "MinY", DoubleValue (0.0),
                               "DeltaX", DoubleValue (5.0),
                               "DeltaY", DoubleValue (10.0),
                               "GridWidth", UintegerValue (3),
                               "LayoutType", StringValue ("RowFirst"));
This code tells the mobility helper to use a two-dimensional grid to initially place the STA nodes. Feel free to explore the Doxygen for class ns3::GridPositionAllocator to see exactly what is being done.

We have arranged our nodes on an initial grid, but now we need to tell them how to move. We choose the RandomWalk2dMobilityModel which has the nodes move in a random direction at a random speed around inside a bounding box.

mobility.SetMobilityModel ("ns3::RandomWalk2dMobilityModel",
                           "Bounds", RectangleValue (Rectangle (-50, 50, -50, 50)));
We now tell the MobilityHelper to install the mobility models on the STA nodes.

mobility.Install (wifiStaNodes);
We want the access point to remain in a fixed position during the simulation. We accomplish this by setting the mobility model for this node to be the ns3::ConstantPositionMobilityModel:

mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel");
mobility.Install (wifiApNode);
We now have our nodes, devices and channels created, and mobility models chosen for the Wi-Fi nodes, but we have no protocol stacks present. Just as we have done previously many times, we will use the InternetStackHelper to install these stacks.

InternetStackHelper stack;
stack.Install (csmaNodes);
stack.Install (wifiApNode);
stack.Install (wifiStaNodes);
Just as in the second.cc example script, we are going to use the Ipv4AddressHelper to assign IP addresses to our device interfaces. First we use the network 10.1.1.0 to create the two addresses needed for our two point-to-point devices. Then we use network 10.1.2.0 to assign addresses to the CSMA network and then we assign addresses from network 10.1.3.0 to both the STA devices and the AP on the wireless network.

Ipv4AddressHelper address;
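A minimal sketch of the assignments just described, assuming the device containers p2pDevices and csmaDevices from the earlier sketches plus the staDevices and apDevices containers created above; the interface-container names are illustrative:

address.SetBase ("10.1.1.0", "255.255.255.0");
Ipv4InterfaceContainer p2pInterfaces;
p2pInterfaces = address.Assign (p2pDevices);

address.SetBase ("10.1.2.0", "255.255.255.0");
Ipv4InterfaceContainer csmaInterfaces;
csmaInterfaces = address.Assign (csmaDevices);

address.SetBase ("10.1.3.0", "255.255.255.0");
address.Assign (staDevices);
address.Assign (apDevices);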
We put the echo server on the "rightmost" node in the illustration at the start of the file. We have done this before.

UdpEchoServerHelper echoServer (9);
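Installing and scheduling the server follows the pattern from the earlier scripts; a sketch, assuming the csmaNodes container from the earlier sketch, with the server on the last bus node and running over the usual one-through-ten second window:

ApplicationContainer serverApps = echoServer.Install (csmaNodes.Get (csmaNodes.GetN () - 1));
serverApps.Start (Seconds (1.0));
serverApps.Stop (Seconds (10.0));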
And we put the echo client on the last STA node we created, pointing it to the server on the CSMA network. We have also seen similar operations before.
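A sketch of the client helper, pointed at the server's address on the CSMA network; the csmaInterfaces name follows the address-assignment sketch above, and the attribute values mirror the earlier echo-client examples:

UdpEchoClientHelper echoClient (csmaInterfaces.GetAddress (csmaInterfaces.GetN () - 1), 9);
echoClient.SetAttribute ("MaxPackets", UintegerValue (1));
echoClient.SetAttribute ("Interval", TimeValue (Seconds (1.0)));
echoClient.SetAttribute ("PacketSize", UintegerValue (1024));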
ApplicationContainer clientApps = echoClient.Install (wifiStaNodes.Get (nWifi - 1));
clientApps.Start (Seconds (2.0));
clientApps.Stop (Seconds (10.0));
Since we have built an internetwork here, we need to enable internetwork routing just as we did in the second.cc example script.

Ipv4GlobalRoutingHelper::PopulateRoutingTables ();
One thing that can surprise some users is the fact that the simulation we just created will never "naturally" stop. This is because we asked the wireless access point to generate beacons. It will generate beacons forever, and this will result in simulator events being scheduled into the future indefinitely, so we must tell the simulator to stop even though it may have beacon generation events scheduled. The following line of code tells the simulator to stop so that we don't simulate beacons forever and enter what is essentially an endless loop.

Simulator::Stop (Seconds (10.0));
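We also enable just enough pcap tracing to cover all three networks, as the next paragraph explains; a sketch of those three lines, assuming the helper names above and the apDevices and csmaDevices containers from the earlier sketches:

pointToPoint.EnablePcapAll ("third");
phy.EnablePcap ("third", apDevices.Get (0));
csma.EnablePcap ("third", csmaDevices.Get (0), true);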
These three lines of code will start pcap tracing on both of the point-to-point nodes that serve as our backbone, will start a promiscuous (monitor) mode trace on the Wi-Fi network, and will start a promiscuous trace on the CSMA network. This will let us see all of the traffic with a minimum number of trace files.

Finally, we actually run the simulation, clean up and then exit the program.

  Simulator::Run ();
  Simulator::Destroy ();
  return 0;
}
In order to run this example, you have to copy the third.cc example script into the scratch directory and use Waf to build just as you did with the second.cc example. If you are in the top-level directory of the repository you would type,

$ cp examples/tutorial/third.cc scratch/mythird.cc
$ ./waf
$ ./waf --run scratch/mythird
Again, since we have set up the UDP echo applications just as we did in the second.cc script, you will see similar output.

Waf: Entering directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone/ns-3-dev/build'
'build' finished successfully (0.407s)
Recall that the first message, "Sent 1024 bytes to 10.1.2.4," is the UDP echo client sending a packet to the server. In this case, the client is on the wireless network (10.1.3.0). The second message, "Received 1024 bytes from 10.1.3.3," is the UDP echo server receiving that packet from the client. You will also find four trace files from this simulation, two from node zero and two from node one:

third-0-0.pcap third-0-1.pcap third-1-0.pcap third-1-1.pcap
The file "third-0-0.pcap" corresponds to the point-to-point device on node zero, the left side of the "backbone". The file "third-1-0.pcap" corresponds to the point-to-point device on node one, the right side of the "backbone". The file "third-0-1.pcap" will be the promiscuous (monitor mode) trace from the Wi-Fi network and the file "third-1-1.pcap" will be the promiscuous trace from the CSMA network. Can you verify this by inspecting the code?

Since the echo client is on the Wi-Fi network, let's start there. Let's take a look at the promiscuous (monitor mode) trace we captured on that network.

$ tcpdump -nn -tt -r third-0-1.pcap
You should see some wifi-looking contents you haven't seen here before:

reading from file third-0-1.pcap, link-type IEEE802_11 (802.11)
0.000025 Beacon (ns-3-ssid) [6.0* 9.0 12.0 18.0 24.0 36.0 48.0 54.0 Mbit] IBSS
0.000308 Assoc Request (ns-3-ssid) [6.0 9.0 12.0 18.0 24.0 36.0 48.0 54.0 Mbit]
0.000324 Acknowledgment RA:00:00:00:00:00:08
0.000402 Assoc Response AID(0) :: Successful
0.000546 Acknowledgment RA:00:00:00:00:00:0a
0.000721 Assoc Request (ns-3-ssid) [6.0 9.0 12.0 18.0 24.0 36.0 48.0 54.0 Mbit]
0.000737 Acknowledgment RA:00:00:00:00:00:07
0.000824 Assoc Response AID(0) :: Successful
0.000968 Acknowledgment RA:00:00:00:00:00:0a
0.001134 Assoc Request (ns-3-ssid) [6.0 9.0 12.0 18.0 24.0 36.0 48.0 54.0 Mbit]
0.001150 Acknowledgment RA:00:00:00:00:00:09
0.001273 Assoc Response AID(0) :: Successful
0.001417 Acknowledgment RA:00:00:00:00:00:0a
0.102400 Beacon (ns-3-ssid) [6.0* 9.0 12.0 18.0 24.0 36.0 48.0 54.0 Mbit] IBSS
0.204800 Beacon (ns-3-ssid) [6.0* 9.0 12.0 18.0 24.0 36.0 48.0 54.0 Mbit] IBSS
0.307200 Beacon (ns-3-ssid) [6.0* 9.0 12.0 18.0 24.0 36.0 48.0 54.0 Mbit] IBSS
You can see that the link type is now 802.11 as you would expect. You can probably understand what is going on and find the IP echo request and response packets in this trace. We leave it as an exercise to completely parse the trace dump.

Now, look at the pcap file of the left side of the point-to-point link,

$ tcpdump -nn -tt -r third-0-0.pcap
This is the echo packet going from left to right (from Wi-Fi to CSMA) and back again across the point-to-point link.

Now, look at the pcap file of the right side of the point-to-point link,

$ tcpdump -nn -tt -r third-1-0.pcap
This is also the echo packet going from left to right (from Wi-Fi to CSMA) and back again across the point-to-point link with slightly different timings as you might expect.

The echo server is on the CSMA network, so let's look at the promiscuous trace there:

$ tcpdump -nn -tt -r third-1-1.pcap
This should be easily understood. If you've forgotten, go back and look at the discussion in second.cc. This is the same sequence.

Now, we spent a lot of time setting up mobility models for the wireless network and so it would be a shame to finish up without even showing that the STA nodes are actually moving around during the simulation. Let's do this by hooking into the MobilityModel course change trace source. This is just a sneak peek into the detailed tracing section which is coming up, but this seems a very nice place to get an example in.

As mentioned in the "Tweaking ns-3" section, the ns-3 tracing system is divided into trace sources and trace sinks, and we provide functions to connect the two. We will use the mobility model predefined course change trace source to originate the trace events. We will need to write a trace sink to connect to that source that will display some pretty information for us. Despite its reputation as being difficult, it's really quite simple. Just before the main program of the scratch/mythird.cc script (i.e., just after the NS_LOG_COMPONENT_DEFINE statement), add the following function:

void
CourseChange (std::string context, Ptr<const MobilityModel> model)
{
  Vector position = model->GetPosition ();
  NS_LOG_UNCOND (context << " x = " << position.x << ", y = " << position.y);
}
This code just pulls the position information from the mobility model and unconditionally logs the x and y position of the node. We are going to arrange for this function to be called every time the wireless node with the echo client changes its position. We do this using the Config::Connect function. Add the following lines of code to the script just before the Simulator::Run call.

std::ostringstream oss;
oss << "/NodeList/" << wifiStaNodes.Get (nWifi - 1)->GetId () << "/$ns3::MobilityModel/CourseChange";
Config::Connect (oss.str (), MakeCallback (&CourseChange));
What we do here is to create a string containing the tracing namespace path of the event to which we want to connect.First, we have to figure out which node it is we want using the GetId method as described earlier. In the case of the
default number of CSMA and wireless nodes, this turns out to be node seven and the tracing namespace path to the mobility model would look like,

/NodeList/7/$ns3::MobilityModel/CourseChange
Based on the discussion in the tracing section, you may infer that this trace path references the seventh node in the global NodeList. It specifies what is called an aggregated object of type ns3::MobilityModel. The dollar sign prefix implies that the MobilityModel is aggregated to node seven. The last component of the path means that we are hooking into the "CourseChange" event of that model.

We make a connection between the trace source in node seven with our trace sink by calling Config::Connect and passing this namespace path. Once this is done, every course change event on node seven will be hooked into our trace sink, which will in turn print out the new position.

If you now run the simulation, you will see the course changes displayed as they happen.

'build' finished successfully (5.989s)
/NodeList/7/$ns3::MobilityModel/CourseChange x = 10, y = 0
/NodeList/7/$ns3::MobilityModel/CourseChange x = 10.3841, y = 0.923277
/NodeList/7/$ns3::MobilityModel/CourseChange x = 10.2049, y = 1.90708
/NodeList/7/$ns3::MobilityModel/CourseChange x = 10.8136, y = 1.11368
/NodeList/7/$ns3::MobilityModel/CourseChange x = 10.8452, y = 2.11318
/NodeList/7/$ns3::MobilityModel/CourseChange x = 10.9797, y = 3.10409
/NodeList/7/$ns3::MobilityModel/CourseChange x = 11.3273, y = 4.04175
/NodeList/7/$ns3::MobilityModel/CourseChange x = 12.013, y = 4.76955
/NodeList/7/$ns3::MobilityModel/CourseChange x = 12.4317, y = 5.67771
/NodeList/7/$ns3::MobilityModel/CourseChange x = 11.4607, y = 5.91681
/NodeList/7/$ns3::MobilityModel/CourseChange x = 12.0155, y = 6.74878
/NodeList/7/$ns3::MobilityModel/CourseChange x = 13.0076, y = 6.62336
/NodeList/7/$ns3::MobilityModel/CourseChange x = 12.6285, y = 5.698
/NodeList/7/$ns3::MobilityModel/CourseChange x = 13.32, y = 4.97559
/NodeList/7/$ns3::MobilityModel/CourseChange x = 13.1134, y = 3.99715
/NodeList/7/$ns3::MobilityModel/CourseChange x = 13.8359, y = 4.68851
/NodeList/7/$ns3::MobilityModel/CourseChange x = 13.5953, y = 3.71789
/NodeList/7/$ns3::MobilityModel/CourseChange x = 12.7595, y = 4.26688
/NodeList/7/$ns3::MobilityModel/CourseChange x = 11.7629, y = 4.34913
/NodeList/7/$ns3::MobilityModel/CourseChange x = 11.2292, y = 5.19485
/NodeList/7/$ns3::MobilityModel/CourseChange x = 10.2344, y = 5.09394
/NodeList/7/$ns3::MobilityModel/CourseChange x = 9.3601, y = 4.60846
/NodeList/7/$ns3::MobilityModel/CourseChange x = 8.40025, y = 4.32795
/NodeList/7/$ns3::MobilityModel/CourseChange x = 9.14292, y = 4.99761
/NodeList/7/$ns3::MobilityModel/CourseChange x = 9.08299, y = 5.99581
/NodeList/7/$ns3::MobilityModel/CourseChange x = 8.26068, y = 5.42677
/NodeList/7/$ns3::MobilityModel/CourseChange x = 8.35917, y = 6.42191
/NodeList/7/$ns3::MobilityModel/CourseChange x = 7.66805, y = 7.14466
/NodeList/7/$ns3::MobilityModel/CourseChange x = 6.71414, y = 6.84456
/NodeList/7/$ns3::MobilityModel/CourseChange x = 6.42489, y = 7.80181
7 Tracing
7.1 Background
As mentioned in Using the Tracing System, the whole point of running an ns-3 simulation is to generate output for study. You have two basic strategies to obtain output from ns-3: using generic pre-defined bulk output mechanisms and parsing their content to extract interesting information; or somehow developing an output mechanism that conveys exactly (and perhaps only) the information wanted.

Using pre-defined bulk output mechanisms has the advantage of not requiring any changes to ns-3, but it may require writing scripts to parse and filter for data of interest. Often, PCAP or NS_LOG output messages are gathered during simulation runs and separately run through scripts that use grep, sed or awk to parse the messages and reduce and transform the data to a manageable form. Programs must be written to do the transformation, so this does not come for free. NS_LOG output is not considered part of the ns-3 API, and can change without warning between releases. In addition, NS_LOG output is only available in debug builds, so relying on it imposes a performance penalty. Of course, if the information of interest does not exist in any of the pre-defined output mechanisms, this approach fails.

If you need to add some tidbit of information to the pre-defined bulk mechanisms, this can certainly be done; and if you use one of the ns-3 mechanisms, you may get your code added as a contribution.

ns-3 provides another mechanism, called Tracing, that avoids some of the problems inherent in the bulk output mechanisms. It has several important advantages. First, you can reduce the amount of data you have to manage by only tracing the events of interest to you (for large simulations, dumping everything to disk for post-processing can create I/O bottlenecks). Second, if you use this method, you can control the format of the output directly so you avoid the postprocessing step with sed, awk, perl or python scripts. If you desire, your output can be formatted directly into a form acceptable by gnuplot, for example (see also GnuplotHelper). You can add hooks in the core which can then be accessed by other users, but which will produce no information unless explicitly asked to do so. For these reasons, we believe that the ns-3 tracing system is the best way to get information out of a simulation and is also therefore one of the most important mechanisms to understand in ns-3.
There are many ways to get information out of a program. The most straightforward way is to just print the information directly to the standard output, as in:

#include <iostream>
...
void
SomeFunction (void)
{
  uint32_t x = SOME_INTERESTING_VALUE;
  ...
  std::cout << "The value of x is " << x << std::endl;
  ...
}
Nobody is going to prevent you from going deep into the core of ns-3 and adding print statements. This is insanely easy to do and, after all, you have complete control of your own ns-3 branch. This will probably not turn out to be very satisfactory in the long term, though.

As the number of print statements increases in your programs, the task of dealing with the large number of outputs will become more and more complicated. Eventually, you may feel the need to control what information is being printed in some way, perhaps by turning on and off certain categories of prints, or increasing or decreasing the amount of information you want. If you continue down this path you may discover that you have re-implemented the NS_LOG mechanism (see Using the Logging Module). In order to avoid that, one of the first things you might consider is using NS_LOG itself.

We mentioned above that one way to get information out of ns-3 is to parse existing NS_LOG output for interesting information. If you discover that some tidbit of information you need is not present in existing log output, you could edit the core of ns-3 and simply add your interesting information to the output stream. Now, this is certainly better than adding your own print statements since it follows ns-3 coding conventions and could potentially be useful to other people as a patch to the existing core.

Let's pick a random example. If you wanted to add more logging to the ns-3 TCP socket (tcp-socket-base.cc) you could just add a new message down in the implementation. Notice that in TcpSocketBase::ProcessEstablished () there is no log message for the reception of a SYN+ACK in ESTABLISHED state. You could simply add one, changing the code. Here is the original:

/* Received a packet upon ESTABLISHED state. This function is mimicking the
   role of tcp_rcv_established() in tcp_input.c in Linux kernel. */
void
TcpSocketBase::ProcessEstablished (Ptr<Packet> packet, const TcpHeader& tcpHeader)
{
  NS_LOG_FUNCTION (this << tcpHeader);
  ...
To log the SYN+ACK case, you can add a new NS_LOG_LOGIC in the if statement body:

/* Received a packet upon ESTABLISHED state. This function is mimicking the
   role of tcp_rcv_established() in tcp_input.c in Linux kernel. */
void
TcpSocketBase::ProcessEstablished (Ptr<Packet> packet, const TcpHeader& tcpHeader)
{
  NS_LOG_FUNCTION (this << tcpHeader);
  ...
  else if (tcpflags == (TcpHeader::SYN | TcpHeader::ACK))
    { // No action for received SYN+ACK, it is probably a duplicated packet
      NS_LOG_LOGIC ("TcpSocketBase " << this << " ignoring SYN+ACK");
    }
  ...
This may seem fairly simple and satisfying at first glance, but something to consider is that you will be writing code to add NS_LOG statements and you will also have to write code (as in grep, sed or awk scripts) to parse the log output in order to isolate your information. This is because even though you have some control over what is output by the logging system, you only have control down to the log component level, which is typically an entire source code file.

If you are adding code to an existing module, you will also have to live with the output that every other developer has found interesting. You may find that in order to get the small amount of information you need, you may have to wade
through huge amounts of extraneous messages that are of no interest to you. You may be forced to save huge log files to disk and process them down to a few lines whenever you want to do anything.

Since there are no guarantees in ns-3 about the stability of NS_LOG output, you may also discover that pieces of log output which you depend on disappear or change between releases. If you depend on the structure of the output, you may find other messages being added or deleted which may affect your parsing code.

Finally, NS_LOG output is only available in debug builds; you can't get log output from optimized builds, which run about twice as fast. Relying on NS_LOG imposes a performance penalty.

For these reasons, we consider prints to std::cout and NS_LOG messages to be quick and dirty ways to get more information out of ns-3, but not suitable for serious work.

It is desirable to have a stable facility using stable APIs that allow one to reach into the core system and only get the information required. It is desirable to be able to do this without having to change and recompile the core system. Even better would be a system that notified user code when an item of interest changed or an interesting event happened so the user doesn't have to actively poke around in the system looking for things.

The ns-3 tracing system is designed to work along those lines and is well-integrated with the Attribute and Config subsystems allowing for relatively simple use scenarios.
7.2 Overview
The ns-3 tracing system is built on the concepts of independent tracing sources and tracing sinks, along with a uniform mechanism for connecting sources to sinks.

Trace sources are entities that can signal events that happen in a simulation and provide access to interesting underlying data. For example, a trace source could indicate when a packet is received by a net device and provide access to the packet contents for interested trace sinks. A trace source might also indicate when an interesting state change happens in a model. For example, the congestion window of a TCP model is a prime candidate for a trace source. Every time the congestion window changes, connected trace sinks are notified with the old and new value.

Trace sources are not useful by themselves; they must be connected to other pieces of code that actually do something useful with the information provided by the source. The entities that consume trace information are called trace sinks. Trace sources are generators of data and trace sinks are consumers. This explicit division allows for large numbers of trace sources to be scattered around the system in places which model authors believe might be useful. Inserting trace sources introduces a very small execution overhead.

There can be zero or more consumers of trace events generated by a trace source. One can think of a trace source as a kind of point-to-multipoint information link. Your code looking for trace events from a particular piece of core code could happily coexist with other code doing something entirely different from the same information.

Unless a user connects a trace sink to one of these sources, nothing is output. By using the tracing system, both you and other people hooked to the same trace source are getting exactly what they want and only what they want out of the system. Neither of you is impacting any other user by changing what information is output by the system. If you happen to add a trace source, your work as a good open-source citizen may allow other users to provide new utilities that are perhaps very useful overall, without making any changes to the ns-3 core.
Let's take a few minutes and walk through a simple tracing example. We are going to need a little background on Callbacks to understand what is happening in the example, so we have to take a small detour right away.
Callbacks
The goal of the Callback system in ns-3 is to allow one piece of code to call a function (or method in C++) without any specific inter-module dependency. This ultimately means you need some kind of indirection: you treat the address of the called function as a variable. This variable is called a pointer-to-function variable. The relationship between function and pointer-to-function is really no different than that of object and pointer-to-object.

In C the canonical example of a pointer-to-function is a pointer-to-function-returning-integer (PFI). For a PFI taking one int parameter, this could be declared like,

int (*pfi)(int arg) = 0;
(But read the C++-FAQ Section 33 before writing code like this!) What you get from this is a variable named simply pfi that is initialized to the value 0. If you want to initialize this pointer to something meaningful, you need to have a function with a matching signature. In this case, you could provide a function that looks like:

int MyFunction (int arg) {}
If you have this target, you can initialize the variable to point to your function:

pfi = MyFunction;
You can then call MyFunction indirectly using the more suggestive form of the call:

int result = (*pfi) (1234);
This is suggestive since it looks like you are dereferencing the function pointer just like you would dereference any pointer. Typically, however, people take advantage of the fact that the compiler knows what is going on and will just use a shorter form:

int result = pfi (1234);
This looks like you are calling a function named pfi, but the compiler is smart enough to know to call through the variable pfi indirectly to the function MyFunction.

Conceptually, this is almost exactly how the tracing system works. Basically, a trace sink is a callback. When a trace sink expresses interest in receiving trace events, it adds itself as a Callback to a list of Callbacks internally held by the trace source. When an interesting event happens, the trace source invokes its operator(...) providing zero or more arguments. The operator(...) eventually wanders down into the system and does something remarkably like the indirect call you just saw, providing zero or more parameters, just as the call to pfi above passed one parameter to the target function MyFunction.

The important difference that the tracing system adds is that for each trace source there is an internal list of Callbacks. Instead of just making one indirect call, a trace source may invoke multiple Callbacks. When a trace sink expresses interest in notifications from a trace source, it basically just arranges to add its own function to the callback list.

If you are interested in more details about how this is actually arranged in ns-3, feel free to peruse the Callback section of the ns-3 Manual.
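To make the indirection concrete, here is a tiny, ns-3-free sketch of the idea: a "trace source" that simply keeps a list of sink function pointers and invokes each one when it fires. All names here are invented for illustration; the real ns-3 machinery is templated and type-safe, but the shape is the same.

#include <iostream>
#include <vector>

typedef void (*TraceSink)(int oldValue, int newValue);  // the agreed-upon sink signature

std::vector<TraceSink> g_sinks;                         // the source's internal list of callbacks

void FireTrace (int oldValue, int newValue)             // what "hitting" the source does
{
  for (TraceSink sink : g_sinks)
    {
      (*sink) (oldValue, newValue);                     // the indirect call, just like pfi above
    }
}

void PrintSink (int oldValue, int newValue)             // one interested consumer
{
  std::cout << "Traced " << oldValue << " to " << newValue << std::endl;
}

int main ()
{
  g_sinks.push_back (PrintSink);                        // "connecting" the sink to the source
  FireTrace (0, 1234);                                  // every registered sink gets called
  return 0;
}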
Walkthrough: fourth.cc
We have provided some code to implement what is really the simplest example of tracing that can be assembled. You can find this code in the tutorial directory as fourth.cc. Let's walk through it:

/* -*- Mode:C++; c-file-style:"gnu"; indent-tabs-mode:nil; -*- */
/*
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation;
#include "ns3/object.h"
#include "ns3/uinteger.h"
#include "ns3/traced-value.h"
#include "ns3/trace-source-accessor.h"
#include <iostream>
Most of this code should be quite familiar to you. As mentioned above, the trace system makes heavy use of the Object and Attribute systems, so you will need to include them. The first two includes above bring in the declarations for those systems explicitly. You could use the core module header to get everything at once, but we do the includes explicitly here to illustrate how simple this all really is.

The file, traced-value.h brings in the required declarations for tracing of data that obeys value semantics. In general, value semantics just means that you can pass the object itself around, rather than passing the address of the object. What this all really means is that you will be able to trace all changes made to a TracedValue in a really simple way.

Since the tracing system is integrated with Attributes, and Attributes work with Objects, there must be an ns-3 Object for the trace source to live in. The next code snippet declares and defines a simple Object we can work with.

class MyObject : public Object
{
public:
  static TypeId GetTypeId (void)
  {
    static TypeId tid = TypeId ("MyObject")
      .SetParent (Object::GetTypeId ())
      .SetGroupName ("MyGroup")
      .AddConstructor<MyObject> ()
      .AddTraceSource ("MyInteger",
                       "An integer value to trace.",
                       MakeTraceSourceAccessor (&MyObject::m_myInt),
                       "ns3::TracedValueCallback::Int32")
      ;
    return tid;
  }
  MyObject () {}
  TracedValue<int32_t> m_myInt;
};
The two important lines of code, above, with respect to tracing are the .AddTraceSource and the TracedValue declaration of m_myInt.

The .AddTraceSource provides the "hooks" used for connecting the trace source to the outside world through the Config system. The first argument is a name for this trace source, which makes it visible in the Config system.
The second argument is a help string. Now look at the third argument, in fact focus on the argument of the third argument: &MyObject::m_myInt. This is the TracedValue which is being added to the class; it is always a class data member. (The final argument is the name of a typedef for the TracedValue type, as a string. This is used to generate documentation for the correct Callback function signature, which is useful especially for more general types of Callbacks.)

The TracedValue<> declaration provides the infrastructure that drives the callback process. Any time the underlying value is changed the TracedValue mechanism will provide both the old and the new value of that variable, in this case an int32_t value. The trace sink function traceSink for this TracedValue will need the signature

void (* traceSink)(int32_t oldValue, int32_t newValue);
All trace sinks hooking this trace source must have this signature. We'll discuss below how you can determine the required callback signature in other cases.

Sure enough, continuing through fourth.cc we see:

void
IntTrace (int32_t oldValue, int32_t newValue)
{
  std::cout << "Traced " << oldValue << " to " << newValue << std::endl;
}
This is the definition of a matching trace sink. It corresponds directly to the callback function signature. Once it is connected, this function will be called whenever the TracedValue changes.

We have now seen the trace source and the trace sink. What remains is code to connect the source to the sink, which happens in main:

int
main (int argc, char *argv[])
{
  Ptr<MyObject> myObject = CreateObject<MyObject> ();
  myObject->TraceConnectWithoutContext ("MyInteger", MakeCallback (&IntTrace));
  myObject->m_myInt = 1234;
}
Here we first create the MyObject instance in which the trace source lives.

The next step, the TraceConnectWithoutContext, forms the connection between the trace source and the trace sink. The first argument is just the trace source name "MyInteger" we saw above. Notice the MakeCallback template function. This function does the magic required to create the underlying ns-3 Callback object and associate it with the function IntTrace. TraceConnect makes the association between your provided function and the overloaded operator() in the traced variable referred to by the "MyInteger" Attribute. After this association is made, the trace source will "fire" your provided callback function.

The code to make all of this happen is, of course, non-trivial, but the essence is that you are arranging for something that looks just like the pfi() example above to be called by the trace source. The declaration of the TracedValue<int32_t> m_myInt; in the Object itself performs the magic needed to provide the overloaded assignment operators that will use the operator() to actually invoke the Callback with the desired parameters. The .AddTraceSource performs the magic to connect the Callback to the Config system, and TraceConnectWithoutContext performs the magic to connect your function to the trace source, which is specified by Attribute name.

Let's ignore the bit about context for now.

Finally, the line assigning a value to m_myInt:
myObject->m_myInt = 1234;
should be interpreted as an invocation of operator= on the member variable m_myInt with the integer 1234 passed as a parameter.

Since m_myInt is a TracedValue, this operator is defined to execute a callback that returns void and takes two integer values as parameters, an old value and a new value for the integer in question. That is exactly the function signature for the callback function we provided, IntTrace.

To summarize, a trace source is, in essence, a variable that holds a list of callbacks. A trace sink is a function used as the target of a callback. The Attribute and object type information systems are used to provide a way to connect trace sources to trace sinks. The act of "hitting" a trace source is executing an operator on the trace source which fires callbacks. This results in the trace sinks that registered interest in the source being called with the parameters provided by the source.

If you now build and run this example,

$ ./waf --run fourth
you will see the output from the IntTrace function execute as soon as the trace source is hit:

Traced 0 to 1234
When we executed the code, myObject->m_myInt = 1234;, the trace source fired and automatically provided the before and after values to the trace sink. The function IntTrace then printed this to the standard output.
The TraceConnectWithoutContext call shown above in the simple example is actually very rarely used in the system. More typically, the Config subsystem is used to select a trace source in the system using what is called a Config path. We saw an example of this in the previous section where we hooked the "CourseChange" event when we were experimenting with third.cc.

Recall that we defined a trace sink to print course change information from the mobility models of our simulation. It should now be a lot more clear to you what this function is doing:

void
CourseChange (std::string context, Ptr<const MobilityModel> model)
{
  Vector position = model->GetPosition ();
  NS_LOG_UNCOND (context << " x = " << position.x << ", y = " << position.y);
}
When we connected the "CourseChange" trace source to the above trace sink, we used a Config path to specify the source when we arranged a connection between the pre-defined trace source and the new trace sink:

std::ostringstream oss;
oss << "/NodeList/" << wifiStaNodes.Get (nWifi - 1)->GetId () << "/$ns3::MobilityModel/CourseChange";
Let's try and make some sense of what is sometimes considered relatively mysterious code. For the purposes of discussion, assume that the Node number returned by the GetId() is "7". In this case, the path above turns out to be
"/NodeList/7/$ns3::MobilityModel/CourseChange"
The last segment of a config path must be an Attribute of an Object. In fact, if you had a pointer to the Object that has the "CourseChange" Attribute handy, you could write this just like we did in the previous example. You know by now that we typically store pointers to our Nodes in a NodeContainer. In the third.cc example, the Nodes of interest are stored in the wifiStaNodes NodeContainer. In fact, while putting the path together, we used this container to get a Ptr<Node> which we used to call GetId(). We could have used this Ptr<Node> to call a Connect method directly:

Ptr<Object> theObject = wifiStaNodes.Get (nWifi - 1);
theObject->TraceConnectWithoutContext ("CourseChange", MakeCallback (&CourseChange));
In the third.cc example, we actually wanted an additional "context" to be delivered along with the Callback parameters (which will be explained below) so we could actually use the following equivalent code:

Ptr<Object> theObject = wifiStaNodes.Get (nWifi - 1);
theObject->TraceConnect ("CourseChange", MakeCallback (&CourseChange));
It turns out that the internal code for Config::ConnectWithoutContext and Config::Connect actually find a Ptr<Object> and call the appropriate TraceConnect method at the lowest level.

The Config functions take a path that represents a chain of Object pointers. Each segment of a path corresponds to an Object Attribute. The last segment is the Attribute of interest, and prior segments must be typed to contain or find Objects. The Config code parses and "walks" this path until it gets to the final segment of the path. It then interprets the last segment as an Attribute on the last Object it found while walking the path. The Config functions then call the appropriate TraceConnect or TraceConnectWithoutContext method on the final Object. Let's see what happens in a bit more detail when the above path is walked.

The leading "/" character in the path refers to a so-called namespace. One of the predefined namespaces in the config system is "NodeList" which is a list of all of the nodes in the simulation. Items in the list are referred to by indices into the list, so "/NodeList/7" refers to the eighth Node in the list of nodes created during the simulation (recall indices start at 0). This reference is actually a Ptr<Node> and so is a subclass of an ns3::Object.

As described in the Object Model section of the ns-3 Manual, we make widespread use of object aggregation. This allows us to form an association between different Objects without building a complicated inheritance tree or predeciding what objects will be part of a Node. Each Object in an Aggregation can be reached from the other Objects.

In our example the next path segment being walked begins with the "$" character. This indicates to the config system that the segment is the name of an object type, so a GetObject call should be made looking for that type. It turns out that the MobilityHelper used in third.cc arranges to Aggregate, or associate, a mobility model to each of the wireless Nodes. When you add the "$" you are asking for another Object that has presumably been previously aggregated. You can think of this as switching pointers from the original Ptr<Node> as specified by "/NodeList/7" to its associated mobility model, which is of type ns3::MobilityModel. If you are familiar with GetObject, we have asked the system to do the following:

Ptr<MobilityModel> mobilityModel = node->GetObject<MobilityModel> ()
We are now at the last Object in the path, so we turn our attention to the Attributes of that Object. The MobilityModel class defines an Attribute called "CourseChange". You can see this by looking at the source code in src/mobility/model/mobility-model.cc and searching for "CourseChange" in your favorite editor. You should find

.AddTraceSource ("CourseChange",
                 "The value of the position and/or velocity vector changed",
                 MakeTraceSourceAccessor (&MobilityModel::m_courseChangeTrace),
                 "ns3::MobilityModel::CourseChangeCallback")
If you look for the corresponding declaration of the underlying traced variable in mobility-model.h you will find

TracedCallback<Ptr<const MobilityModel> > m_courseChangeTrace;
The type declaration TracedCallback identifies m_courseChangeTrace as a special list of Callbacks that can be hooked using the Config functions described above. The typedef for the callback function signature is also defined in the header file:

typedef void (* CourseChangeCallback)(Ptr<const MobilityModel> * model);
The MobilityModel class is designed to be a base class providing a common interface for all of the specific subclasses. If you search down to the end of the file, you will see a method defined called NotifyCourseChange():

void
MobilityModel::NotifyCourseChange (void) const
{
  m_courseChangeTrace(this);
}
Derived classes will call into this method whenever they do a course change to support tracing. This method invokes operator() on the underlying m_courseChangeTrace, which will, in turn, invoke all of the registered Callbacks, calling all of the trace sinks that have registered interest in the trace source by calling a Config function.

So, in the third.cc example we looked at, whenever a course change is made in one of the RandomWalk2dMobilityModel instances installed, there will be a NotifyCourseChange() call which calls up into the MobilityModel base class. As seen above, this invokes operator() on m_courseChangeTrace, which in turn, calls any registered trace sinks. In the example, the only code registering an interest was the code that provided the Config path. Therefore, the CourseChange function that was hooked from Node number seven will be the only Callback called.

The final piece of the puzzle is the "context". Recall that we saw an output looking something like the following from third.cc:

/NodeList/7/$ns3::MobilityModel/CourseChange x = 7.27897, y = 2.22677
The first part of the output is the context. It is simply the path through which the config code located the trace source. In the case we have been looking at there can be any number of trace sources in the system corresponding to any number of nodes with mobility models. There needs to be some way to identify which trace source is actually the one that fired the Callback. The easy way is to connect with Config::Connect, instead of Config::ConnectWithoutContext.
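A short sketch of the two connection styles side by side; the path is the one from third.cc, and CourseChangeNoContext is a hypothetical sink whose signature simply omits the leading std::string argument:

// the sink receives the matching Config path as its context argument
Config::Connect ("/NodeList/7/$ns3::MobilityModel/CourseChange",
                 MakeCallback (&CourseChange));

// the sink receives no context, so its signature has no std::string argument
Config::ConnectWithoutContext ("/NodeList/7/$ns3::MobilityModel/CourseChange",
                               MakeCallback (&CourseChangeNoContext));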
The first question that inevitably comes up for new users of the Tracing system is, "Okay, I know that there must be trace sources in the simulation core, but how do I find out what trace sources are available to me?"

The second question is, "Okay, I found a trace source, how do I figure out the Config path to use when I connect to it?"

The third question is, "Okay, I found a trace source and the Config path, how do I figure out what the return type and formal arguments of my callback function need to be?"

The fourth question is, "Okay, I typed that all in and got this incredibly bizarre error message, what in the world does it mean?"

We'll address each of these in turn.
Okay, I know that there must be trace sources in the simulation core, but how do I find out what trace sources are available to me?

The answer to the first question is found in the ns-3 API documentation. If you go to the project web site, ns-3 project, you will find a link to "Documentation" in the navigation bar. If you select this link, you will be taken to our documentation page. There is a link to "Latest Release" that will take you to the documentation for the latest stable release of ns-3. If you select the "API Documentation" link, you will be taken to the ns-3 API documentation page. In the sidebar you should see a hierarchy that begins

 • ns-3
 • ns-3 Documentation
 • All TraceSources
 • All Attributes
 • All GlobalValues

The list of interest to us here is "All TraceSources". Go ahead and select that link. You will see, perhaps not too surprisingly, a list of all of the trace sources available in ns-3.

As an example, scroll down to ns3::MobilityModel. You will find an entry for

CourseChange: The value of the position and/or velocity vector changed
You should recognize this as the trace source we used in the third.cc example. Perusing this list will be helpful.
Okay, I found a trace source, how do I figure out the Config path to use when I connect to it?

If you know which object you are interested in, the "Detailed Description" section for the class will list all available trace sources. For example, starting from the list of "All TraceSources," click on the ns3::MobilityModel link, which will take you to the documentation for the MobilityModel class. Almost at the top of the page is a one line brief description of the class, ending in a link "More...". Click on this link to skip the API summary and go to the "Detailed Description" of the class. At the end of the description will be (up to) three lists:

 • Config Paths: a list of typical Config paths for this class.
 • Attributes: a list of all attributes supplied by this class.
 • TraceSources: a list of all TraceSources available from this class.

First we'll discuss the Config paths.

Let's assume that you have just found the "CourseChange" trace source in the "All TraceSources" list and you want to figure out how to connect to it. You know that you are using (again, from the third.cc example) an ns3::RandomWalk2dMobilityModel. So either click on the class name in the "All TraceSources" list, or find ns3::RandomWalk2dMobilityModel in the "Class List". Either way you should now be looking at the "ns3::RandomWalk2dMobilityModel Class Reference" page.

If you now scroll down to the "Detailed Description" section, after the summary list of class methods and attributes (or just click on the "More..." link at the end of the class brief description at the top of the page) you will see the overall documentation for the class. Continuing to scroll down, find the "Config Paths" list:

Config Paths

ns3::RandomWalk2dMobilityModel is accessible through the following paths with Config::Set and Config::Connect:
 • "/NodeList/[i]/$ns3::MobilityModel/$ns3::RandomWalk2dMobilityModel"

The documentation tells you how to get to the RandomWalk2dMobilityModel Object. Compare the string above with the string we actually used in the example code:

"/NodeList/7/$ns3::MobilityModel"
The difference is due to the fact that two GetObject calls are implied in the string found in the documentation. The first, for $ns3::MobilityModel will query the aggregation for the base class. The second implied GetObject call, for $ns3::RandomWalk2dMobilityModel, is used to cast the base class to the concrete implementation class. The documentation shows both of these operations for you. It turns out that the actual trace source you are looking for is found in the base class.

Look further down in the "Detailed Description" section for the list of trace sources. You will find

No TraceSources are defined for this type.
TraceSources defined in parent class ns3::MobilityModel
 • CourseChange: The value of the position and/or velocity vector changed. Callback signature: ns3::MobilityModel::CourseChangeCallback

This is exactly what you need to know. The trace source of interest is found in ns3::MobilityModel (which you knew anyway). The interesting thing this bit of API Documentation tells you is that you don't need that extra cast in the config path above to get to the concrete class, since the trace source is actually in the base class. Therefore the additional GetObject is not required and you simply use the path:

"/NodeList/[i]/$ns3::MobilityModel"
As an aside, another way to find the Config path is to grep around in the ns-3 codebase for someone who has already figured it out. You should always try to copy someone else's working code before you start to write your own. Try something like:

$ find . -name '*.cc' | xargs grep CourseChange | grep Connect
and you may find your answer along with working code. For example, in this case, src/mobility/examples/main-random-topology.cc has something just waiting for you to use:

Config::Connect ("/NodeList/*/$ns3::MobilityModel/CourseChange",
                 MakeCallback (&CourseChange));
Okay, I found a trace source and the Config path, how do I figure out what the return type and formal arguments of my callback function need to be?

The easiest way is to examine the callback signature typedef, which is given in the "Callback signature" of the trace source in the "Detailed Description" for the class, as shown above. Repeating the "CourseChange" trace source entry from ns3::RandomWalk2dMobilityModel we have:

 • CourseChange: The value of the position and/or velocity vector changed. Callback signature: ns3::MobilityModel::CourseChangeCallback
The callback signature is given as a link to the relevant typedef, where we find

typedef void (* CourseChangeCallback)(std::string context, Ptr<const MobilityModel> * model);

TracedCallback signature for course change notifications. If the callback is connected using ConnectWithoutContext omit the context argument from the signature.

Parameters:
 [in] context The context string supplied by the Trace source.
 [in] model The MobilityModel which is changing course.

As above, to see this in use grep around in the ns-3 codebase for an example. The example above, from src/mobility/examples/main-random-topology.cc, connects the "CourseChange" trace source to the CourseChange function in the same file:

static void
CourseChange (std::string context, Ptr<const MobilityModel> model)
{
  ...
}
There is a one-to-one correspondence between the template parameter list in the declaration and the formal arguments of the callback function. Here, there is one template parameter, which is a Ptr<const MobilityModel>. This tells you that you need a function that returns void and takes a Ptr<const MobilityModel>. For example:

void
CourseChange (Ptr<const MobilityModel> model)
{
  ...
}
That's all you need if you want to Config::ConnectWithoutContext. If you want a context, you need to Config::Connect and use a Callback function that takes a string context, then the template arguments:

void
CourseChange (std::string context, Ptr<const MobilityModel> model)
{
  ...
}
If you want to ensure that your CourseChangeCallback function is only visible in your local file, you can add the keyword static and come up with:

static void
CourseChange (std::string path, Ptr<const MobilityModel> model)
{
  ...
}
Implementation
This section is entirely optional. It is going to be a bumpy ride, especially for those unfamiliar with the details of templates. However, if you get through this, you will have a very good handle on a lot of the ns-3 low level idioms.

So, again, let's figure out what signature of callback function is required for the "CourseChange" trace source. This is going to be painful, but you only need to do this once. After you get through this, you will be able to just look at a TracedCallback and understand it.

The first thing we need to look at is the declaration of the trace source. Recall that this is in mobility-model.h, where we have previously found:

TracedCallback<Ptr<const MobilityModel> > m_courseChangeTrace;
This declaration is for a template. The template parameter is inside the angle-brackets, so we are really interested in finding out what that TracedCallback<> is. If you have absolutely no idea where this might be found, grep is your friend.

We are probably going to be interested in some kind of declaration in the ns-3 source, so first change into the src directory. Then, we know this declaration is going to have to be in some kind of header file, so just grep for it using:

$ find . -name '*.h' | xargs grep TracedCallback
You'll see 303 lines fly by (I piped this through wc to see how bad it was). Although that may seem like a lot, that's not really a lot. Just pipe the output through more and start scanning through it. On the first page, you will see some very suspiciously template-looking stuff.

TracedCallback<T1,T2,T3,T4,T5,T6,T7,T8>::TracedCallback ()
TracedCallback<T1,T2,T3,T4,T5,T6,T7,T8>::ConnectWithoutContext (c ...
TracedCallback<T1,T2,T3,T4,T5,T6,T7,T8>::Connect (const CallbackB ...
TracedCallback<T1,T2,T3,T4,T5,T6,T7,T8>::DisconnectWithoutContext ...
TracedCallback<T1,T2,T3,T4,T5,T6,T7,T8>::Disconnect (const Callba ...
TracedCallback<T1,T2,T3,T4,T5,T6,T7,T8>::operator() (void) const ...
TracedCallback<T1,T2,T3,T4,T5,T6,T7,T8>::operator() (T1 a1) const ..
...
It turns out that all of this comes from the header file traced-callback.h which sounds very promising. You can then take a look at mobility-model.h and see that there is a line which confirms this hunch:
#include "ns3/traced-callback.h"
Of course, you could have gone at this from the other direction and started by looking at the includes in mobility-model.h and noticing the include of traced-callback.h and inferring that this must be the file you want.

In either case, the next step is to take a look at src/core/model/traced-callback.h in your favorite editor to see what is happening. You will see a comment at the top of the file that should be comforting:

An ns3::TracedCallback has almost exactly the same API as a normal ns3::Callback but instead of forwarding calls to a single function (as an ns3::Callback normally does), it forwards calls to a chain of ns3::Callback.

This should sound very familiar and let you know you are on the right track. Just after this comment, you will find

template<typename T1 = empty, typename T2 = empty,
         typename T3 = empty, typename T4 = empty,
         typename T5 = empty, typename T6 = empty,
         typename T7 = empty, typename T8 = empty>
class TracedCallback
{
  ...
This tells you that TracedCallback is a templated class. It has eight possible type parameters with default values. Go back and compare this with the declaration you are trying to understand:

TracedCallback<Ptr<const MobilityModel> > m_courseChangeTrace;
The typename T1 in the templated class declaration corresponds to the Ptr<const MobilityModel> in the declaration above. All of the other type parameters are left as defaults. Looking at the constructor really doesn't tell you much. The one place where you have seen a connection made between your Callback function and the tracing system is in the Connect and ConnectWithoutContext functions. If you scroll down, you will see a ConnectWithoutContext method here:

template<typename T1, typename T2,
         typename T3, typename T4,
         typename T5, typename T6,
         typename T7, typename T8>
void
TracedCallback<T1,T2,T3,T4,T5,T6,T7,T8>::ConnectWithoutContext ...
{
  Callback<void,T1,T2,T3,T4,T5,T6,T7,T8> cb;
  cb.Assign (callback);
  m_callbackList.push_back (cb);
}
You are now in the belly of the beast. When the template is instantiated for the declaration above, the compiler will replace T1 with Ptr<const MobilityModel>.

void
TracedCallback<Ptr<const MobilityModel> >::ConnectWithoutContext ...
{
  Callback<void, Ptr<const MobilityModel> > cb;
  cb.Assign (callback);
  m_callbackList.push_back (cb);
}
You can now see the implementation of everything we've been talking about. The code creates a Callback of the right type and assigns your function to it. This is the equivalent of the pfi = MyFunction we discussed at the start of this section. The code then adds the Callback to the list of Callbacks for this source. The only thing left is to look at the definition of Callback. Using the same grep trick as we used to find TracedCallback, you will be able to find that the file ./core/callback.h is the one we need to look at.

If you look down through the file, you will see a lot of probably almost incomprehensible template code. You will eventually come to some API Documentation for the Callback template class, though. Fortunately, there is some English:

Callback template class. This class template implements the Functor Design Pattern. It is used to declare the type of a Callback:
 • the first non-optional template argument represents the return type of the callback.
 • the remaining (optional) template arguments represent the type of the subsequent arguments to the callback.
 • up to nine arguments are supported.

We are trying to figure out what the

Callback<void, Ptr<const MobilityModel> > cb;
declaration means. Now we are in a position to understand that the first (non-optional) template argument, void, represents the return type of the Callback. The second (optional) template argument, Ptr<const MobilityModel>, represents the type of the first argument to the callback.

The Callback in question is your function to receive the trace events. From this you can infer that you need a function that returns void and takes a Ptr<const MobilityModel>. For example,

void
CourseChangeCallback (Ptr<const MobilityModel> model)
{
  ...
}
That's all you need if you want to Config::ConnectWithoutContext. If you want a context, you need to Config::Connect and use a Callback function that takes a string context. This is because the Connect function will provide the context for you. You'll need:

void
CourseChangeCallback (std::string context, Ptr<const MobilityModel> model)
{
  ...
}
If you want to ensure that your CourseChangeCallback is only visible in your local file, you can add the keyword static and come up with:

static void
CourseChangeCallback (std::string path, Ptr<const MobilityModel> model)
{
  ...
}
which is exactly what we used in the third.cc example. Perhaps you should now go back and reread the previous section (Take My Word for It).

If you are interested in more details regarding the implementation of Callbacks, feel free to take a look at the ns-3 manual. They are one of the most frequently used constructs in the low-level parts of ns-3. It is, in my opinion, a quite
elegant thing.
7.2.7 TracedValues
Earlier in this section, we presented a simple piece of code that used a TracedValue<int32_t> to demonstrate the basics of the tracing code. We just glossed over what a TracedValue really is and how to find the return type and formal arguments for the callback.

As we mentioned, the file, traced-value.h brings in the required declarations for tracing of data that obeys value semantics. In general, value semantics just means that you can pass the object itself around, rather than passing the address of the object. We extend that requirement to include the full set of assignment-style operators that are pre-defined for plain-old-data (POD) types:

 operator= (assignment)
 operator*=   operator/=
 operator+=   operator-=
 operator++ (both prefix and postfix)
 operator-- (both prefix and postfix)
 operator<<=  operator>>=
 operator&=   operator|=
 operator%=   operator^=

What this all really means is that you will be able to trace all changes made using those operators to a C++ object which has value semantics.

The TracedValue<> declaration we saw above provides the infrastructure that overloads the operators mentioned above and drives the callback process. On use of any of the operators above with a TracedValue it will provide both the old and the new value of that variable, in this case an int32_t value. By inspection of the TracedValue declaration, we know the trace sink function will have arguments (int32_t oldValue, int32_t newValue). The return type for a TracedValue callback function is always void, so the expected callback signature for the sink function traceSink will be:

void (* traceSink)(int32_t oldValue, int32_t newValue);
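For instance, continuing with the MyObject instance from the fourth.cc walkthrough, once the "MyInteger" source has been hooked as shown earlier, any of those operators fires the same sink (a sketch; the traced values shown are what the overloads would report):

myObject->m_myInt = 10;    // IntTrace sees (0, 10)
myObject->m_myInt += 5;    // IntTrace sees (10, 15)
myObject->m_myInt++;       // IntTrace sees (15, 16)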
The .AddTraceSource in the GetTypeId method provides the “hooks” used for connecting the trace source to the outside world through the Config system. We already discussed the first three arguments to AddTraceSource: the Attribute name for the Config system, a help string, and the address of the TracedValue class data member. The final string argument, “ns3::TracedValueCallback::Int32” in the example, is the name of a typedef for the callback function signature. We require these signatures to be defined, and give the fully qualified type name to AddTraceSource, so the API documentation can link a trace source to the function signature. For TracedValue the signature is straightforward; for TracedCallbacks we’ve already seen that the API docs really help.
Let’s do an example taken from one of the best-known books on TCP around. “TCP/IP Illustrated, Volume 1: The Protocols,” by W. Richard Stevens is a classic. I just flipped the book open and ran across a nice plot of both the congestion window and sequence numbers versus time on page 366. Stevens calls this, “Figure 21.10. Value of cwnd and send sequence number while data is being transmitted.” Let’s just recreate the cwnd part of that plot in ns-3 using the tracing system and gnuplot.
The first thing to think about is how we want to get the data out. What is it that we need to trace? So let’s consult the “All Trace Sources” list to see what we have to work with. Recall that this is found in the ns-3 API Documentation. If you scroll through the list, you will eventually find:

ns3::TcpSocketBase
• CongestionWindow: The TCP connection’s congestion window
• SlowStartThreshold: TCP slow start threshold (bytes)

It turns out that the ns-3 TCP implementation lives (mostly) in the file src/internet/model/tcp-socket-base.cc while congestion control variants are in files such as src/internet/model/tcp-bic.cc. If you don’t know this a priori, you can use the recursive grep trick:

$ find . -name '*.cc' | xargs grep -i tcp
You will find page after page of instances of tcp pointing you to that file.

Bringing up the class documentation for TcpSocketBase and skipping to the list of TraceSources, you will find:

TraceSources
• CongestionWindow: The TCP connection’s congestion window
  Callback signature: ns3::TracedValueCallback::Uint32

Clicking on the callback typedef link, we see the signature you now know to expect:

typedef void(* ns3::TracedValueCallback::Uint32)(uint32_t oldValue, uint32_t newValue)
You should now understand this code completely. If we have a pointer to the TcpSocketBase object, we can TraceConnect to the “CongestionWindow” trace source if we provide an appropriate callback target. This is the same kind of trace source that we saw in the simple example at the start of this section, except that we are talking about uint32_t instead of int32_t. And we know that we have to provide a callback function with that signature.
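In other words, given a pointer to the socket, the connection is a one-liner. A minimal sketch (the function and variable names here are ours, not from the tutorial sources):

#include "ns3/core-module.h"
#include "ns3/network-module.h"

using namespace ns3;

// Trace sink matching the CongestionWindow signature: void (uint32_t, uint32_t).
static void
CwndTracer (uint32_t oldCwnd, uint32_t newCwnd)
{
  NS_LOG_UNCOND (oldCwnd << " -> " << newCwnd);
}

// Assuming ns3TcpSocket is a Ptr<Socket> whose implementation is TcpSocketBase:
void
HookCwnd (Ptr<Socket> ns3TcpSocket)
{
  ns3TcpSocket->TraceConnectWithoutContext ("CongestionWindow",
                                            MakeCallback (&CwndTracer));
}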
It’s always best to try and find working code lying around that you can modify, rather than starting from scratch. So the first order of business now is to find some code that already hooks the “CongestionWindow” trace source and see if we can modify it. As usual, grep is your friend:

$ find . -name '*.cc' | xargs grep CongestionWindow
This should look very familiar to you. We mentioned above that if we had a pointer to the TcpSocketBase, we could TraceConnect to the “CongestionWindow” trace source. That’s exactly what we have here; so it turns out that this line of code does exactly what we want. Let’s go ahead and extract the code we need from this function (Ns3TcpCwndTestCase1::DoRun (void)). If you look at this function, you will find that it looks just like an ns-3 script. It turns out that is exactly what it is. It is a script run by the test framework, so we can just pull it out and wrap it in main instead of in DoRun. Rather than walk through this step by step, we have provided the file that results from porting this test back to a native ns-3 script – examples/tutorial/fifth.cc.
The fifth.cc example demonstrates an extremely important rule that you must understand before using any kind of trace source: you must ensure that the target of a Config::Connect command exists before trying to use it. This is no different than saying an object must be instantiated before trying to call it. Although this may seem obvious when stated this way, it does trip up many people trying to use the system for the first time.

Let’s return to basics for a moment. There are three basic execution phases that exist in any ns-3 script. The first phase is sometimes called “Configuration Time” or “Setup Time,” and exists during the period when the main function of your script is running, but before Simulator::Run is called. The second phase is sometimes called “Simulation Time” and exists during the time period when Simulator::Run is actively executing its events. After it completes executing the simulation, Simulator::Run will return control back to the main function. When this happens, the script enters what can be called the “Teardown Phase,” which is when the structures and objects created during setup are taken apart and released.

Perhaps the most common mistake made in trying to use the tracing system is assuming that entities constructed dynamically during simulation time are available during configuration time. In particular, an ns-3 Socket is a dynamic object often created by Applications to communicate between Nodes. An ns-3 Application always has a “Start Time” and a “Stop Time” associated with it. In the vast majority of cases, an Application will not attempt to create a dynamic object until its StartApplication method is called at some “Start Time”. This is to ensure that the simulation is completely configured before the app tries to do anything (what would happen if it tried to connect to a Node that didn’t exist yet during configuration time?). As a result, during the configuration phase you can’t connect a trace source to a trace sink if one of them is created dynamically during the simulation.

The two solutions to this conundrum are:

1. Create a simulator event that is run after the dynamic object is created and hook the trace when that event is executed (sketched just below); or
2. Create the dynamic object at configuration time, hook it then, and give the object to the system to use during simulation time.

We took the second approach in the fifth.cc example. This decision required us to create the MyApp Application, the entire purpose of which is to take a Socket as a parameter.
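For completeness, here is a minimal sketch of the first approach: schedule an event just after the application’s start time and do the hookup there, using a Config path to find the socket that now exists. The particular path and times below are illustrative assumptions (they presume the first TCP socket on node 0 and an application start time of 1.0 s):

#include "ns3/core-module.h"

using namespace ns3;

static void
CwndTracer (uint32_t oldCwnd, uint32_t newCwnd)
{
  NS_LOG_UNCOND (Simulator::Now ().GetSeconds () << "\t" << newCwnd);
}

// Runs as a simulator event, after the socket has been created by the application.
static void
TraceCwnd (void)
{
  Config::ConnectWithoutContext ("/NodeList/0/$ns3::TcpL4Protocol/SocketList/0/CongestionWindow",
                                 MakeCallback (&CwndTracer));
}

// At configuration time, schedule the hookup to run just after the app starts:
//   Simulator::Schedule (Seconds (1.001), &TraceCwnd);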
Now, let’s take a look at the example program we constructed by dissecting the congestion window test. Open examples/tutorial/fifth.cc in your favorite editor. You should see some familiar looking code:

/* -*-
#include <fstream>
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/internet-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/applications-module.h"
NS_LOG_COMPONENT_DEFINE ("FifthScriptExample");
This has all been covered, so we won’t rehash it. The next lines of source are the network illustration and a comment addressing the problem described above with Socket.

// ===========================================================================
//
//         node 0                 node 1
//   +----------------+    +----------------+
//   |    ns-3 TCP    |    |    ns-3 TCP    |
//   +----------------+    +----------------+
//   |    10.1.1.1    |    |    10.1.1.2    |
//   +----------------+    +----------------+
//   | point-to-point |    | point-to-point |
//   +----------------+    +----------------+
//           |                     |
//           +---------------------+
//                5 Mbps, 2 ms
//
//
// We want to look at changes in the ns-3 TCP congestion window.  We need
// to crank up a flow and hook the CongestionWindow attribute on the socket
// of the sender.  Normally one would use an on-off application to generate a
// flow, but this has a couple of problems.  First, the socket of the on-off
// application is not created until Application Start time, so we wouldn't be
// able to hook the socket (now) at configuration time.  Second, even if we
// could arrange a call after start time, the socket is not public so we
// couldn't get at it.
//
// So, we can cook up a simple version of the on-off application that does
// what we want.  On the plus side we don't need all of the complexity of the
// on-off application.  On the minus side, we don't have a helper, so we have
// to get a little more involved in the details, but this is trivial.
//
// So first, we create a socket and do the trace connect on it; then we pass
// this socket into the constructor of our simple application which we then
// install in the source node.
// ===========================================================================
class MyApp : public Application
{
public:
  MyApp ();
  virtual ~MyApp ();

  void Setup (Ptr<Socket> socket, Address address, uint32_t packetSize,
              uint32_t nPackets, DataRate dataRate);

private:
  virtual void StartApplication (void);
  virtual void StopApplication (void);

  void ScheduleTx (void);
  void SendPacket (void);

  Ptr<Socket>     m_socket;
  Address         m_peer;
  uint32_t        m_packetSize;
  uint32_t        m_nPackets;
  DataRate        m_dataRate;
  EventId         m_sendEvent;
  bool            m_running;
  uint32_t        m_packetsSent;
};
You can see that this class inherits from the ns-3 Application class. Take a look atsrc/network/model/application.h if you are interested in what is inherited. The MyApp class isobligated to override the StartApplication and StopApplication methods. These methods are automati-cally called when MyApp is required to start and stop sending data during the simulation.
Starting/Stopping Applications
It is worthwhile to spend a bit of time explaining how events actually get started in the system. This is another fairly deep explanation, and can be ignored if you aren’t planning on venturing down into the guts of the system. It is useful, however, in that the discussion touches on how some very important parts of ns-3 work and exposes some important idioms. If you are planning on implementing new models, you probably want to understand this section.

The most common way to start pumping events is to start an Application. This is done as the result of the following (hopefully) familiar lines of an ns-3 script:

ApplicationContainer apps = ...
apps.Start (Seconds (1.0));
apps.Stop (Seconds (10.0));
simulator at the appropriate time. In the case of MyApp you will find that MyApp::StartApplication does the initial Bind and Connect on the socket, and then starts data flowing by calling MyApp::SendPacket. MyApp::StopApplication stops generating packets by cancelling any pending send events, then closes the socket.

One of the nice things about ns-3 is that you can completely ignore the implementation details of how your Application is “automagically” called by the simulator at the correct time. But since we have already ventured deep into ns-3, let’s go for it.

If you look at src/network/model/application.cc you will find that the SetStartTime method of an Application just sets the member variable m_startTime and the SetStopTime method just sets m_stopTime. From there, without some hints, the trail will probably end.

The key to picking up the trail again is to know that there is a global list of all of the nodes in the system. Whenever you create a node in a simulation, a pointer to that Node is added to the global NodeList.

Take a look at src/network/model/node-list.cc and search for NodeList::Add. The public static implementation calls into a private implementation called NodeListPriv::Add. This is a relatively common idiom in ns-3. So, take a look at NodeListPriv::Add. There you will find:

Simulator::ScheduleWithContext (index, TimeStep (0), &Node::Initialize, node);
This tells you that whenever a Node is created in a simulation, as a side-effect, a call to that node’s Initialize method is scheduled for you that happens at time zero. Don’t read too much into that name, yet. It doesn’t mean that the Node is going to start doing anything; it can be interpreted as an informational call into the Node telling it that the simulation has started, not a call for action telling the Node to start doing something.

So, NodeList::Add indirectly schedules a call to Node::Initialize at time zero to advise a new Node that the simulation has started. If you look in src/network/model/node.h you will, however, not find a method called Node::Initialize. It turns out that the Initialize method is inherited from class Object. All objects in the system can be notified when the simulation starts, and objects of class Node are just one kind of those objects.

Take a look at src/core/model/object.cc next and search for Object::Initialize. This code is not as straightforward as you might have expected, since ns-3 Objects support aggregation. The code in Object::Initialize then loops through all of the objects that have been aggregated together and calls their DoInitialize method. This is another idiom that is very common in ns-3, sometimes called the “template design pattern”: a public non-virtual API method, which stays constant across implementations, calls a private virtual implementation method that is inherited and implemented by subclasses. The names are typically something like MethodName for the public API and DoMethodName for the private API.

This tells us that we should look for a Node::DoInitialize method in src/network/model/node.cc for the method that will continue our trail. If you locate the code, you will find a method that loops through all of the devices in the Node and then all of the applications in the Node calling device->Initialize and application->Initialize respectively.

You may already know that classes Device and Application both inherit from class Object and so the next step will be to look at what happens when Application::DoInitialize is called. Take a look at src/network/model/application.cc and you will find:

void
Application::DoInitialize (void)
{
  m_startEvent = Simulator::Schedule (m_startTime, &Application::StartApplication, this);
  if (m_stopTime != TimeStep (0))
    {
      m_stopEvent = Simulator::Schedule (m_stopTime, &Application::StopApplication, this);
    }
  Object::DoInitialize ();
}
Here, we finally come to the end of the trail. If you have kept it all straight: when you implement an ns-3 Application, your new application inherits from class Application. You override the StartApplication and StopApplication methods and provide mechanisms for starting and stopping the flow of data out of your new Application. When a Node is created in the simulation, it is added to a global NodeList. The act of adding a Node to this NodeList causes a simulator event to be scheduled for time zero, which calls the Node::Initialize method of the newly added Node when the simulation starts. Since a Node inherits from Object, this calls the Object::Initialize method on the Node which, in turn, calls the DoInitialize methods on all of the Objects aggregated to the Node (think mobility models). Since the Node Object has overridden DoInitialize, that method is called when the simulation starts. The Node::DoInitialize method calls the Initialize methods of all of the Applications on the node. Since Applications are also Objects, this causes Application::DoInitialize to be called. When Application::DoInitialize is called, it schedules events for the StartApplication and StopApplication calls on the Application. These calls are designed to start and stop the flow of data from the Application.

This has been another fairly long journey, but it only has to be made once, and you now understand another very deep piece of ns-3.
MyApp::~MyApp ()
{
  m_socket = 0;
}
The existence of the next bit of code is the whole reason why we wrote this Application in the first place.

void
MyApp::Setup (Ptr<Socket> socket, Address address, uint32_t packetSize,
              uint32_t nPackets, DataRate dataRate)
{
  m_socket = socket;
  m_peer = address;
  m_packetSize = packetSize;
  m_nPackets = nPackets;
  m_dataRate = dataRate;
}
This code should be pretty self-explanatory. We are just initializing member variables. The important one from the perspective of tracing is the Ptr<Socket> socket which we needed to provide to the application during configuration time. Recall that we are going to create the Socket as a TcpSocket (which is implemented by TcpSocketBase) and hook its “CongestionWindow” trace source before passing it to the Setup method.
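A sketch of what that configuration-time sequence looks like, modeled on fifth.cc (the packet count, size, data rate, and the nodes and sinkAddress variables are assumed to have been set up earlier in main; CwndChange is the trace sink shown in the Trace Sinks section below):

// Create the TCP socket explicitly so we can hook it at configuration time.
Ptr<Socket> ns3TcpSocket = Socket::CreateSocket (nodes.Get (0),
                                                 TcpSocketFactory::GetTypeId ());
ns3TcpSocket->TraceConnectWithoutContext ("CongestionWindow",
                                          MakeCallback (&CwndChange));

// Hand the pre-hooked socket to our application.
Ptr<MyApp> app = CreateObject<MyApp> ();
app->Setup (ns3TcpSocket, sinkAddress, 1040, 1000, DataRate ("1Mbps"));
nodes.Get (0)->AddApplication (app);
app->SetStartTime (Seconds (1.));
app->SetStopTime (Seconds (20.));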
void
MyApp::StartApplication (void)
{
  m_running = true;
  m_packetsSent = 0;
  m_socket->Bind ();
  m_socket->Connect (m_peer);
  SendPacket ();
}
The above code is the overridden implementation of Application::StartApplication that will be automatically called by the simulator to start our Application running at the appropriate time. You can see that it does a Socket Bind operation. If you are familiar with Berkeley Sockets this shouldn’t be a surprise. It performs the required work on the local side of the connection just as you might expect. The following Connect will do what is required to establish a connection with the TCP at Address m_peer. It should now be clear why we need to defer a lot of this to simulation time, since the Connect is going to need a fully functioning network to complete. After the Connect, the Application then starts creating simulation events by calling SendPacket.

The next bit of code explains to the Application how to stop creating simulation events.

void
MyApp::StopApplication (void)
{
  m_running = false;
  if (m_sendEvent.IsRunning ())
    {
      Simulator::Cancel (m_sendEvent);
    }

  if (m_socket)
    {
      m_socket->Close ();
    }
}
Every time a simulation event is scheduled, an Event is created. If the Event is pending execution or executing, its method IsRunning will return true. In this code, if IsRunning() returns true, we Cancel the event which removes it from the simulator event queue. By doing this, we break the chain of events that the Application is using to keep sending its Packets and the Application goes quiet. After we quiet the Application we Close the socket which tears down the TCP connection.

The socket is actually deleted in the destructor when the m_socket = 0 is executed. This removes the last reference to the underlying Ptr<Socket> which causes the destructor of that Object to be called.

Recall that StartApplication called SendPacket to start the chain of events that describes the Application behavior.

void
MyApp::SendPacket (void)
{
  Ptr<Packet> packet = Create<Packet> (m_packetSize);
  m_socket->Send (packet);
Here, you see that SendPacket does just that. It creates a Packet and then does a Send which, if you know Berkeley Sockets, is probably just what you expected to see.

It is the responsibility of the Application to keep scheduling the chain of events, so the next lines call ScheduleTx to schedule another transmit event (a SendPacket) until the Application decides it has sent enough.

void
MyApp::ScheduleTx (void)
{
  if (m_running)
    {
      Time tNext (Seconds (m_packetSize * 8 / static_cast<double> (m_dataRate.GetBitRate ())));
      m_sendEvent = Simulator::Schedule (tNext, &MyApp::SendPacket, this);
    }
}
Here, you see that ScheduleTx does exactly that. If the Application is running (if StopApplication has not been called) it will schedule a new event, which calls SendPacket again. The alert reader will spot something that also trips up new users. The data rate of an Application is just that: it has nothing to do with the data rate of an underlying Channel. It is the rate at which the Application produces bits, and it does not take into account any overhead for the various protocols or channels that it uses to transport the data. If you set the data rate of an Application to the same data rate as your underlying Channel you will eventually get a buffer overflow.
Trace Sinks
The whole point of this exercise is to get trace callbacks from TCP indicating the congestion window has been updated. The next piece of code implements the corresponding trace sink:

static void
CwndChange (uint32_t oldCwnd, uint32_t newCwnd)
{
  NS_LOG_UNCOND (Simulator::Now ().GetSeconds () << "\t" << newCwnd);
}
This should be very familiar to you now, so we won’t dwell on the details. This function just logs the current simulation time and the new value of the congestion window every time it is changed. You can probably imagine that you could load the resulting output into a graphics program (gnuplot or Excel) and immediately see a nice graph of the congestion window behavior over time.

We added a new trace sink to show where packets are dropped. We are going to add an error model to this code also, so we wanted to demonstrate this working.

static void
RxDrop (Ptr<const Packet> p)
{
  NS_LOG_UNCOND ("RxDrop at " << Simulator::Now ().GetSeconds ());
}
This trace sink will be connected to the “PhyRxDrop” trace source of the point-to-point NetDevice. Thistrace source fires when a packet is dropped by the physical layer of a NetDevice. If you take a smalldetour to the source (src/point-to-point/model/point-to-point-net-device.cc) you will seethat this trace source refers to PointToPointNetDevice::m_phyRxDropTrace. If you then look insrc/point-to-point/model/point-to-point-net-device.h for this member variable, you will findthat it is declared as a TracedCallback<Ptr<const Packet> >. This should tell you that the callback targetshould be a function that returns void and takes a single parameter which is a Ptr<const Packet> (assuming weuse ConnectWithoutContext) – just what we have above.
Main Program

The next few lines of code show something new. If we trace a connection that behaves perfectly, we will end up with a monotonically increasing congestion window. To see any interesting behavior, we really want to introduce link errors which will drop packets, cause duplicate ACKs and trigger the more interesting behaviors of the congestion window. ns-3 provides ErrorModel objects which can be attached to Channels. We are using the RateErrorModel, which allows us to introduce errors into a Channel at a given rate. We then set the resulting instantiated RateErrorModel as the error model used by the point-to-point NetDevice. The rest of the setup should be familiar: the script creates the nodes and point-to-point devices, installs the internet stack and assigns IP addresses for the point-to-point devices.

Since we are using TCP, we need something on the destination Node to receive TCP connections and data. The PacketSink Application is commonly used in ns-3 for that purpose.
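A sketch of the receiver-side setup along the lines of fifth.cc (the port number and stop time here are illustrative; nodes and interfaces are assumed to have been created earlier in the script):

// A PacketSink on node 1 accepts the TCP connection and absorbs the data.
uint16_t sinkPort = 8080;
Address sinkAddress (InetSocketAddress (interfaces.GetAddress (1), sinkPort));
PacketSinkHelper packetSinkHelper ("ns3::TcpSocketFactory",
                                   InetSocketAddress (Ipv4Address::GetAny (), sinkPort));
ApplicationContainer sinkApps = packetSinkHelper.Install (nodes.Get (1));
sinkApps.Start (Seconds (0.));
sinkApps.Stop (Seconds (20.));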
This code instantiates a PacketSinkHelper and tells it to create sockets using the class ns3::TcpSocketFactory. This class implements a design pattern called “object factory”, which is a commonly used mechanism for specifying a class used to create objects in an abstract way. Here, instead of having to create the objects themselves, you provide the PacketSinkHelper a string that specifies a TypeId used to create them. The socket on the sending Node, by contrast, is created directly with an explicit TypeId for the object factory used to create the socket. This is a slightly lower level call than the PacketSinkHelper call above, and uses an explicit C++ type instead of one referred to by a string. Otherwise, it is conceptually the same thing.

Once the TcpSocket is created and attached to the Node, we can use TraceConnectWithoutContext to connect the CongestionWindow trace source to our trace sink.

Recall that we coded an Application so we could take that Socket we just made (during configuration time) and use it in simulation time. We now have to instantiate that Application. We didn’t go to any trouble to create a helper to manage the Application, so we are going to have to create and install it “manually”. This is actually quite easy: the Setup call tells the Application what Socket to use, what address to connect to, how much data to send at each send event, how many send events to generate and the rate at which to produce data from those events.

Next, we manually add the MyApp Application to the source Node and explicitly call the Start and Stop methods on the Application to tell it when to start and stop doing its thing.

We need to actually do the connect from the receiver point-to-point NetDevice drop event to our RxDrop callback now.

devices.Get (1)->TraceConnectWithoutContext ("PhyRxDrop", MakeCallback (&RxDrop));
It should now be obvious that we are getting a reference to the receiving Node’s NetDevice from its container and connecting the trace source defined by the attribute “PhyRxDrop” on that device to the trace sink RxDrop.

Finally, we tell the simulator to override any Applications and just stop processing events at 20 seconds into the simulation.

  Simulator::Stop (Seconds (20));
  Simulator::Run ();
  Simulator::Destroy ();
return 0;}
Recall that as soon as Simulator::Run is called, configuration time ends, and simulation time begins. All of the work we orchestrated by creating the Application and teaching it how to connect and send data actually happens during this function call.
Since we have provided the file fifth.cc for you, if you have built your distribution (in debug mode since it uses NS_LOG – recall that optimized builds optimize out NS_LOG) it will be waiting for you to run.

$ ./waf --run fifth
Waf: Entering directory `/home/craigdo/repos/ns-3-allinone-dev/ns-3-dev/build'
Waf: Leaving directory `/home/craigdo/repos/ns-3-allinone-dev/ns-3-dev/build'
'build' finished successfully (0.684s)
1       536
1.0093  1072
1.01528 1608
1.02167 2144
...
1.11319 8040
1.12151 8576
1.12983 9112
RxDrop at 1.13696
...
You can probably see immediately a downside of using prints of any kind in your traces. We get those extraneous waf messages printed all over our interesting information along with those RxDrop messages. We will remedy that soon, but I’m sure you can’t wait to see the results of all of this work. Let’s redirect that output to a file called cwnd.dat:

$ ./waf --run fifth > cwnd.dat 2>&1
Now edit up “cwnd.dat” in your favorite editor and remove the waf build status and drop lines, leaving only the traced data (you could also comment out the TraceConnectWithoutContext ("PhyRxDrop", MakeCallback (&RxDrop)); in the script to get rid of the drop prints just as easily).

You can now run gnuplot (if you have it installed) and tell it to generate some pretty pictures:

$ gnuplot
gnuplot> set terminal png size 640,480
gnuplot> set output "cwnd.png"
gnuplot> plot "cwnd.dat" using 1:2 title 'Congestion Window' with linespoints
gnuplot> exit
You should now have a graph of the congestion window versus time sitting in the file “cwnd.png” that looks like:
In the previous section, we showed how to hook a trace source and get hopefully interesting information out of a simulation. Perhaps you will recall that we called logging to the standard output using std::cout a “blunt instrument” much earlier in this chapter. We also wrote about how it was a problem having to parse the log output in order to isolate interesting information. It may have occurred to you that we just spent a lot of time implementing an example that exhibits all of the problems we purport to fix with the ns-3 tracing system! You would be correct. But, bear with us. We’re not done yet.

One of the most important things we want to do is to have the ability to easily control the amount of output coming out of the simulation; and we also want to save those data to a file so we can refer back to it later. We can use the mid-level trace helpers provided in ns-3 to do just that and complete the picture.
We provide a script that writes the cwnd change and drop events developed in the example fifth.cc to disk inseparate files. The cwnd changes are stored as a tab-separated ASCII file and the drop events are stored in a PCAPfile. The changes to make this happen are quite small.
Walkthrough: sixth.cc
Let’s take a look at the changes required to go from fifth.cc to sixth.cc. Open examples/tutorial/sixth.cc in your favorite editor. You can see the first change by searching for CwndChange. You will find that we have changed the signatures for the trace sinks and have added a single line to each sink that writes the traced information to a stream representing a file.
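The CwndChange sink in sixth.cc then looks roughly like the following sketch (compare with the version in fifth.cc; the extra first parameter is discussed just below):

static void
CwndChange (Ptr<OutputStreamWrapper> stream, uint32_t oldCwnd, uint32_t newCwnd)
{
  NS_LOG_UNCOND (Simulator::Now ().GetSeconds () << "\t" << newCwnd);
  *stream->GetStream () << Simulator::Now ().GetSeconds () << "\t" << oldCwnd
                        << "\t" << newCwnd << std::endl;
}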
static void
RxDrop (Ptr<PcapFileWrapper> file, Ptr<const Packet> p)
{
  NS_LOG_UNCOND ("RxDrop at " << Simulator::Now ().GetSeconds ());
  file->Write (Simulator::Now (), p);
}
We have added a “stream” parameter to the CwndChange trace sink. This is an object that holds (keeps safely alive) a C++ output stream. It turns out that this is a very simple object, but one that manages lifetime issues for the stream and solves a problem that even experienced C++ users run into. It turns out that the copy constructor for std::ostream is marked private. This means that std::ostreams do not obey value semantics and cannot be used in any mechanism that requires the stream to be copied. This includes the ns-3 callback system, which, as you may recall, requires objects that obey value semantics. Further notice that we have added the following line in the CwndChange trace sink implementation:

*stream->GetStream () << Simulator::Now ().GetSeconds () << "\t" << oldCwnd << "\t" << newCwnd << std::endl;
This would be very familiar code if you replaced *stream->GetStream () with std::cout, as in:std::cout << Simulator::Now ().GetSeconds () << "\t" << oldCwnd << "\t" << newCwnd << std::endl;
This illustrates that the Ptr<OutputStreamWrapper> is really just carrying around a std::ofstream for you, and you can use it here like any other output stream.

A similar situation happens in RxDrop except that the object being passed around (a Ptr<PcapFileWrapper>) represents a PCAP file. There is a one-liner in the trace sink to write a timestamp and the contents of the packet being dropped to the PCAP file:

file->Write (Simulator::Now (), p);
Of course, if we have objects representing the two files, we need to create them somewhere and also cause them to be passed to the trace sinks. If you look in the main function, you will find new code to do just that:
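The first section of that code creates the ASCII trace file and binds it into the callback; a sketch along the lines of sixth.cc (the second section, which creates the PCAP file, follows it):

AsciiTraceHelper asciiTraceHelper;
Ptr<OutputStreamWrapper> stream = asciiTraceHelper.CreateFileStream ("sixth.cwnd");
ns3TcpSocket->TraceConnectWithoutContext ("CongestionWindow",
                                          MakeBoundCallback (&CwndChange, stream));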
PcapHelper pcapHelper;
Ptr<PcapFileWrapper> file = pcapHelper.CreateFile ("sixth.pcap", "w", PcapHelper::DLT_PPP);
devices.Get (1)->TraceConnectWithoutContext ("PhyRxDrop", MakeBoundCallback (&RxDrop, file));
In the first section of the code snippet above, we are creating the ASCII trace file, creating an object responsible for managing it and using a variant of the callback creation function to arrange for the object to be passed to the sink. Our ASCII trace helpers provide a rich set of functions to make using text (ASCII) files easy. We are just going to illustrate the use of the file stream creation function here.

The CreateFileStream function is basically going to instantiate a std::ofstream object and create a new file (or truncate an existing file). This std::ofstream is packaged up in an ns-3 object for lifetime management and copy constructor issue resolution.

We then take this ns-3 object representing the file and pass it to MakeBoundCallback(). This function creates a callback just like MakeCallback(), but it “binds” a new value to the callback. This value is added as the first argument to the callback before it is called.

Essentially, MakeBoundCallback(&CwndChange, stream) causes the trace source to add the additional “stream” parameter to the front of the formal parameter list before invoking the callback. This changes the required signature of the CwndChange sink to match the one shown above, which includes the “extra” parameter Ptr<OutputStreamWrapper> stream.

In the second section of code in the snippet above, we instantiate a PcapHelper to do the same thing for our PCAP trace file that we did with the AsciiTraceHelper. The line of code,

Ptr<PcapFileWrapper> file = pcapHelper.CreateFile ("sixth.pcap", "w", PcapHelper::DLT_PPP);
creates a PCAP file named “sixth.pcap” with file mode “w”. This means that the new file is truncated (contents deleted)if an existing file with that name is found. The final parameter is the “data link type” of the new PCAP file. These arethe same as the PCAP library data link types defined in bpf.h if you are familiar with PCAP. In this case, DLT_PPPindicates that the PCAP file is going to contain packets prefixed with point to point headers. This is true since thepackets are coming from our point-to-point device driver. Other common data link types are DLT_EN10MB (10 MBEthernet) appropriate for csma devices and DLT_IEEE802_11 (IEEE 802.11) appropriate for wifi devices. These aredefined in src/network/helper/trace-helper.h if you are interested in seeing the list. The entries in thelist match those in bpf.h but we duplicate them to avoid a PCAP source dependence.A ns-3 object representing the PCAP file is returned from CreateFile and used in a bound callback exactly as itwas in the ASCII case.An important detour: It is important to notice that even though both of these objects are declared in very similar ways,Ptr<PcapFileWrapper> file ...Ptr<OutputStreamWrapper> stream ...
The underlying objects are entirely different. For example, the Ptr<PcapFileWrapper> is a smart pointer toan ns-3 Object that is a fairly heavyweight thing that supports Attributes and is integrated into the Config system.The Ptr<OutputStreamWrapper>, on the other hand, is a smart pointer to a reference counted object that is avery lightweight thing. Remember to look at the object you are referencing before making any assumptions about the“powers” that object may have.For example, take a look at src/network/utils/pcap-file-wrapper.h in the distribution and notice,class PcapFileWrapper : public Object
that class PcapFileWrapper is an ns-3 Object by virtue of its inheritance. Then look at src/network/model/output-stream-wrapper.h and notice,

class OutputStreamWrapper : public SimpleRefCount<OutputStreamWrapper>
that this object is not an ns-3 Object at all, it is “merely” a C++ object that happens to support intrusive referencecounting.The point here is that just because you read Ptr<something> it does not necessarily mean that something is anns-3 Object on which you can hang ns-3 Attributes, for example.Now, back to the example. If you build and run this example,$ ./waf --run sixth
you will see the same messages appear as when you ran “fifth”, but two new files will appear in the top-level directoryof your ns-3 distribution.sixth.cwnd sixth.pcap
Since “sixth.cwnd” is an ASCII text file, you can view it with cat or your favorite file viewer.

1       0     536
1.0093  536   1072
1.01528 1072  1608
1.02167 1608  2144
...
9.69256 5149  5204
9.89311 5204  5259
You have a tab separated file with a timestamp, an old congestion window and a new congestion window suitable for directly importing into your plot program. There are no extraneous prints in the file, no parsing or editing is required.

Since “sixth.pcap” is a PCAP file, you can view it with tcpdump.

reading from file sixth.pcap, link-type PPP (PPP)
1.136956 IP 10.1.1.1.49153 > 10.1.1.2.8080: Flags [.], seq 17177:17681, ack 1, win 32768, options [TS
1.403196 IP 10.1.1.1.49153 > 10.1.1.2.8080: Flags [.], seq 33280:33784, ack 1, win 32768, options [TS
...
7.426220 IP 10.1.1.1.49153 > 10.1.1.2.8080: Flags [.], seq 785704:786240, ack 1, win 32768, options [
9.630693 IP 10.1.1.1.49153 > 10.1.1.2.8080: Flags [.], seq 882688:883224, ack 1, win 32768, options [
You have a PCAP file with the packets that were dropped in the simulation. There are no other packets present in the file and there is nothing else present to make life difficult.

It’s been a long journey, but we are now at a point where we can appreciate the ns-3 tracing system. We have pulled important events out of the middle of a TCP implementation and a device driver. We stored those events directly in files usable with commonly known tools. We did this without modifying any of the core code involved, and we did this in only 18 lines of code:
static void
PcapHelper pcapHelper;
Ptr<PcapFileWrapper> file = pcapHelper.CreateFile ("sixth.pcap", "w", PcapHelper::DLT_PPP);
devices.Get (1)->TraceConnectWithoutContext ("PhyRxDrop", MakeBoundCallback (&RxDrop, file));
The ns-3 trace helpers provide a rich environment for configuring and selecting different trace events and writing them to files. In previous sections, primarily Building Topologies, we have seen several varieties of the trace helper methods designed for use inside other (device) helpers.

Perhaps you will recall seeing some of these variations:

pointToPoint.EnablePcapAll ("second");
pointToPoint.EnablePcap ("second", p2pNodes.Get (0)->GetId (), 0);
csma.EnablePcap ("third", csmaDevices.Get (0), true);
pointToPoint.EnableAsciiAll (ascii.CreateFileStream ("myfirst.tr"));
What may not be obvious, though, is that there is a consistent model for all of the trace-related methods found in the system. We will now take a little time and take a look at the “big picture”.

There are currently two primary use cases of the tracing helpers in ns-3: device helpers and protocol helpers. Device helpers look at the problem of specifying which traces should be enabled through a (node, device) pair. For example, you may want to specify that PCAP tracing should be enabled on a particular device on a specific node. This follows from the ns-3 device conceptual model, and also the conceptual models of the various device helpers. Following naturally from this, the files created follow a <prefix>-<node>-<device> naming convention.

Protocol helpers look at the problem of specifying which traces should be enabled through a protocol and interface pair. This follows from the ns-3 protocol stack conceptual model, and also the conceptual models of internet stack helpers. Naturally, the trace files should follow a <prefix>-<protocol>-<interface> naming convention.

The trace helpers therefore fall naturally into a two-dimensional taxonomy. There are subtleties that prevent all four classes from behaving identically, but we do strive to make them all work as similarly as possible; and whenever possible there are analogs for all methods in all classes.

                  PCAP   ASCII
Device Helper      X       X
Protocol Helper    X       X

We use an approach called a mixin to add tracing functionality to our helper classes. A mixin is a class that provides functionality when it is inherited by a subclass. Inheriting from a mixin is not considered a form of specialization but is really a way to collect functionality.

Let’s take a quick look at all four of these cases and their respective mixins.
PCAP
The goal of these helpers is to make it easy to add a consistent PCAP trace facility to an ns-3 device. We want all of the various flavors of PCAP tracing to work the same across all devices, so the methods of these helpers are inherited by device helpers. Take a look at src/network/helper/trace-helper.h if you want to follow the discussion while looking at real code.

The class PcapHelperForDevice is a mixin that provides the high level functionality for using PCAP tracing in an ns-3 device. Every device must implement a single virtual method inherited from this class.

virtual void EnablePcapInternal (std::string prefix, Ptr<NetDevice> nd, bool promiscuous, bool explicitFilename) = 0;
The signature of this method reflects the device-centric view of the situation at this level. All of the public methods inherited from class PcapHelperForDevice reduce to calling this single device-dependent implementation method. For example, the lowest level PCAP method,

void EnablePcap (std::string prefix, Ptr<NetDevice> nd, bool promiscuous = false, bool explicitFilename = false);
will call the device implementation of EnablePcapInternal directly. All other public PCAP tracing methods build on this implementation to provide additional user-level functionality. What this means to the user is that all device helpers in the system will have all of the PCAP trace methods available; and these methods will all work in the same way across devices if the device implements EnablePcapInternal correctly.
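To make the mixin idea concrete, here is a sketch of how a hypothetical device helper might pick up the PCAP methods. The helper name is ours; the real device helpers (PointToPointHelper, CsmaHelper, and so on) follow this pattern:

#include "ns3/trace-helper.h"
#include "ns3/net-device.h"

using namespace ns3;

// Inheriting from the mixin gives this helper EnablePcap, EnablePcapAll, etc.
// The helper only has to say how to attach a PCAP trace to one of *its* devices.
class MyDeviceHelper : public PcapHelperForDevice
{
private:
  virtual void EnablePcapInternal (std::string prefix, Ptr<NetDevice> nd,
                                   bool promiscuous, bool explicitFilename)
  {
    // Device-specific work goes here: typically create a PcapFileWrapper with
    // PcapHelper and hook the device's sniffer trace source.
  }
};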
Methods
void EnablePcap (std::string prefix, Ptr<NetDevice> nd, bool promiscuous = false, bool explicitFilename = false);
void EnablePcap (std::string prefix, std::string ndName, bool promiscuous = false, bool explicitFilename = false);
void EnablePcap (std::string prefix, NetDeviceContainer d, bool promiscuous = false);
void EnablePcap (std::string prefix, NodeContainer n, bool promiscuous = false);
void EnablePcap (std::string prefix, uint32_t nodeid, uint32_t deviceid, bool promiscuous = false);
void EnablePcapAll (std::string prefix, bool promiscuous = false);
In each of the methods shown above, there is a default parameter called promiscuous that defaults to false. This parameter indicates that the trace should not be gathered in promiscuous mode. If you do want your traces to include all traffic seen by the device (and if the device supports a promiscuous mode) simply add a true parameter to any of the calls above. For example,

  Ptr<NetDevice> nd;
  ...
  helper.EnablePcap ("prefix", nd, true);
• You can enable PCAP tracing on a particular node/net-device pair by providing a std::string representing an object name service string to an EnablePcap method. The Ptr<NetDevice> is looked up from the name string. Again, the <Node> is implicit since the named net device must belong to exactly one Node. For example,

  Names::Add ("server" ...);
  Names::Add ("server/eth0" ...);
  ...
  helper.EnablePcap ("prefix", "server/eth0");
• You can enable PCAP tracing on a collection of node/net-device pairs by providing a NetDeviceContainer. Tracing is enabled for each device in the container of the proper type (the same type as is managed by the device helper). For example,

  NetDeviceContainer d = ...;
  ...
  helper.EnablePcap ("prefix", d);
• You can enable PCAP tracing on a collection of nodes by providing a NodeContainer. For each Node in the container, tracing is enabled on the attached devices of the proper type. For example,

  NodeContainer n = ...;
  ...
  helper.EnablePcap ("prefix", n);
• You can enable PCAP tracing on the basis of Node ID and device ID as well as with explicit Ptr. Each Node in the system has an integer Node ID and each device connected to a Node has an integer device ID. helper.EnablePcap ("prefix", 21, 1);
• Finally, you can enable PCAP tracing for all devices in the system, with the same type as that managed by the device helper. helper.EnablePcapAll ("prefix");
Filenames
Implicit in the method descriptions above is the construction of a complete filename by the implementation method. By convention, PCAP traces in the ns-3 system are of the form <prefix>-<node id>-<device id>.pcap

As previously mentioned, every Node in the system will have a system-assigned Node id; and every device will have an interface index (also called a device id) relative to its node. By default, then, a PCAP trace file created as a result of enabling tracing on the first device of Node 21 using the prefix “prefix” would be prefix-21-1.pcap.

You can always use the ns-3 object name service to make this more clear. For example, if you use the object name service to assign the name “server” to Node 21, the resulting PCAP trace file name will automatically become prefix-server-1.pcap, and if you also assign the name “eth0” to the device, your PCAP file name will automatically pick this up and be called prefix-server-eth0.pcap.

Finally, two of the methods shown above,

void EnablePcap (std::string prefix, Ptr<NetDevice> nd, bool promiscuous = false, bool explicitFilename = false);
void EnablePcap (std::string prefix, std::string ndName, bool promiscuous = false, bool explicitFilename = false);
have a default parameter called explicitFilename. When set to true, this parameter disables the automaticfilename completion mechanism and allows you to create an explicit filename. This option is only available in themethods which enable PCAP tracing on a single device.For example, in order to arrange for a device helper to create a single promiscuous PCAP capture file of a specificname my-pcap-file.pcap on a given device, one could:Ptr<NetDevice> nd;...helper.EnablePcap ("my-pcap-file.pcap", nd, true, true);
The first true parameter enables promiscuous mode traces and the second tells the helper to interpret the prefixparameter as a complete filename.
ASCII
The behavior of the ASCII trace helper mixin is substantially similar to the PCAP version. Take a look atsrc/network/helper/trace-helper.h if you want to follow the discussion while looking at real code.The class AsciiTraceHelperForDevice adds the high level functionality for using ASCII tracing to a devicehelper class. As in the PCAP case, every device must implement a single virtual method inherited from the ASCIItrace mixin.virtual void EnableAsciiInternal (Ptr<OutputStreamWrapper> stream, std::string prefix, Ptr<NetDevice> nd, bool explicitFilename) = 0;
The signature of this method reflects the device-centric view of the situation at this level; and also the fact that thehelper may be writing to a shared output stream. All of the public ASCII-trace-related methods inherited from classAsciiTraceHelperForDevice reduce to calling this single device- dependent implementation method. Forexample, the lowest level ascii trace methods,void EnableAscii (std::string prefix, Ptr<NetDevice> nd, bool explicitFilename = false);void EnableAscii (Ptr<OutputStreamWrapper> stream, Ptr<NetDevice> nd);
will call the device implementation of EnableAsciiInternal directly, providing either a valid prefix or stream.All other public ASCII tracing methods will build on these low-level functions to provide additional user-levelfunctionality. What this means to the user is that all device helpers in the system will have all of the ASCIItrace methods available; and these methods will all work in the same way across devices if the devices implementEnablAsciiInternal correctly.
void EnableAscii (std::string prefix, uint32_t nodeid, uint32_t deviceid, bool explicitFilename);void EnableAscii (Ptr<OutputStreamWrapper> stream, uint32_t nodeid, uint32_t deviceid);
You are encouraged to peruse the API Documentation for class AsciiTraceHelperForDevice to find the detailsof these methods; but to summarize ... • There are twice as many methods available for ASCII tracing as there were for PCAP tracing. This is because, in addition to the PCAP-style model where traces from each unique node/device pair are written to a unique file, we support a model in which trace information for many node/device pairs is written to a common file. This means that the <prefix>-<node>-<device> file name generation mechanism is replaced by a mechanism to refer to a common file; and the number of API methods is doubled to allow all combinations. • Just as in PCAP tracing, you can enable ASCII tracing on a particular (node, net-device) pair by providing a Ptr<NetDevice> to an EnableAscii method. The Ptr<Node> is implicit since the net device must belong to exactly one Node. For example,Ptr<NetDevice> nd;...helper.EnableAscii ("prefix", nd);
• The first four methods also include a default parameter called explicitFilename that operates similarly to the equivalent parameters in the PCAP case.

• If you do not want a separate trace file for each net device and would rather have all traces sent to a single file, you can do that as well by using an object to refer to a single file. We have already seen this in the “cwnd” example above:

  Ptr<NetDevice> nd1;
  Ptr<NetDevice> nd2;
  ...
  Ptr<OutputStreamWrapper> stream = asciiTraceHelper.CreateFileStream ("trace-file-name.tr");
  ...
  helper.EnableAscii (stream, nd1);
  helper.EnableAscii (stream, nd2);
  In this case, trace contexts are written to the ASCII trace file since they are required to disambiguate traces from the two devices. Note that since the user is completely specifying the file name, the string should include the .tr suffix for consistency.

• You can enable ASCII tracing on a particular (node, net-device) pair by providing a std::string representing an object name service string to an EnableAscii method. The Ptr<NetDevice> is looked up from the name string. Again, the <Node> is implicit since the named net device must belong to exactly one Node. For example,

  Names::Add ("client" ...);
  Names::Add ("client/eth0" ...);
  Names::Add ("server" ...);
  Names::Add ("server/eth0" ...);
  ...
  helper.EnableAscii ("prefix", "client/eth0");
  helper.EnableAscii ("prefix", "server/eth0");
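  Since the EnableAscii methods are also overloaded to take a stream wrapper, the name-string form can be combined with a single shared file as well; a sketch paralleling the Ipv4 example shown later in this section:

  Names::Add ("client" ...);
  Names::Add ("client/eth0" ...);
  Names::Add ("server" ...);
  Names::Add ("server/eth0" ...);
  ...
  Ptr<OutputStreamWrapper> stream = asciiTraceHelper.CreateFileStream ("trace-file-name.tr");
  ...
  helper.EnableAscii (stream, "client/eth0");
  helper.EnableAscii (stream, "server/eth0");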
  This would result in a single trace file called trace-file-name.tr that contains all of the trace events for both devices. The events would be disambiguated by trace context strings.

• You can enable ASCII tracing on a collection of node/net-device pairs by providing a NetDeviceContainer. Tracing is enabled for each device in the container of the proper type (the same type as is managed by the device helper). For example,

  NetDeviceContainer d = ...;
  ...
  helper.EnableAscii ("prefix", d);
Combining all of the traces into a single file is accomplished similarly to the examples above: NetDeviceContainer d = ...; ... Ptr<OutputStreamWrapper> stream = asciiTraceHelper.CreateFileStream ("trace-file-name.tr"); ... helper.EnableAscii (stream, d);
• You can enable ASCII tracing on the basis of Node ID and device ID as well as with an explicit Ptr. Each Node in the system has an integer Node ID and each device connected to a Node has an integer device ID.

  helper.EnableAscii ("prefix", 21, 1);
Of course, the traces can be combined into a single file as shown above.
• Finally, you can enable ASCII tracing for all devices in the system, with the same type as that managed by the device helper.

  helper.EnableAsciiAll ("prefix");
This would result in a number of ASCII trace files being created, one for every device in the system of the type managed by the helper. All of these files will follow the <prefix>-<node id>-<device id>.tr convention.

As previously mentioned, every Node in the system will have a system-assigned Node id; and every device will have an interface index (also called a device id) relative to its node. By default, then, an ASCII trace file created as a result of enabling tracing on the first device of Node 21, using the prefix “prefix”, would be prefix-21-1.tr.

You can always use the ns-3 object name service to make this more clear. For example, if you use the object name service to assign the name “server” to Node 21, the resulting ASCII trace file name will automatically become prefix-server-1.tr, and if you also assign the name “eth0” to the device, your ASCII trace file name will automatically pick this up and be called prefix-server-eth0.tr.

The goal of the protocol helper mixins is to make it easy to add a consistent PCAP trace facility to protocols. We want all of the various flavors of PCAP tracing to work the same across all protocols, so the methods of these helpers are inherited by stack helpers. Take a look at src/network/helper/trace-helper.h if you want to follow the discussion while looking at real code. The discussion here uses Ipv4; for similar protocols such as Ipv6, substitute the corresponding type and call EnablePcapIpv6 instead of EnablePcapIpv4.

The class PcapHelperForIpv4 provides the high level functionality for using PCAP tracing in the Ipv4 protocol. Each protocol helper enabling these methods must implement a single virtual method inherited from this class. There will be a separate implementation for Ipv6, for example, but the only difference will be in the method names and signatures. Different method names are required to disambiguate class Ipv4 from Ipv6, which are both derived from class Object, and methods that share the same signature.

virtual void EnablePcapIpv4Internal (std::string prefix, Ptr<Ipv4> ipv4, uint32_t interface, bool explicitFilename) = 0;
The signature of this method reflects the protocol and interface-centric view of the situation at this level. All of thepublic methods inherited from class PcapHelperForIpv4 reduce to calling this single device-dependent imple-mentation method. For example, the lowest level PCAP method,
void EnablePcapIpv4 (std::string prefix, Ptr<Ipv4> ipv4, uint32_t interface, bool explicitFilename =
will call the device implementation of EnablePcapIpv4Internal directly. All other public PCAP tracing meth-ods build on this implementation to provide additional user-level functionality. What this means to the user is that allprotocol helpers in the system will have all of the PCAP trace methods available; and these methods will all work inthe same way across protocols if the helper implements EnablePcapIpv4Internal correctly.
These methods are designed to be in one-to-one correspondence with the Node- and NetDevice-centric versions of the device methods. Instead of Node and NetDevice pair constraints, we use protocol and interface constraints. Note that just like in the device version, there are six methods:

void EnablePcapIpv4 (std::string prefix, Ptr<Ipv4> ipv4, uint32_t interface, bool explicitFilename = false);
void EnablePcapIpv4 (std::string prefix, std::string ipv4Name, uint32_t interface, bool explicitFilename = false);
void EnablePcapIpv4 (std::string prefix, Ipv4InterfaceContainer c);
void EnablePcapIpv4 (std::string prefix, NodeContainer n);
void EnablePcapIpv4 (std::string prefix, uint32_t nodeid, uint32_t interface, bool explicitFilename);
void EnablePcapIpv4All (std::string prefix);
You are encouraged to peruse the API Documentation for class PcapHelperForIpv4 to find the details of thesemethods; but to summarize ... • You can enable PCAP tracing on a particular protocol/interface pair by providing a Ptr<Ipv4> and interface to an EnablePcap method. For example, Ptr<Ipv4> ipv4 = node->GetObject<Ipv4> (); ... helper.EnablePcapIpv4 ("prefix", ipv4, 0);
• You can enable PCAP tracing on a particular node/net-device pair by providing a std::string representing an object name service string to an EnablePcap method. The Ptr<Ipv4> is looked up from the name string. For example,

  Names::Add ("serverIpv4" ...);
  ...
  helper.EnablePcapIpv4 ("prefix", "serverIpv4", 1);
• You can enable PCAP tracing on a collection of nodes by providing a NodeContainer. For each Node in the container, the appropriate protocol is found and tracing is enabled on its interfaces. For example,

  NodeContainer n = ...;
  ...
  helper.EnablePcapIpv4 ("prefix", n);
• You can enable PCAP tracing on the basis of Node ID and interface as well. In this case, the node-id is translated to a Ptr<Node> and the appropriate protocol is looked up in the node. The resulting protocol and interface are used to specify the resulting trace source. helper.EnablePcapIpv4 ("prefix", 21, 1);
• Finally, you can enable PCAP tracing for all interfaces in the system, with associated protocol being the same type as that managed by the device helper. helper.EnablePcapIpv4All ("prefix");
Implicit in all of the method descriptions above is the construction of the complete filenames by the implementation method. By convention, PCAP traces taken for devices in the ns-3 system are of the form “<prefix>-<node id>-<device id>.pcap”. In the case of protocol traces, there is a one-to-one correspondence between protocols and Nodes. This is because protocol Objects are aggregated to Node Objects. Since there is no global protocol id in the system, we use the corresponding Node id in file naming. Therefore there is a possibility for file name collisions in automatically chosen trace file names. For this reason, the file name convention is changed for protocol traces.

As previously mentioned, every Node in the system will have a system-assigned Node id. Since there is a one-to-one correspondence between protocol instances and Node instances we use the Node id. Each interface has an interface id relative to its protocol. We use the convention “<prefix>-n<node id>-i<interface id>.pcap” for trace file naming in protocol helpers.

Therefore, by default, a PCAP trace file created as a result of enabling tracing on interface 1 of the Ipv4 protocol of Node 21 using the prefix “prefix” would be “prefix-n21-i1.pcap”.

You can always use the ns-3 object name service to make this more clear. For example, if you use the object name service to assign the name “serverIpv4” to the Ptr<Ipv4> on Node 21, the resulting PCAP trace file name will automatically become “prefix-nserverIpv4-i1.pcap”.

The behavior of the ASCII trace helpers is substantially similar to the PCAP case. Take a look at src/network/helper/trace-helper.h if you want to follow the discussion while looking at real code. Again, the discussion here uses Ipv4; for Ipv6 you would call EnableAsciiIpv6 instead of EnableAsciiIpv4.

The class AsciiTraceHelperForIpv4 adds the high level functionality for using ASCII tracing to a protocol helper. Each protocol that enables these methods must implement a single virtual method inherited from this class.

virtual void EnableAsciiIpv4Internal (Ptr<OutputStreamWrapper> stream, std::string prefix, Ptr<Ipv4> ipv4, uint32_t interface, bool explicitFilename) = 0;
The signature of this method reflects the protocol- and interface-centric view of the situation at this level; and also the fact that the helper may be writing to a shared output stream. All of the public methods inherited from class AsciiTraceHelperForIpv4 reduce to calling this single device-dependent implementation method. For example, the lowest level ASCII trace methods,

void EnableAsciiIpv4 (std::string prefix, Ptr<Ipv4> ipv4, uint32_t interface, bool explicitFilename = false);
void EnableAsciiIpv4 (Ptr<OutputStreamWrapper> stream, Ptr<Ipv4> ipv4, uint32_t interface);
will call the device implementation of EnableAsciiIpv4Internal directly, providing either the prefix or the stream. All other public ASCII tracing methods will build on these low-level functions to provide additional user-level functionality. What this means to the user is that all protocol helpers in the system will have all of the ASCII trace methods available; and these methods will all work in the same way across protocols if the protocols implement EnableAsciiIpv4Internal correctly.
void EnableAsciiIpv4 (std::string prefix, Ptr<Ipv4> ipv4, uint32_t interface, bool explicitFilename =void EnableAsciiIpv4 (Ptr<OutputStreamWrapper> stream, Ptr<Ipv4> ipv4, uint32_t interface);
void EnableAsciiIpv4 (std::string prefix, std::string ipv4Name, uint32_t interface, bool explicitFilevoid EnableAsciiIpv4 (Ptr<OutputStreamWrapper> stream, std::string ipv4Name, uint32_t interface);
void EnableAsciiIpv4 (std::string prefix, uint32_t nodeid, uint32_t deviceid, bool explicitFilename);void EnableAsciiIpv4 (Ptr<OutputStreamWrapper> stream, uint32_t nodeid, uint32_t interface);
You are encouraged to peruse the API Documentation for class AsciiTraceHelperForIpv4 to find the details of these methods; but to summarize ...

• There are twice as many methods available for ASCII tracing as there were for PCAP tracing. This is because, in addition to the PCAP-style model where traces from each unique protocol/interface pair are written to a unique file, we support a model in which trace information for many protocol/interface pairs is written to a common file. This means that the <prefix>-n<node id>-<interface> file name generation mechanism is replaced by a mechanism to refer to a common file; and the number of API methods is doubled to allow all combinations.

• Just as in PCAP tracing, you can enable ASCII tracing on a particular protocol/interface pair by providing a Ptr<Ipv4> and an interface to an EnableAscii method. For example,

  Ptr<Ipv4> ipv4;
  ...
  helper.EnableAsciiIpv4 ("prefix", ipv4, 1);

• If you do not want a separate trace file for each protocol/interface pair and would rather have all traces sent to a single file, you can do that as well by using an object to refer to a single file. We have already seen something similar to this in the “cwnd” example above:
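  A sketch of that form, paralleling the device-helper example above (the protocol pointers here are illustrative):

  Ptr<Ipv4> protocol1;
  Ptr<Ipv4> protocol2;
  ...
  Ptr<OutputStreamWrapper> stream = asciiTraceHelper.CreateFileStream ("trace-file-name.tr");
  ...
  helper.EnableAsciiIpv4 (stream, protocol1, 1);
  helper.EnableAsciiIpv4 (stream, protocol2, 1);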
In this case, trace contexts are written to the ASCII trace file since they are required to disambiguate traces from the two interfaces. Note that since the user is completely specifying the file name, the string should include the ”,tr” for consistency. • You can enable ASCII tracing on a particular protocol by providing a std::string representing an object name service string to an EnablePcap method. The Ptr<Ipv4> is looked up from the name string. The <Node> in the resulting filenames is implicit since there is a one-to-one correspondence between protocol instances and nodes, For example, Names::Add ("node1Ipv4" ...); Names::Add ("node2Ipv4" ...); ... helper.EnableAsciiIpv4 ("prefix", "node1Ipv4", 1); helper.EnableAsciiIpv4 ("prefix", "node2Ipv4", 1);
This would result in two files named “prefix-nnode1Ipv4-i1.tr” and “prefix-nnode2Ipv4-i1.tr” with traces for each interface in the respective trace file. Since all of the EnableAscii functions are overloaded to take a stream wrapper, you can use that form as well: Names::Add ("node1Ipv4" ...); Names::Add ("node2Ipv4" ...); ... Ptr<OutputStreamWrapper> stream = asciiTraceHelper.CreateFileStream ("trace-file-name.tr"); ... helper.EnableAsciiIpv4 (stream, "node1Ipv4", 1); helper.EnableAsciiIpv4 (stream, "node2Ipv4", 1);
This would result in a single trace file called “trace-file-name.tr” that contains all of the trace events for both interfaces. The events would be disambiguated by trace context strings. • You can enable ASCII tracing on a collection of protocol/interface pairs by providing an Ipv4InterfaceContainer. For each protocol of the proper type (the same type as is managed by the device helper), tracing is enabled for the corresponding interface. Again, the <Node> is implicit since there is a one-to-one correspondence between each protocol and its node. For example, NodeContainer nodes; ... NetDeviceContainer devices = deviceHelper.Install (nodes); ... Ipv4AddressHelper ipv4; ipv4.SetBase ("10.1.1.0", "255.255.255.0"); Ipv4InterfaceContainer interfaces = ipv4.Assign (devices); ... ... helper.EnableAsciiIpv4 ("prefix", interfaces);
This would result in a number of ASCII trace files being created, each of which follows the <prefix>-n<node id>-i<interface>.tr convention. Combining all of the traces into a single file is accomplished similarly to the examples above:
NodeContainer nodes; ... NetDeviceContainer devices = deviceHelper.Install (nodes); ... Ipv4AddressHelper ipv4; ipv4.SetBase ("10.1.1.0", "255.255.255.0"); Ipv4InterfaceContainer interfaces = ipv4.Assign (devices); ... Ptr<OutputStreamWrapper> stream = asciiTraceHelper.CreateFileStream ("trace-file-name.tr"); ... helper.EnableAsciiIpv4 (stream, interfaces);
• You can enable ASCIIAsciiIpv. In this case, the node-id is translated to a Ptr<Node> and the appropriate protocol is looked up in the node. The resulting protocol and interface are used to specify the resulting trace source. helper.EnableAsciiIpv4 ("prefix", 21, 1);
Of course, the traces can be combined into a single file as shown above. • Finally, you can enable ASCII tracing for all interfaces in the system, with associated protocol being the same type as that managed by the device helper. helper.EnableAsciiIpv4All ("prefix");
This would result in a number of ASCII trace files being created, one for every interface in the system related to a protocol of the type managed by the helper. All of these files will follow the <prefix>-n<node id>-i<interface id>.tr”As previously mentioned, every Node in the system will have a system-assigned Node id. Since there is a one-to-onecorrespondence between protocols and nodes we use to node-id to identify the protocol identity. Every interface on agiven protocol will have an interface index (also called simply an interface) relative to its protocol. By default, then,an ASCII trace file created as a result of enabling tracing on the first device of Node 21, using the prefix “prefix”,would be “prefix-n21-i1.tr”. Use the prefix to disambiguate multiple protocols per node.You can always use the ns-3 object name service to make this more clear. For example, if you use the object nameservice to assign the name “serverIpv4” to the protocol on Node 21, and also specify interface one, the resulting ASCIItrace file name will automatically become, “prefix-nserverIpv4-1.tr”.Several of the methods have a default parameter called explicitFilename. When set to true, this parameterdisables the automatic filename completion mechanism and allows you to create an explicit filename. This option is
only available in the methods which take a prefix and enable tracing on a single device.
7.5 Summary
ns-3 includes an extremely rich environment allowing users at several levels to customize the kinds of information thatcan be extracted from simulations.There are high-level helper functions that allow users to simply control the collection of pre-defined outputs to a finegranularity. There are mid-level helper functions to allow more sophisticated users to customize how information isextracted and saved; and there are low-level core functions to allow expert users to alter the system to present new andpreviously unexported information in a way that will be immediately accessible to users at higher levels.This is a very comprehensive system, and we realize that it is a lot to digest, especially for new users or those notintimately familiar with C++ and its idioms. We do consider the tracing system a very important part of ns-3 and sorecommend becoming as familiar as possible with it. It is probably the case that understanding the rest of the ns-3system will be quite simple once you have mastered the tracing system
EIGHT
DATA COLLECTION
Our final tutorial chapter introduces some components that were added to ns-3 in version 3.18, and that are still underdevelopment. This tutorial section is also a work-in-progress.
8.1 Motivation
One of the main points of running simulations is to generate output data, either for research purposes or simply to learnabout the system. In the previous chapter, we introduced the tracing subsystem and the example sixth.cc. fromwhich PCAP or ASCII trace files are generated. These traces are valuable for data analysis using a variety of externaltools, and for many users, such output data is a preferred means of gathering data (for analysis by external tools).However, there are also use cases for more than trace file generation, including the following: • generation of data that does not map well to PCAP or ASCII traces, such as non-packet data (e.g. protocol state machine transitions), • large simulations for which the disk I/O requirements for generating trace files is prohibitive or cumbersome, and •framework; here, we summarize with an example program some of the developing capabilities.
If the user specifies useIpv6, option, the program will be run using IPv6 instead of IPv4. The help option, availableon all ns-3 programs that support the CommandLine object as shown above, can be invoked as follows (please notethe use of double quotes):./waf --run "seventh --help"
113ns-3 Tutorial, Release ns-3.27"
This has been a short digression into IPv6 support and the command line, which was also in-troduced earlier in this tutorial. For a dedicated example of command line usage, please seesrc/core/examples/command-line-example.cc.Now back to data collection. In the examples/tutorial/ directory, type the following command: diff -usixthcreated a number of new output files:seventh-packet-byte-count-0.txtseventh-packet-byte-count-1.txtseventh-packet-byte-count.datseventh-packet-byte-count.pltseventh-packet-byte-count.pngseventh-packet-byte-count.sh
These were created by the additional statements introduced above; in particular, by a GnuplotHelper and a FileHelper.This data was produced by hooking the data collection components to ns-3 trace sources, and marshaling the data intoa formatted gnuplot and into a formatted text file. In the next sections, we’ll review each of these.
8.3 GnuplotHelper
The GnuplotHelper is an ns-3 helper object aimed at the production of gnuplot plots with as few statements aspossible, for common cases. It hooks ns-3 trace sources with data types supported by the data collection system. Notall ns-3 trace sources data types are supported, but many of the common trace types are, including TracedValues withplain old data (POD) types.Let’s look at the output produced by this helper:seventh-packet-byte-count.datseventh-packet-byte-count.pltseventh-packet-byte-count.sh
The first is a gnuplot data file with a series of space-delimited timestamps and packet byte counts. We’llcover how this particular data output was configured below, but let’s continue with the output files. Thefile seventh-packet-byte-count.plt is a gnuplot plot file, that can be opened from within gnuplot.
Readers who understand gnuplot syntax can see that this will produce a formatted output PNG file namedseventh-packet-byte-count.png. Finally, a small shell script seventh-packet-byte-count.shruns this plot file through gnuplot to produce the desired PNG (which can be viewed in an image editor); that is, thecommand:sh seventh-packet-byte-count.sh
will yield seventh-packet-byte-count.png. Why wasn’t this PNG produced in the first place? The answeris that by providing the plt file, the user can hand-configure the result if desired, before producing the PNG.The PNG image title states that this plot is a plot of “Packet Byte Count vs. Time”, and that it is plotting the probeddata corresponding to the trace source path:/NodeList/*/$ns3::Ipv6L3Protocol/Tx
Note the wild-card in the trace path. In summary, what this plot is capturing is the plot of packet bytes observed at thetransmit trace source of the Ipv6L3Protocol object; largely 596-byte TCP segments in one direction, and 60-byte TCPacks in the other (two node trace sources were matched by this trace source).How was this configured? A few statements need to be provided. First, the GnuplotHelper object must be declaredwe declared a few variables for later use:+ std::string probeType;+ std::string tracePath;+ probeType = "ns3::Ipv6PacketProbe";+ tracePath = "/NodeList/*/$ns3::Ipv6L3Protocol/Tx";
The first two arguments are the name of the probe type and the trace source path. These two are prob-ably the hardest to determine when you try to use this framework to plot other traces. The probe tracehere is the Tx trace source of class Ipv6L3Protocol. When we examine this class implementation(src/internet/model/ipv6-l3-protocol.cc) we can observe:Ip tracesource signature), we could have not used this statement (although some more complicated lower-level statementscould have been used, as described in the manual).The Ipv6PacketProbe exports, itself, some trace sources that extract the data out of the probed Packet object:TypeIdIp thiprovide the plot legend for this data series (“Packet Byte Count”), and an optional gnuplot formatting statement(GnuplotAggregator::KEY_BELOW) that we want the plot key to be inserted below the plot. Other options includeNO_KEY, KEY_INSIDE, and KEY_ABOVE.
The following traced values are supported with Probes as of this writing: TracedValue type Probe type File double DoubleProbe stats/model/double-probe.h uint8_t Uinteger8Probe stats/model/uinteger-8-probe.h uint16_t Uinteger16Probe stats/model/uinteger-16-probe.h uint32_t Uinteger32Probe stats/model/uinteger-32-probe.h bool BooleanProbe stats/model/uinteger-16-probe.h ns3::Time TimeProbe stats/model/time-probe.hThe following TraceSource types are supported by Probes as of this writing:
8.5 FileHelper
The FileHelper class is just a variation of the previous GnuplotHelper example. The example program providesformatted output of the same timestamped data, such as follows:Time (Seconds) = 9.312e+00 Packet Byte Count = 596Time (Seconds) = 9.312e+00 Packet Byte Count = 564
Two files are provided, one for node “0” and one for node “1” as can be seen in the filenames. Let’s look at the codepiece includeSPACE_SEPARATED, COMMA_SEPARATED, and TAB_SEPARATED. Users are able to change the formatting (ifFORMATTEDused,two data series were overlaid on the same plot, here, two separate files are written to disk.
8.6 Summary
Data collection support is new as of ns-3.18, and basic support for providing time series output has been added. Thebasic pattern described above may be replicated within the scope of support of the existing probes and trace sources.More capabilities including statistics processing will be added in future releases.
NINE
CONCLUSION
9.1 Futures
This document is intended as a living document. We hope and expect it to grow over time to cover more and more ofthe nuts and bolts of ns-3.Writing manual and tutorial chapters is not something we all get excited about, but it is very important to the project.If you are an expert in one of these areas, please consider contributing to ns-3 by providing one of these chapters; orany other chapter you may think is important.
9.2 Closing
ns-3 is a large and complicated system. It is impossible to cover all of the things you will need to know in one smalltutorial. Readers who want to learn more are encouraged to read the following additional documentation: • The ns-3 manual • The ns-3 model library documentation • The ns-3 Doxygen (API documentation) • The ns-3 wiki– The ns-3 development team.
121
Much more than documents.
Discover everything Scribd has to offer, including books and audiobooks from major publishers.Cancel anytime. | https://www.scribd.com/document/370106434/ns-3-tutorial-ori-pdf | CC-MAIN-2020-10 | en | refinedweb |
C++/CLI specifies several keywords as extensions to ISO C++. The way they are handled
falls into five major categories, where only the first impacts the meaning of existing
ISO C++ programs.
1. Outright reserved words
As of this writing (November 22, 2003, the day after we released the candidate base
document), C++/CLI is down to only three reserved words:
gcnew generic nullptr
An existing program that uses these words as identifiers and wants to use C++/CLI
would have to rename the identifiers. I'll return to these three again at the end.
All the other keywords, below, are contextual keywords that do not conflict with identifiers.
Any legal ISO C++ program that already uses the names below as identifiers will continue
to work as before; these keywords are not reserved words.
2. Spaced keywords
One implementation technique we are using is to specify some keywords that include
embedded whitespace. These are safe: They can't possibly conflict with any user identifiers
because no C++ program can create an identifier that contains whitespace characters.
[I'll omit the obligatory reference to Bjarne's classic April Fool's joke article
on the whitespace operator. 🙂 But what I'm saying here is true, not a joke.]
Currently these are:
for each
enum class/struct
interface class/struct
ref class/struct
value class/struct
For example, "ref class" is a single token in the lexer, and programs
that have a type or variable or namespace named ref are entirely
unaffected. (Somewhat amazingly, even most macros named ref are
unaffected and don't affect C++/CLI, unless coincidentally the next token in the macro's
definition line happens to be class or struct; more
on this near the end.)
3. Contextual keywords that can never appear where an identifier could appear
Another technique we used was to define some keywords that can only appear in positions
in the language grammar where today nothing may appear. These too are safe: They can't
conflict with any user identifiers because no identifiers could appear where the keyword
appears, and vice versa. Currently these are:
abstract finally in
override sealed where
For example, abstract as a C++/CLI keyword can only appear in a class
definition after the class name and before the base class list, where nothing can
appear today:
ref class X abstract : B1, B2 { // ok, can only be the keyword
int abstract; //
ok, just another identifier
};
class abstract { }; //
ok, just another identifier
namespace abstract { /*...*/ } // ok, just another identifier
4. Contextual keywords that can appear where an identifier could appear
Some keywords can appear in a grammar position where an identifier could also appear,
and this is the case that needs some extra attention. There are currently five keywords
in this category:
delegate event initonly
literal property
In such grammar positions, when the compiler encounters a token that is spelled the
same as one of these keywords, the compiler can't know whether the token means the
keyword or whether it means an identifier until it first does some further lookahead
to consider later tokens. For example, consider the following inside a class scope:
property int x; // ok, here property is the contextual
keyword
property x; // ok, if property
is the name of a type
Now imagine you're a compiler: What do you do when you hit the token property as
the first token of the next class member declaration? There's not enough information
to decide for sure whether it's an identifier or a keyword without looking further
ahead, and C++/CLI has to specify the decision procedure -- the rules for deciding
whether it's a keyword or an identifier. As long as the user doesn't make a mistake
(i.e., as long as it's a legal program with or without C++/CLI) the answer is clear,
because there's no ambiguity.
But now the "quality of diagnostics" issue rears its head, in this category of
contextual keywords and this category only: What if the user makes a mistake?
For example:
property x; // error, if no
type "property" exists
Let's say that we set up a disambiguation rule with the following general structure
(I'll get specific in just a moment):
1. Assume one case and try to parse what comes next that way.
2. If that fails, then assume the other case and try again.
3. If that fails, then issue a diagnostic.
In the case of property x; when there's no type in scope named property,
both #1 and #2 will fail and the question is: When we get to the diagnostic in case
#3, what error message is the user likely to see? The answer almost certainly is,
a message that applies to the second "other" case. Why? Because the compiler already
tried the first case, failed, backed up and tried the second "other" case -- and it's
still in that latter mode with all that context when it finally realizes that didn't
work either and now it has to issue the diagnostic. So by default, absent some (often
prodigious) amount of extra work inside the compiler, the diagnostic that you'll get
is the one that's easiest to give, namely the one for the case the compiler was most
recently pursuing, namely the "other" case mentioned in #2 -- because the compiler
already gave up on the first case, and went down the other path instead.
So let's get specific. Let's say that the rule we picked was:
1. Assume that it's an identifier and try to parse it that way
(i.e., by default assume no use of the keyword extension).
2. If that fails, then assume that it's the keyword and try again.
3. If that fails, then issue a diagnostic.
Under that rule, what's the diagnostic the user gets on an illegal declaration of property
x;? One that's in the context of #2 (keyword), something like "illegal property
declaration," perhaps with a "the type 'x' was not defined" or a
"you forgot to specify the type for property 'x'" in there somewhere.
On the other hand, let's say that the rule we picked was:
1. Assume that it's the keyword and try to parse it that way.
2. If that fails, then assume that it's an identifier and try again.
3. If that fails, then issue a diagnostic.
Under this rule, the diagnostic that's easy to give is something like "the type 'property'
was not defined."
Which is better?
This illustrates why it's very important to consider common mistakes and whether the
diagnostic the user will get really applies to what he was probably trying to do.
In this case, it's probably better to emit something like "no type named 'property'
exists" than "you forgot to specify a type for your property named 'x'"
-- the former is more likely to address what the user was trying to do, and it also
happens to preserve the diagnostics for ISO C++ programs.
More broadly, of course, there are other rules you can use than the two "try one way
then try the other" variants shown above. But I hope this helps to give the flavor
for the 'quality of diagnostics' problem.
- Aside: There's usually no ambiguity in the case of property (or
the other keywords in this category); the only case I know of where you could write
legal C++/CLI code where one of these five keywords could be legally interpreted
both ways, both as the keyword and as an identifier, is when the type has a global
qualification. Here's an example courtesy of Mark Hall:
initonly :: T t;
Is this a declaration of an initonly member t of
type ::T (i.e, initonly ::T t;), or
a declaration of a member t of type initonly::T (i.e, initonly::T
t; where if initonly is the name of a namespace or class then this is legal
ISO C++). Our current thinking is to adopt the rule "if it can be an identifier, it
is," and so this case would mean the latter, either always (even if there's no such
type) or perhaps only if there is such a type.
I feel compelled to add that the collaboration and input over the past year-plus from Bjarne
Stroustrup and the folks at EDG (Steve Adamczyk,
John Spicer, and Daveed Vandevoorde) has been wonderful and invaluable in this regard
specifically. It has really helped to have input from other experienced compiler writers,
including in Bjarne's case the creator of the first C++ compiler and in EDG's case
the folks who have one of the world's strongest current C++ compilers. On several
occasions all of their input has helped get rid of inadvertent assumptions about "what's
implementable" and "what's diagnosable" based on just VC++'s own compiler implementation
and its source base. What's easy for one compiler implementation is not necessarily
so for another, and it's been extremely useful to draw on the experience of comparing
notes from two current popular ones to make sure that features can be implemented
readily on various compiler architectures and source bases (not just VC++'s) and with
quality user diagnostics.
5. Not keywords, but in a namespace scope
Finally, there are a few "namespaced" keywords. These make the most sense for pseudo-library
features (ones that look and feel like library types/functions but really are special
names known to the compiler because the compiler does special things when handling
them). They appear in the stdcli namespace and are:
array interior_ptr pin_ptr
safe_cast
That's it.
Now, for a moment let's go back to case #1, reserved words. Right now we're down to
three reserved words. What would it take to get down to zero? Consider the cases:
- nullptr: This has been proposed in WG21/J16 for C++0x, and at the
last meeting three weeks ago the evolution working group (EWG) was favorable to it
but wanted a few changes. The proposal
paper was written by me and Bjarne, and we will revise the paper for the
next meeting to reflect the EWG direction. If C++0x does adopt the proposal and chooses
to take the keyword nullptr then the list of C++/CLI reserved words
goes down to two and C++/CLI would just directly follow the C++0x design for nullptr,
including any changes C++0x makes to it.
- gcnew: One obvious way to avoid taking this as a reserved word would
be to put it into bucket #1 as a spaced keyword, "gc new".
- generic: Similarly, a spaced keyword (possibly "generic template")
would avoid taking this reserved word. Unfortunately, spelling it "<anything> template"
is not only ugly, but seriously misleading because a generic really is not at all
a template.
Is it worth it to push all the way down to zero reserved words in C++/CLI? There are
pros and cons to doing so, but I've certainly always been sympathetic to the goal
of zero reserved words; Brandon and
others will surely tell you of my stubborn campaigning to kill off reserved words
(I think I've killed off over a half dozen already since I took the reins of this
effort in January, but I haven't kept an exact body count).
I think the right time to decide whether to push for zero reserved words is probably
near the end of the C++/CLI standards process (summer-ish 2004). At that point, when
all other changes and refinements have been made and everything else is in its final
form, we will have a complete (and I hope still very short) list of places where C++/CLI
could change the meaning of an existing C++ program, and that will be the best time
to consider them as a package and to make a decision whether to eliminate some or
all of them in a drive-it-to-zero cleanup push. I am looking forward to seeing what
the other participants in all C++ standards arenas, and the broader community, think
is the right thing to do as we get there.
Putting it all together, what's the impact on a legal ISO C++ program? Only:
- The (zero to three) reserved words, which we may get down to zero.
- Macros with the same name as a contextual keyword, which ought to be rare because
macros with all-lowercase names, never mind names that are common words, are already
considered bad form and liable to break way more code than just C++/CLI. (For example,
if a macro named event existed it would already be breaking most
attempts to use Standard C++ iostreams, because the iostreams library has an enum
named event.)
Let me illustrate the macro cases with two main examples that affect the spaced keywords:
// Example 1: this has a different meaning in ISO C++ and C++/CLI
#define interface struct
In ISO C++, this means change every instance of interface to struct.
In C++/CLI, because "interface struct" is a single token, the macro
means instead to change every instance of "interface struct" to nothing.
Here's the simplest workaround:
// Workaround 1: this has the same meaning in both
#define interface interface__
#define interface__ struct
Here's another example of a macro that can change the meaning of a program in ISO
C++ and C++/CLI:
// Example 2: this has a different meaning in ISO C++ and C++/CLI
#define ref const
ref class C { } c;
In ISO C++, ref goes to const and the last line
defines a class C and simultaneously declares a const object of that
type named c. This is legal code, albeit uncommon. In C++/CLI, the
macro has no effect on the class declaration because "ref class"
is a single token (whereas the macro is looking for the token ref alone,
not "ref class") and so the last line defines a ref class C and
simultaneously declares a (non-const) object of that type named c.
Here's the simplest workaround:
// Workaround 2: this has the same meaning in both
#define REF const
REF class C { } c;
But hey, macro names are supposed to be uppercase anyway. 🙂
I hope these cases are somewhere between obscure and pathological. At any rate, macros
with short and common names are generally unusual in the wild because they just break
so much stuff. I would rate example 1 above as fairly obscure (although windows.h
has exactly that line in it, alas) and example 2 as probably outright pathological
(as I would rate all macros with short and common names).
Whew. That's all for tonight.
I was all set to say, "#define interface struct is not at all obscure – it’s in the VC++ header files." So, I guess the next version of the compiler will have new header files defining interface IDispatch, etc?
Why don’t you guys do something useful like add C99 support or GCC extensions instead of this stuff? Or hell, even numeric literals with the 0b/0B prefix would be a decent start.
PingBack from | https://blogs.msdn.microsoft.com/hsutter/2003/11/23/ccli-keywords-under-the-hood/ | CC-MAIN-2017-43 | en | refinedweb |
Module::CAPIMaker - Provide a C API for your XS modules
perl -MModule::CAPIMaker -e make_c_api
If you are the author of a Perl module written using XS. Using Module::CAPIMaker you will be able to provide your module users with an easy and efficient way to access its functionality directly from their own XS modules.
The exporting/importing of the functions provided through the C API is completely handled by the support files generated by Module::CAPIMaker, and not the author of the module providing the API, neither the authors of the client modules need really to know how it works but on the other hand it does not harm to understand it and anyway it will probably easy the understanding of the module usage. So, read on :-) ..
Suppose that we have one module
Foo::XS
Foo::XS providing a C API with the help of Module::CAPIMaker and another module
Bar that uses that API.
When
Foo::XS is loaded, the addresses of the functions available through the C API are published on the global hash
%Foo::XS::C_API.
When
Bar loads, first, it ensures that
Foo::XS is loaded (loading it if required) and checks that the versions of the C API supported by the version of
Foo::XS loaded include the version required. Then, it copies the pointers on the
%Foo::XS::C_API hash to C static storage where they can be efficiently accessed without performing a hash look-up every time.
Finally calls on Bar to the C functions from
Foo::XS are transparently routed through these pointers with the help of some wrapping macros.
The C API is defined in a file named
c_api.decl. This file may contain the prototypes of the functions that will be available through the C API and several configuration settings.
From this file,
Module::CAPIMaker generates two sets of files, one set contains the files that are used by the module providing the C API in order to support it. The other set is to be used by client modules.
They are as follows:
On the C API provider side, a file named
c_api.h is generated. It defines the initialization function that populates the
%C_API hash.
There are two main files to be used by client modules, one is
perl_${c_module_name}.c containing the definition of the client side C API initialization function.
The second file is
perl_${c_module_name}.h containing the prototypes of the functions made available through the C API and a set of macros to easy their usage.
${c_module_name} is the module name lower-cased and with every non-alphanumeric sequence replaced by a underscore in order to make in C friendly. For instance,
Foo::XS becomes
foo_xs and the files generated are
perl_foo_xs.c and
perl_foo_xs.h.
A sample/skeleton
Sample.xs file is also generated.
The client files go into the directory
c_api_client where you may also like to place additional files as for instance a
typemap file.
The C API is defined through the file
c_api.decl.
Two types of entries can be included in this file: function prototypes and configuration settings.
Function declarations are identical to those you will use in a C header file (without the semicolon at the end). In example:
int foo(double) char *bar(void)
Functions that use the THX macros are also accepted:
SV *make_object(pTHX_ double *)
You have to use the prototype variant (
pTHX or
pTHX_) and Module::CAPIMaker will replace it automatically by the correct variant of the macro depending of the usage.
Configuration settings are of the form
key=value where key must match /^\w+$/ and value can be anything. For instance
module_name = Foo::XS author = Valentine Michael Smith
A backslash at the end of the line indicates that the following line is a continuation of the current one:
some_words = bicycle automobile \ house duck
Here-docs can also be used:
some_more_words = <<END car pool tree disc book END
The following configuration settings are currently supported by the module:
Perl name of the module, for instance,
Foo::XS.
C-friendly name of the module, for instance
foo_xs
Version of the Perl module. This variable should actually be set from the
Makefile.PL script and is not used internally by the C API support functions. It just appears on the comments of the generated files.
Name of the module author, to be included on the headers of the generated files.
In order to support evolution of the C API a min/max version approach is used.
The
min_version/
max_version numbers are made available through the
%C_API hash. When the client module loads, it checks that the version it requires lays between these two numbers or otherwise croaks with an error.
By default
required_version is made equal to
max_version.
The directory where the client support files are placed. By default
c_api_client.
Name of the client support header file (i.e.
perl_foo_xs.h).
Name of the client C file providing the function to initialize the client side of the C API (i.e.
perl_foo_xs.c).
Name of the support file used on the C API provider side. Its default value is
c_api.h.
Name of the macro used to avoid multiple loading of the definitions inside
${module_h_filename}.
It defaults to
uc("${c_module_name}_H_INCLUDED"). For instance
PERL_FOO_XS_H_INCLUDED.
Name of the macro used to avoid multiple loading of the definitions on the header file
c_api.h. It defaults to
C_API_H_INCLUDED.
It's possible to add a prefix to the name of the functions exported through the C API in the client side.
For instance, if the function
bar() is exported from the module
Foo::XS, setting
export_prefix=foo_ will make that function available on the client module as
foo_bar().
The text in this variable is placed inside the file
${module_c_filename} at some early point. It can be used to inject typedefs and other definitions needed to make the file compile.
The text inside this variable is placed at the end of the file
${module_c_filename}.
The text inside this variable is placed inside the file
${module_h_filename} at some early point.
The text inside this variable is placed at the end of the file
${module_h_filename}.
The name of the file with the definition of the C API. Defaults to
c_api.decl.
Obviously this setting can only be set from the command line!
Internally the module uses Text::Template to generate the support files with a set of default templates. For maximum customizability a different set of templates can be used.
See "Customizing the C API generation process".
Once your
c_api.decl file is ready use
Module::CAPIMaker to generate the C API running the companion script
make_perl_module_c_api. This script also accept a list of configuration setting from the command line. For instance:
make_perl_module_c_api module_name=Foo::XS \ author="Valentine Michael Smith"
If you want to do it from some Perl script, you can also use the
make_c_api sub exported by this module.
In order to initialize the C API from the module supporting it you have to perform the following two changes on your XS file:
c_api.h.
INIT_C_APIfrom the
BOOTsection.
For instance,
#include "EXTERN.h" #include "perl.h" #include "XSUB.h" #include "ppport.h" /* your C code goes here */ #include "c_api.h" MODULE = Foo::XS PACKAGE = Foo::XS BOOT: INIT_C_API; /* your XS function declarations go here */
In order to get the C API interface files regenerated every time the file
c_api.decl is changed, add the following lines at the end of your
Makefile.PL script.
package MY; sub postamble { my $self = shift; my $author = $self->{AUTHOR}; $author = join(', ', @$author) if ref $author; $author =~ s/'/'\''/g; return <<MAKE_FRAG c_api.h: c_api.decl \tmake_perl_module_c_api module_name=\$(NAME) module_version=\$(VERSION) author='$author' MAKE_FRAG } sub init_dirscan { my $self = shift; $self->SUPER::init_dirscan(@_); push @{$self->{H}}, 'c_api.h' unless grep $_ eq 'c_api.h', @{$self->{H}}; }
You may also like to include the generated files into the file
MANIFEST in order to not require your module users to also have
Module::CAPIMaker installed.
The module
Module::CAPIMaker exports the subroutine
make_c_api when loaded. This sub parses module settings from @ARGV and the file
a_api.decl and performs the generation of the C API support files.
In order to use functions provided by a XS module though a C API generated by Module::CAPIMaker, you have to perform the following steps:
c_api_clientdirectory into your module directory.
Makefile.PLscript, tell ExtUtils::MakeMaker to compile and link the C file just copied. That can be attained adding
OBJECT => '$(O_FILES)'arguments to the
WriteMakefilecall:
WriteMakefile(..., OBJECT => '$(O_FILES)', ...);
.cand
.xsfiles) that want to use the functions available through the C API.
#include "EXTERN.h" #include "perl.h" #include "XSUB.h" #include "ppport.h" #include "perl_foo_xs.h" MODULE=Bar PACKAGE=Bar BOOT: PERL_FOO_XS_LOAD_OR_CROAK;
Internally, the module uses Text::Template to generate the support files.
In order to allow for maximum customizability, the set of templates used can be changed.
As an example of a module using a customized set of templates see Math::Int64.
The default set of templates is embedded inside the sub-modules under Module::CAPIMaker::Template, you can use them as an starting point.
Finally, if you find the module limiting in some way don't hesitate to contact me explaining your issued. I originally wrote Module::CAPIMaker to solve my particular problems but I would gladly expand it to make it cover a wider range of problems.
Math::Int128, Math::Int64, Tie::Array::Packed are modules providing or using (or both) a C API created with the help of Module::CAPIMaker.
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.12.4 or, at your option, any later version of Perl 5 you may have available. | http://search.cpan.org/dist/Module-CAPIMaker/lib/Module/CAPIMaker.pm | CC-MAIN-2017-43 | en | refinedweb |
Represents a field write of the form v.f1...fk = s. More...
#include <FieldRefWrite.h>
Represents a field write of the form v.f1...fk = s.
An instruction is removable if it's not present in the original source code, but introduced in the translation from the high-level language to the low-level language. For instance, if **x is an expression used in the original code, the low-level language will introduce a temporary t=*x; such instructions are "removable" for printing purposes.
Implements sail::Instruction. | http://www.cs.utexas.edu/~isil/sail/classsail_1_1FieldRefWrite.html | CC-MAIN-2017-43 | en | refinedweb |
I have to remove duplicated objects in a List.
This List have the object Blog that is like this:
public class Blog {
private String title;
private String author;
private String url;
private String description;
...
}
If you can't edit the source of the class (why not?), then you need to iterate over the list and compare each item based on the four criteria mentioned ("title, author, url and description").
To do this in a performant way, I would create a new class, something like
BlogKey which contains those four elements and which properly implements
equals() and
hashCode(). You can then iterate over the original list, constructing a
BlogKey for each and adding to a
HashMap:
Map<BlogKey, Blog> map = new HashMap<BlogKey, Blog>(); for (Blog blog : blogs) { BlogKey key = createKey(blog); if (!map.containsKey(key)) { map.put(key, blog); } } Collection<Blog> uniqueBlogs = map.values();
However the far simplest thing is to just edit the original source code of
Blog so that it correctly implements
equals() and
hashCode(). | https://codedump.io/share/FIcYopw9e0PP/1/how-to-remove-duplicate-objects-in-a-listltmyobjectgt-without-equalshashcode | CC-MAIN-2017-43 | en | refinedweb |
Hai guys My name Eddy, I'm HND Software Engineering at TATI, Malaysia, this is my 1st year of study, i need help ! somene can change below source code to templete by using C++ source code please help ... this is the source code:
-----------------------------------------------------------------------------------
#include<iostream> using namespace std; class calculator { private: int x,y; public: calculator(int=1,int=1);//contructor ~calculator();//destructor int mc(); int ac(); int sc(); int dc(); void calc(char); }; //end of class declaration //contructor defination calculator::calculator(int a,int b) //destructor {x=a;y=b;} calculator::~calculator() //destructor {} int calculator::mc() { cout<<"\n\n"<<x<<"time"<<y<<"equals"; return(x*y); } int calculator::ac() { cout<<"\n\n"<<x<<"time"<<y<<"equals"; return(x+y); } int calculator::sc() { cout<<"\n\n"<<x<<"time"<<y<<"equals"; return(x-y); } int calculator::dc() { cout<<"\n\n"<<x<<"time"<<y<<"equals"; return(x/y); } void calculator::calc(char choice) { //int on,tw; if (choice == '+') //This whole block checks what the user wants to calculate, and refers to the proper routine to calculate it. { cout<<"You selected "<<choice<<". Please enter two numbers,"; cout<<"that you want to add."<<endl;//print instructions for the user cin>>x;//Get the value of variable on cin>>y;//Get the value of variable tw cout<<ac()<<"\n\n\n\aThanks for using my calculator!";//Print a thank you message } else if (choice =='-') { cout<<"You selected "<<choice<<". Please enter two numbers,\nsepperated by spaces, that you want to subtract."<<endl; cin>>x; cin>>y; cout<<sc()<<"\n\n\n\aThanks for using my calculator!"; } else if (choice =='*') { cout<<"You selected "<<choice<<". Please enter two numbers,\nsepperated by spaces, that you want to multiply."<<endl; cin>>x; cin>>y; cout<<mc()<<"\n\n\n\aThanks for using my calculator!"; } else if (choice =='/') { cout<<"You selected "<<choice<<". Please enter two numbers,\nsepperated by spaces, that you want to divide."<<endl; cin>>x; cin>>y; cout<<dc()<<"\n\n\n\aThanks for using my calculator"; } else { cout<<"\nPlease reenter that value.\n\a"; cin>>choice; calc(choice); } } int main() { calculator object(5,6); char choice; while (choice != 'e') { cout<<"\nPlease enter +,-,*, or / and then two numbers,\nsepperated by spaces, that you wish to\nadd,subtract,multiply,or divide.\nType e and press enter to exit."; cin>>choice; if (choice != 'e') { object.calc(choice); } } return 0; }
Please someone help me convert to templete...please... as soon as posible.. | https://www.daniweb.com/programming/software-development/threads/69037/help-me | CC-MAIN-2017-43 | en | refinedweb |
public class FlashOutput extends Output
Graph graph = makeMyGraph(); FlashOutput out = new FlashOutput(100, 100); graph.draw(out); out.writeFlash(new FileOutputStream("Graph.swf"));
Graph.draw(org.faceless.graph2.Output),
Graph.setMetaData(java.lang.String, java.lang.String)
metadata
getAreas, storeAreas
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public FlashOutput(int width, int height)
width- the width of the image in user units
height- the height of the image in user units
public FlashOutput(int width, int height, Color background)
width- the width of the image
height- the height of the image
background- the background color of the movie
public void setFont(String name, Font font)
Define a font for use in the movie. This method can be used to define non-standard fonts for use in Graphs. Without calling this method the "Default" font is available and set to a sans-serif font. Additionally since 2.0.4 any fonts available to the Java process may be referenced by name without having to call this method.
For example, to use a font called "myfont" in a style:
output.setFont("myfont", Font.createFont("myfont.ttf"));
name- the name of the font, as passed to
TextStyle.setFont(java.lang.String, double). A name of "Default" will override the default font. The name is case-insensitive.
font- the Font to use
protected void doSetPaint(Paint paint)
paint- the Paint to use for drawing, text etc.
public void writeFlash(OutputStream out) throws IOException
out- the OutputStream to write the movie to
IOException
public void setDetailLevel(int detailLevel)
detailLevel- the level of detail to display. The default value is 30, and lower values give more detail, with 0 meaning "display everything" | http://bfo.com/products/graph/docs/api/org/faceless/graph2/FlashOutput.html | CC-MAIN-2017-43 | en | refinedweb |
Direct is a platform and language agnostic remote procedure call (RPC) protocol. Ext Direct allows for seamless communication between the client side of an Ext JS application and any server platform that conforms to the specification. Ext Direct is stateless and lightweight, supporting features like API discovery, call batching, and server to client events.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.
Ext Direct is partially based on JSON (see or RFC 4627 ), and utilizes HTML Form-based File Uploads ( RFC 1867, RFC 2388 ).
Server MAY support publishing its API to Clients via API discovery mechanism. If API discovery is supported the Server MUST respond to HTTP GET requests at preconfigured URI, returning a document with a correct content type for the browser to interpret this document as JavaScript code.
The Server API declaration MUST be valid JavaScript code that results in creation of a set of nested Objects assigned to a variable that can later be passed to Ext Direct initialization code in the Client application. It is RECOMMENDED that the code also conforms to stricter rules of JSON object syntax, for the benefit of non-JavaScript implementations that might try to parse the API declarations as JSON.
An example of API declaration code:
var Ext = Ext || {}; Ext.REMOTING_API = { "id": "provider1", "url": "ext/direct/router", "type": "remoting", "actions": { "Album": [{ "name": "getAll", "len": 0 }, { "name": "add", "params": ["name", "artist"], "strict": false }, { "name": "delete", "len": 1 }] } };
The JavaScript Object of the API declaration MUST contain the following mandatory properties:
url- The Service URI for this API.
type- MUST be either
remotingfor Remoting API, or
pollingfor Polling API.
actions- A JavaScript Object listing all Actions and Methods available for a given Remoting API. MUST be omitted in Polling API declarations.
The API declaration MAY also contain the following OPTIONAL properties:
id- The identifier for the Remoting API Provider. This is useful when there are more than one API in use.
namespace- The Namespace for the given Remoting API.
timeout- The number of milliseconds to use as the timeout for every Method invocation in this Remoting API.
Any other properties are OPTIONAL.
Each Action within the
actions property of the API declaration is an
Array of Objects that represent Methods. Actions do not have properties
in themselves. Action names MAY be nested, i.e. an Action may contain
other Actions as well as Methods.
Method declaration MUST have the following properties:
name- The name of this Method. MUST be unique within its Action.
Each Method is fully qualified by Action and Method names concatenated with dot character (.), prefixed by optional Namespace:
[Namespace.]Foo.Bar.baz
Where
Foo is outer Action name,
Bar is inner Action name, and
baz
is the Method name.
Method declaration MUST have one of the following mutually exclusive properties that describe the Method's calling convention:
len- The number of arguments required for Ordered Methods. This number MAY be 0 for Methods that do not accept arguments.
params- An array of parameters supported by a Named Method. This array MAY be empty.
formHandler- A JavaScript Boolean value of
trueindicates that this Method accepts Form submits.
Ordered Methods MUST always conform to their calling convention. When Remoting Method proxy function is called for an Ordered Method, it MUST be supplied with exactly the number of arguments required, in exactly the same order as required by the Method. If less than required number of arguments is passed, the Router MAY choose to return an Exception without invoking the actual Method.
Named Methods MAY use
strict property to control how the arguments
will be checked and passed to the Server when a Method is called:
strictis set to
true, only arguments with names listed in the
paramsarray are sent to the Server; any other arguments are discarded. This is the default behavior.
strictis set to
false, all arguments are passed to the Server. The Router SHOULD pass all arguments to the actual Method.
The Router MAY choose to return an Exception if some of the listed
parameters are missing, and omit invoking the actual Method. If the
params
Array is empty and
strict property is set to
false, the Router MUST NOT
perform any argument checks and MUST pass all arguments to the invoked
Method.
An example of a Named Method with all optional parameters:
"actions": { "TestAction": [{ "name": "named_no_strict", "params": [], "strict": false }] }
Method declaration MAY contain the optional
metadata property that
defines the type of Call Metadata accepted by the Method. If the
metadata
property is not present in Method declaration, the Client MUST NOT send
call metadata to the Server for any invocation of that Method.
The
metadata property, if present, MUST be a JavaScript object with the
following properties:
len- The number of arguments required for by-position Call Metadata. This number MUST be greater than 0.
params- The Array of names of the supported for by-name Call Metadata. This Array MAY be empty.
strict- JavaScript Boolean value that controls how Call Metadata arguments are checked.
When present, Call Metadata arguments MUST conform to their calling convention. Call Metadata calling convention MAY be different from the main Method arguments calling convention, e.g. an Ordered Method MAY accept by-name Call Metadata, or Named Method MAY accept by-position Call Metadata.
The argument checks are performed the same way as with main Method arguments.
Some examples of Method declarations accepting Call Metadata:
"actions": { "TestAction": [{ "name": "meta1", "len": 0, "metadata": { "len": 1 } }, { "name": "meta2", "len": 1, "metadata": { "params": ["foo", "bar"], "strict": false } }, { "name": "meta3", "params": [], "strict": false, "metadata": { "len": 3 } }, { "name": "meta4", "params": ["foo", "bar"], "metadata": { "params": ["baz", "qux"] } }] }
Declaring Polling API is OPTIONAL. If the Server implements more than one Event Provider, it is RECOMMENDED to include Polling API declarations along with Remoting API declaration in the same JavaScript document.
An example of API declaration with one Remoting API and one Polling API:
var Ext = Ext || {}; Ext.REMOTING_API = { "id": "provider1", "type": "remoting", "url": "ext/direct/router", "actions": { "User": [{ "name": "read", "len": 1 }, { "name": "create", "params": ["id", "username"] }] } }; Ext.POLLING_API = { "id": "provider2", "type": "polling", "url": "ext/direct/events" };
A remoting call is represented by sending a Request object, or multiple
Request objects, to a Server. Request(s) are encoded in JSON and are
sent as raw payload in a HTTP POST request to the Service URI advertised
as the
url in API Declaration. There MUST not be any other data present
in the HTTP POST, except valid JSON document containing one or more
Requests. Among HTTP headers for the POST request, there MUST be the
Content-Type header with the value of "application/json". The Client
MUST use UTF-8 character encoding for the Request document.
A Request is an Object with the following members:
type– MUST be a string "rpc".
tid– The transaction id for this Request. MUST be an integer number unique among the Requests in one batch.
action– The Action that the invoked Method belongs to. MUST be specified.
method– The Remoting Method that is to be invoked. MUST be specified.
data– A set of arguments to be passed to the called Remoting Method. MUST be either
nullfor Methods that accept 0 (zero) parameters, an Array for Ordered methods, or an Object for Named methods.
metadata- OPTIONAL set of meta-arguments to be made available to the called Remoting Method, if provided by the Client. If no metadata is associated with the call, this member MUST NOT be present in the Request.
A typical JSON encoded Request Object for an Ordered Method may look like this:
{"type":"rpc","tid":1,"action":"Foo","method":"bar","data":[42,"baz"]}
A typical JSON encoded Request object for a Named Method may look like this:
{"type":"rpc","tid":2,"action":"Foo","method":"qux","data":{"foo":"bar"}}
A typical JSON encoded Request object for an Ordered Method with by-name Call Metadata may look like this:
{"type":"rpc","tid":3,"action":"Foo","method":"fred","data":[0], "metadata":{"borgle":"throbbe"}}
Remoting Requests MAY be batched, in which case the Requests MUST be
sent as an Array of Request Objects with unique
tid members:
[ {"type":"rpc","tid":3,"action":"Foo","method":"frob","data":["foo"]}, {"type":"rpc","tid":4,"action":"Foo","method":"blerg","data":["qux"]} ]
The Server MUST support Request batching, and SHOULD attempt to return call invocation Results or Exceptions in the same order.
A response to a Remoting call MUST contain either a Result, or an Exception for each Request. Responses are encoded in JSON and returned to the Client as raw HTTP response payload, with Content-Type header of "application/json" and character encoding of UTF-8.
If Requests were sent as a batch the Server MUST return the corresponding
Responses only after all Requests were processed by the Router, and the
Responses SHOULD follow the same order as the original Requests. For each
Response, the corresponding
tid member of the original Request MUST be
passed back unchanged by the Server.
A Result is an Object with the following members:
type— MUST be a string "rpc"..
result— The data returned by the Method. MUST be present in the Response object, but MAY be
nullfor Methods that do not return any data.
A typical JSON encoded Array of Response objects may look like this:
[ {"type":"rpc","tid":1,"action":"Foo","method":"bar","result":0}, {"type":"rpc","tid":2,"action":"Foo","method":"baz","result":null} ]
An Exception is an Object with the following members:
type— MUST be a string "exception"..
message— The error message. MUST be present.
where— OPTIONAL description of where exactly the exception was raised. MAY contain stack trace, or additional information.
A typical JSON encoded Exception may look like this:
{ "type": "exception", "tid": 3, "action": "Foo", "method": "frob", "message": "Division by zero", "where": "... stack trace here ..." }
A remoting form invocation is represented by submitting an HTML form with HTTP POST request. The form content MUST conform to (HTML 4.01 specification)[5]. The Server MUST support both "application/x-www-form-urlencoded" and "multipart/form-data" content types for backwards compatibility. The Client MAY choose to implement only "multipart/form-data" encoding.
There MUST be only one Method invocation per submitted form. The Client MUST use form submission for each Method with Form Handler calling convention declared in Remoting API.
The form SHALL contain the following fields:
extType- MUST be a string "rpc".
extTID- The transaction id for this Request. MUST be an integer number unique among the Requests in one batch.
extAction- The Action that the invoked Method belongs to. MUST be specified.
extMethod- The Remoting Method that is to be invoked. MUST be specified.
extUpload- Stringified Boolean value (
"true"or
"false") indicating that file uploads are attached to this form submission.
The form MAY contain the
metadata field if there is Call Metadata
associated with the given form submission.
The form MAY contain other fields in addition to the mandatory ones listed above. These additional fields MUST be passed to the invoked Method as Named arguments.
When the form is used to upload files, encoding type MUST be "multipart/form-data".
The Server MUST respond to the form submission with a JSON document containing a valid Response or Exception object as described in sections 4.2.1 and 4.2.2. The document MUST have content type of "application/json" and UTF-8 character encoding.
When the form request contained one or more file uploads, the Server MUST return a valid HTML document with correct content type for the browser to interpret this document as HTML. The document MUST have UTF-8 character encoding.
The HTML document MUST contain a valid JSON encoded Response or Exception
as described in sections 4.2.1 and 4.2.2, enclosed in HTML
<textarea>
An example of a file upload response may look like this:
<!DOCTYPE html> <html> <head> <title>File upload response</title> </head> <body> <textarea>{JSON RESPONSE HERE}</textarea> </body> </html>
The Server MAY implement OPTIONAL event polling mechanism. Event polling is performed by periodically making HTTP GET requests to the Server. There MAY be more than one Event Provider per Server; in that case each Event Provider MUST have a separate Service URI.
An event poll request in its basic form is an HTTP GET request to the Service URI of the polled Event Provider. The Server MUST maintain a list of active Poll Handler methods for every Event Provider, and invoke each Poll Handler method every time a poll request is made. The Server MAY support passing arguments to Poll Handler methods via HTTP GET request URI but it is not required.
An event poll response MUST be a valid JSON document with a correct content type for the browser to interpret this document as JSON. The document MUST use UTF-8 character encoding.
The event poll response should contain an array of Event objects. This array MAY be empty if no Events are pending for the given request. The Server MUST NOT include Exception objects in the response array.
An event object MUST contain the following properties:
type- MUST be a string "event".
name- Event name, MUST be a string.
data- Event data, SHOULD be any valid JSON data.
An example of a typical Event object:
{ "type": "event", "name": "progressupdate", "data": { "processId": 42, "progress": 100 } }
Ext Direct makes use of a number of defined terms; remoting Methods are addressed as [Namespace.]Action.Method, and if no Namespace is specified, the window object will be used. | http://docs.sencha.com/extjs/5.1.2/guides/backend_connectors/direct/specification.html | CC-MAIN-2017-43 | en | refinedweb |
Hi all I am having some problems with compiling my c++ code. I have had lots of experience in Java but c++ is new.
I am trying to read in a file that contains words, sort them, then print them out to a new file in my program, but the compiler reports an error on line 22 involving iterating through the elements of a vector. I am confused and any help is greatly appreciated!
Code:
#include <iostream>
#include <list>
#include <fstream>
#include <string>
using namespace std;

int main()
{
    string x;
    list<string> myList;
    ifstream inFile;
    ofstream outFile;

    inFile.open("dictionary");
    while (inFile >> x)
    {
        myList.push_back(x);
    }
    inFile.close();

    myList.sort();

    outFile.open("sortedDictionary");
    for (int i = 0; i < myList.size(); i++)
    {
        outFile << myList[i] << endl;
    }
    outFile.close();

    return 0;
}
 | https://cboard.cprogramming.com/cplusplus-programming/133892-new-cplusplus.html | CC-MAIN-2017-43 | en | refinedweb |
The purpose of Viewport is to size to all available space in the browser. This means that it never makes sense to nest them, and almost never makes sense to use more than one.
Why are you using both DockLayoutPanel and BorderLayoutContainer? Why not just a BorderLC, or two BorderLCs? At least that will be more consistent about how layouts are performed.
That said, you do not need and should not have a Viewport inside the DockLayoutPanel. It should be possible to add the BorderLC to the DockLP (but see above).
And if you are just using the DockLayoutPanel for a header, consider a VerticalLayoutContainer - set the top item to be a fixed size, and the 'center' region then to be 1.0,1.0 (i.e. take all remaining width and height)
Can you put this layout setup into a simple EntryPoint so that other users can try to run it and make suggestions about how else you might build this setup?
Thank you! This helped me find the problem within my code. I had an extra VerticalPanel and VerticalLayoutContainer that, when removed, helped display the BorderLC correctly. It seems that when the BorderLC is in a VerticalLC, it puts the south container on top. Weird.
The reason I needed a Viewport with DockLP is that I had a header and a footer, and I needed a way to be able to add additional static headers (depending on which page I was viewing). The Viewport is used amongst all the pages in my app. So sometimes it will have an additional north widget that displayed navigational buttons along with a menu bar, and sometimes it will only have a menu bar. So I needed the flexibility to be able to add multiple north widgets.
VerticalLC should behave with a BorderLC in it, provided you give the BorderLC a size - what VerticalLayoutData did you assign when you added it?
Code:
public class Test implements EntryPoint {

    public void onModuleLoad() {
        // Only make one viewport, that gets added to the root panel
        Viewport root = new Viewport();

        // First, the outer wrapper:
        VerticalLayoutContainer outer = new VerticalLayoutContainer();

        // Header section can go in here - take all width available (i.e. 1.0),
        // and only height required (i.e. -1):
        outer.add(new Label("Replace with real header"), new VerticalLayoutData(1.0, -1));

        BorderLayoutContainer border = new BorderLayoutContainer();

        // add content to the border LC...
        border.setNorthWidget(createContentPanel("North"));
        border.setSouthWidget(createContentPanel("South"));
        border.setEastWidget(createContentPanel("East"));
        border.setCenterWidget(createContentPanel("Center"));

        // Add the BorderLC with both height/width as 1.0 to use all remaining space:
        outer.add(border, new VerticalLayoutData(1.0, 1.0));

        root.setWidget(outer);
        RootPanel.get().add(root);
    }

    private ContentPanel createContentPanel(String heading) {
        ContentPanel panel = new ContentPanel();
        panel.setHeadingText(heading);
        panel.setWidget(new Label(heading + " content goes here"));
        return panel;
    }
}
 | http://www.sencha.com/forum/showthread.php?253437-Nesting-Viewports&mode=hybrid | CC-MAIN-2014-52 | en | refinedweb |
Search results for «added some»:
FloboPuyo 0.20 by iOS
FloboPuyo is a clone of the famous PuyoPuyo.
What's New in This Release:
Improved game control settings
Added key repetition
Made the "main puyo" more visible by blinking
Added score board and hall of fame
Added a new music and game theme
Added some sound effects
Added OpenGL suppor…
pyxiph 0.4 by Austin Bingham
pyxiph is a collection of Python extensions for Xiph libraries (such as ogg, vorbis, and ao).
The bindings use boost.python and use scons for the build system.
What's New in This Release:
moved all code into appropriate namespaces
added support for fundamental encoding
added example show……
Linuxo Live! 0.4 by Linuxo Live! Team
Linuxo Live! is a new and modern Linux distribution.
What's New in This Release:
added better support for SATA disks, now no problems with hdd installation on SATA
added Koffice 1.3.5 with full serbian translation
fixed KDE keyboard change (on taskbar), SR, SP and US keyboard selection activated…
mp3cleanup 0.9 by Alex Butcher a…
Slisp 2.2 by Sandro Sigala
Slisp project is written in C, and I recently added a header file lisp.h that contains an array of common-used lisp functions, so if you want to extend SLISP programming in Lisp you are allowed too.
What's New in This Release:
Added Bignums support (GMP)
Added a lot of functions
Added let,…
SunGazer Packetfilter 0.5.2 by Marius Brehler s…
Aften 0.05 by Justin Ruggles
Aften project is a simple, open-source, A/52 (AC-3) audio encoder.
Here are some key features of "Aften":
Implemented my own wav reader
Converted the fixed-point algorithms to floating-point
Rearranged the methods and structures
Added stereo rematrixing (mid/side)
Added short block MDCT…
mod_ruby 1.2.5 by Shugo Maeda.
Requirements…
TinyFugue 5.0 beta 7 by Ken Keys
TinyFugue project is a MUD client.
TinyFugue, aka "tf", is a flexible, screen-oriented MUD client, for use with any type of MUD. TinyFugue is one of the most popular and powerful mud clients.
What's New in This Release:
Added 16-color names: "gray", "brightred"..."brightwhite".
Added 256-c…
TicTacToeGTK 2.0 by Obada Denis
TicTacToeGTK is a new vision of old Game with new features and options.
Requirements:
GTK+ version 2.2.x
What's New in This Release:
Added Game Menu.
New file : menu.c - manage menu actions
In cpu.c,draw.c,menu.c fixed multiple include bug.
Confirm Exit dialog Added
In file cpu.c a…
Yet Another SQL*Plus Replacement 1.82 by Nathan Shafer
Y…
htmloptim 1.2 by DM bits
htmloptim project reduces the size of an HTML file by removing unnecessary characters like spaces, tabs, line feeds, and blank lines.
What's New in This Release:
added the -p option (remove trailing spaces and tabs)
added skipping of more < pre > like tags: < listing > < plaintext > < xmp >
…
TuxGuitar 0.8 by Julian Casadesus
TuxGuitar is a multitrack guitar tablature editor and player. TuxGuitar project can open GP3 and GP4 files.
Here are some key features of "TuxGuitar":
Fixed a bug in tTablature editor
Multitrack display
Autoscroll while playing
Note duration management
Various effects (bend, slide, vibr…
C++ Sockets 2.1 by Anders Hedstr?m
C++ Sockets is a C++ wrapper for BSD-style sockets.
Here are some key features of "Cplusplus Sockets":
SSL support
IPv6 support
tcp and udp sockets
encrypted tcp
http protocol
highly customizable error handling
What's New in This Release:
A segfault crash related to threading was…
Pitchtune 0.0.4 by Haakon Andre Hjortland
Pitchtune (Precise Instrument Tweaking for Crispy Harmony tuner) is an oscilloscope-style musical instrument tuning program.
It can also be used to find the frequency of sounds. Pitchtune uses the GTK+ toolkit.
What's New in This Release:
ALSA support (capture and playback).
OSS support de…
cbcv 0.4 by Pawel S. Veselov
cbcv is a Java class and byte code verifier. It verifies static class file structure, external references, and analyzes operand stack and local variables through byte code execution emulation.
cbcv includes CLDC standard verification.
What's New in This Release:
Implemented '-j' option to verify….…
Wiki on a Stick 0.04 by Andre Wagner
Wiki on a Stick is a personal wiki that lives in a single self-modifying HTML file that contains the software, interface, and database.
It's useful for taking notes, for use as a calendar, and for documenting software, etc. Wiki on a Stick currently only works in Firefox
Requirements:
Mozill…
Simple Spreadsheet 0.5 by Thomas Bley
Simple Spreadsheet is a webbased spreadsheet program written in Javascript, HTML, CSS and PHP.
Simple Spreadsheet features formulas, charts, numeric formats, keyboard navigation, etc. Javascript is used for the default data format and for the macros and formulas.
Requirements:
Apache 1.3 | http://nixbit.com/search/added-some/ | CC-MAIN-2014-52 | en | refinedweb |
Flux - stream processing toolkit
version 1.03
use Flux::Simple qw( array_in array_out mapper );

my $in = array_in([ 5, 6, 7 ]);
$in = $in | mapper { shift() * 2 };

my @result;
my $out = array_out(\@result);
$out = mapper { shift() * 3 } | mapper { shift() . "x" } | $out;

$out->write($in->read);
$out->write($in->read);

say for @result;
# Output:
# 30x
# 36x
Flux is the stream processing framework.
Flux::* module namespace includes:
Flux::Catalog, a module for simple access to your collection of streams.
Flux is a framework, but you can use lower-level parts of it without higher-level parts. For example, you can read and write files with Flux::File without declaring it in the Flux catalog.
This distribution and other Flux-* distributions on CPAN are the result of the refactoring of our in-house framework.
It should be stable. We used it in production for years. But remember that:
1) I'm rewriting this in Moo/Moose, and there can be bugs.
2) I can refactor some API aspects in the process.
3) Not all of the code is uploaded yet.
Message::Passing is similar to Flux.
Unlike Flux, it's asynchronous (Flux can be made asynchronous by using Coro, but its basic APIs are blocking).
IO::Pipeline syntax is similar to Flux mappers.
I'm sure there're many others. Stream processing is reinvented. | http://search.cpan.org/~mmcleric/Flux-1.03/lib/Flux.pm | CC-MAIN-2014-52 | en | refinedweb |
odeint is a header-only library, no linking against pre-compiled code is required. It can be included by
#include <boost/numeric/odeint.hpp>
which includes all headers of the library. All functions and classes from odeint live in the namespace boost::numeric::odeint, which can be brought in with

using namespace boost::numeric::odeint;
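To give a feel for how the library is used once the headers are included, here is a minimal sketch (not taken from this documentation page) that integrates a simple exponential decay with the convenience function integrate():

#include <iostream>
#include <vector>
#include <boost/numeric/odeint.hpp>

using namespace boost::numeric::odeint;

typedef std::vector<double> state_type;

// dx/dt = -x  (simple exponential decay)
void decay(const state_type &x, state_type &dxdt, double /*t*/)
{
    dxdt[0] = -x[0];
}

int main()
{
    state_type x(1, 10.0);                 // start at x = 10
    integrate(decay, x, 0.0, 10.0, 0.1);   // integrate from t = 0 to t = 10, initial dt = 0.1
    std::cout << x[0] << std::endl;        // final value of x
    return 0;
}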
It is also possible to include only parts of the library. This is the recommended way since it saves a lot of compilation time.
#include <boost/numeric/odeint/stepper/XYZ.hpp> | http://www.boost.org/doc/libs/1_57_0/libs/numeric/odeint/doc/html/boost_numeric_odeint/getting_started/usage__compilation__headers.html | CC-MAIN-2014-52 | en | refinedweb |
I have recently run onto this article by Ivan Shcherbakov called 10+ powerful debugging tricks with Visual Studio. Though the article presents some rather basic tips of debugging with Visual Studio, there are others at least as helpful as those. Therefore I put together a list of ten more debugging tips for native development that work with at least Visual Studio 2008. (If you work with managed code, the debugger has even more features and there are several articles on CodeProject that present them.) Here is my list of additional tips:
For more debugging tips check the second article in the series, 10 Even More Visual Studio Debugging Tips for Native Development.
It is possible to instruct the debugger to break when an exception occurs, before a handler is invoked. That allows you to debug your application immediately after the exception occurs. Navigating the Call Stack should allow you to figure the root cause of the exception.
Visual Studio allows you to specify what category or particular exception you want to break on. A dialog is available from Debug > Exceptions menu. You can specify native (or managed) exceptions and aside from the default exceptions known to the debugger, you can add your custom exceptions.
Here is an example with the debugger breaking when a std::exception is thrown.

Sometimes you want to keep watching an object after the variable used to access it goes out of scope. When that happens, the variable in the Watch window is disabled and cannot be inspected any more (nor updated) even if the object is still alive and well.
It is possible to continue to watch it in full capability if you know the address of the object. You can then cast the address to a pointer of the object type and put that in the Watch window.
In the example bellow, _foo is no longer accessible in the Watch window after stepping out of do_foo(). However, taking its address and casting it to foo* we can still watch the object.
If you work with large arrays (let say at least some hundred elements, but maybe even less) expanding the array in the Watch window and looking for some particular range of elements is cumbersome, because you have to scroll a lot.
And if the array is allocated on the heap you can't even expand its elements in the Watch window.
There is a solution for that. You can use the syntax (array + <offset>), <count> to watch a particular range of <count> elements starting at the <offset> position (of course, array here is your actual object).
If you want to watch the entire array, you can simply say array, <count>.
If your array is on the heap, then you can expand it in the Watch window, but to watch a particular range you'd have to use a slightly different the syntax: ((T*)array + <offset>), <count> (notice this syntax also works with arrays on the heap). In this case T is the type of the array's elements.
If you work with MFC and use the "array" containers from it, like CArray, CDWordArray, CStringArray, etc., you can of course apply the same filtering, except that you must watch the m_pData member of the array, which is the actual buffer holding the data.
Many times when you debug the code you probably step into functions you would like to step over, whether it's constructors, assignment operators or others. One of those that used to bother me the most was the CString constructor.
Here is an example when stepping into take_a_string() function first steps into CString's constructor.
void take_a_string(CString const &text)
{
}
void test_string()
{
take_a_string(_T("sample"));
}
Luckily it is possible to tell the debugger to step over some methods, classes or entire namespaces.
The way this was implemented has changed. Back in the days of VS 6 this used to be specified through the autoexp.dat file.
Since Visual Studio 2002 this was changed to Registry settings. To enable stepping over functions you need to add some values in Registry (you can find all the details here):
To skip stepping into any CString method I have added the following rule:
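A rule of roughly the following shape does the job (illustrative only; the exact pattern depends on the CString implementation in use):

ATL\:\:CStringT.*=NoStepInto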
Having this enabled, even when you press to step into take_a_string() in the above example the debugger skips the CString's constructor.
Seldom you might need to attach with the debugger to a program, but you cannot do it with the Attach window (maybe because the break would occur too fast to catch by attaching), nor you can start the program in debugger in the first place. You can cause a break of the program and give the debugger a chance to attach by calling the __debugbreak() intrinsic.
void break_for_debugging()
{
__debugbreak();
}
There are actually other ways to do this, such as triggering interrupt 3 directly, but this only works on x86 platforms (inline ASM is no longer supported for x64 in C++):

__asm int 3;

There is also a DebugBreak() function, but it is not portable, so the intrinsic is the recommended method.
When your program executes the intrinsic it stops, and you get a chance to attach a debugger to the process.
Additional readings:
It is possible to show a particular text in the debugger's Output window by calling OutputDebugString. If there is no debugger attached, the function does nothing.
Memory leaks are an important problem in native development and finding them could be a serious challenging especially in large projects. Visual Studio provides reports about detected memory leaks and there are other applications (free or commercial) to help you with that. In some situations though, it is possible to use the debugger to break when an allocation that eventually leaks is done. To do this however, you must find a reproducible allocation number (which might not be that easy though). If you are able to do that, then the debugger can break the moment that is performed.
Let's consider this code that allocates 8 bytes, but never releases the allocated memory. Visual Studio displays a report of the leaked objects, and running this several times I could see it's always the same allocation number (341).
void leak_some_memory()
{
char* buffer = new char[8];
}
Dumping objects ->
d:\marius\vc++\debuggingdemos\debuggingdemos.cpp(103) : {341} normal block at 0x00F71F38, 8 bytes long.
Data: < > CD CD CD CD CD CD CD CD
Object dump complete.
The steps for breaking on a particular (reproducible) allocation are: run the program under the debugger and break early (for instance at the first line of main()), set the CRT variable _crtBreakAlloc to the allocation number (either by adding it to the Watch window, e.g. {,,msvcr90d.dll}_crtBreakAlloc for a DLL CRT build, and editing its value, or by calling _CrtSetBreakAlloc(341) at the start of the program), and then continue execution; the debugger breaks at the moment that allocation is performed.
Following these steps for my example with allocation number 341 I was able to identify the source of the leak:
Debug and Release builds are meant for different purposes. While a Debug configuration is used for development, a Release configuration, as the name implies should be used for the final version of a program. Since it's supposed that the application meets the required quality to be published, such a configuration contains optimizations and settings that break the debugging experience of a Debug build. Still, sometimes you'd like to be able to debug the Release build the same way you debug the Debug build. To do that, you need to perform some changes in the configuration.
However, in this case one could argue you no longer debug the Release build, but rather a mixture of the Debug and the Release builds.
There are several things you should do; the mandatory ones are:
Another important debugging experience is remote debugging. This is a larger topic, covered many times, so I just want to summarize a bit.
Remote Debugging Monitor downloads:
The debugging tips presented in this article and the original article that inspired this one should provide the necessary tips for most of the debugging experiences and problems. To get more information about these tips I suggest following the additional readings.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
if(error_condition_met)
{
    std::stringstream buffer;
    buffer << __FILE__ << "(" << __LINE__ << "): Something went wrong\n";
    ::OutputDebugString(buffer.str().c_str());
}

C:\bigdavedev\my_test_project\main.cpp(56): Something went wrong

boost\:\:shared_ptr\<.*=NoStepInto
 | http://www.codeproject.com/Articles/469416/10-More-Visual-Studio-Debugging-Tips-for-Native-De?msg=4386952 | CC-MAIN-2014-52 | en | refinedweb |
DirectoryNotFoundException Class
The exception that is thrown when part of a file or directory cannot be found.
For a list of all members of this type, see DirectoryNotFoundException Members.
System.Object
System.Exception
System.SystemException
System.IO.IOException
System.IO.DirectoryNotFoundException
[Visual Basic]
<Serializable>
Public Class DirectoryNotFoundException
   Inherits IOException

[C#]
[Serializable]
public class DirectoryNotFoundException : IOException

[C++]
[Serializable]
public __gc class DirectoryNotFoundException : public IOException

[JScript]
public Serializable class DirectoryNotFoundException extends IOException
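A minimal C# sketch of where this exception typically shows up (not from the original reference page; the file path is made up):

using System;
using System.IO;

class Example
{
    static void Main()
    {
        try
        {
            // Attempt to open a file in a directory that may not exist.
            using (StreamReader reader = new StreamReader(@"C:\missing\folder\data.txt"))
            {
                Console.WriteLine(reader.ReadLine());
            }
        }
        catch (DirectoryNotFoundException e)
        {
            Console.WriteLine("Directory not found: " + e.Message);
        }
    }
}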
DirectoryNotFoundException Members | System.IO Namespace | Exception | Handling and Throwing Exceptions | Working with I/O | Reading Text from a File | Writing Text to a File | http://msdn.microsoft.com/en-us/library/System.IO.DirectoryNotFoundException(v=vs.71).aspx | CC-MAIN-2014-52 | en | refinedweb |
JSP - Expression Language (EL)
JSP Expression Language (EL) makes it possible to easily access application data stored in JavaBeans components. JSP EL allows you to create expressions both (a) arithmetic and (b) logical. Within a JSP EL expression, you can use integers, floating point numbers, strings, the built-in constants true and false for boolean values, and null.
Simple Syntax:
Typically, when you specify an attribute value in a JSP tag, you simply use a string. For example:
<jsp:setProperty name="box" property="perimeter" value="100"/>
JSP EL allows you to specify an expression for any of these attribute values. A simple syntax for JSP EL is as follows:
${expr}
Here expr specifies the expression itself. The most common operators in JSP EL are . and []. These two operators allow you to access various attributes of Java Beans and built-in JSP objects.
For example, the above <jsp:setProperty> tag can be written with an expression like:

<jsp:setProperty name="box" property="perimeter" value="${2*box.width + 2*box.height}"/>

When the JSP compiler sees the ${} form in an attribute, it generates code to evaluate the expression and substitutes the value of the expression.
You can also use JSP EL expressions within template text for a tag. For example, the <jsp:text> tag simply inserts its content within the body of a JSP. The following <jsp:text> declaration inserts <h1>Hello JSP!</h1> into the JSP output:
<jsp:text> <h1>Hello JSP!</h1> </jsp:text>
You can include a JSP EL expression in the body of a <jsp:text> tag (or any other tag) with the same ${} syntax you use for attributes. For example:
<jsp:text> Box Perimeter is: ${2*box.width + 2*box.height} </jsp:text>
EL expressions can use parentheses to group subexpressions. For example, ${(1 + 2) * 3} equals 9, but ${1 + (2 * 3)} equals 7.
To deactivate the evaluation of EL expressions, we specify the isELIgnored attribute of the page directive as below:
<%@ page isELIgnored ="true|false" %>
The valid values of this attribute are true and false. If it is true, EL expressions are ignored when they appear in static text or tag attributes. If it is false, EL expressions are evaluated by the container.
Basic Operators in EL:
JSP Expression Language (EL) supports most of the arithmetic and logical operators supported by Java. The most frequently used are the . and [] operators for accessing properties and collection elements, the arithmetic operators + - * / (div) and % (mod), the relational operators == (eq), != (ne), < (lt), > (gt), <= (le) and >= (ge), the logical operators && (and), || (or) and ! (not), and the empty operator for testing for null or empty values.
Functions in JSP EL :
JSP EL allows you to use functions in expressions as well. These functions must be defined in custom tag libraries. A function usage has the following syntax:
${ns:func(param1, param2, ...)}
Where ns is the namespace of the function, func is the name of the function, and param1 is the first parameter value. For example, the function fn:length, which is part of the JSTL library, can be used as follows to get the length of a string.
${fn:length("Get my length")}
To use a function from any tag library (standard or custom), you must install that library on your server and must include the library in your JSP using <taglib> directive as explained in JSTL chapter.
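For example, the JSTL functions library used above is typically declared like this:

<%@ taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions" %>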
JSP EL Implicit Objects:
The JSP expression language supports the following implicit objects: pageScope, requestScope, sessionScope, applicationScope, param, paramValues, header, headerValues, cookie, initParam, and pageContext.
You can use these objects in an expression as if they were variables. Here are a few examples which should make the concept clear:
The pageContext Object:
The pageContext object gives you access to the pageContext JSP object. Through the pageContext object, you can access the request object. For example, to access the incoming query string for a request, you can use the expression:
${pageContext.request.queryString}
The Scope Objects:
The pageScope, requestScope, sessionScope, and applicationScope variables provide access to variables stored at each scope level.
For example, If you need to explicitly access the box variable in the application scope, you can access it through the applicationScope variable as applicationScope.box.
The param and paramValues Objects:
The param and paramValues objects give you access to the parameter values normally available through the request.getParameter and request.getParameterValues methods.
For example, to access a parameter named order, use the expression ${param.order} or ${param["order"]}.
Following is the example to access a request parameter named username:
<%@ page ... %>
<html>
<body>
<div align="center">
<p>${param["username"]}</p>
</div>
</body>
</html>
The param object returns single string values, whereas the paramValues object returns string arrays.
header and headerValues Objects:
The header and headerValues objects give you access to the header values normally available through the request.getHeader and request.getHeaders methods.
For example, to access a header named user-agent, use the expression ${header.user-agent} or ${header["user-agent"]}.
Following is the example to access a header parameter named user-agent:
<%@ page ... %>
<html>
<body>
<div align="center">
<p>${header["user-agent"]}</p>
</div>
</body>
</html>

This would display the user-agent string sent by the browser.
The header object returns single string values, whereas the headerValues object returns string arrays. | http://www.tutorialspoint.com/cgi-bin/printversion.cgi?tutorial=jsp&file=jsp_expression_language.htm | CC-MAIN-2014-52 | en | refinedweb |
Movable Python: The Portable Python Distribution
Movable Python has lots of command line options. Most of these can be
configured through the GUI. Several of the command line
options are the same as (or mimic) the Python command line options.
This page documents the different options, and ends with a discussion of some
of the Python command line options that aren't supported by Movable Python.
If you type movpy -h at the command line, you get the following help
usage: movpy [option] ... [-c cmd | file | -] [arg] ...
Options and arguments :
-c cmd : program passed in as string (terminates option list)
-f : Change working directory to the script directory
-h : print this help message and exit
-i : inspect interactively after running script,
uses InteractiveConsole or IPython - probably needs a real stdin
-m mod : run library module as a script (terminates option list)
-u : unbuffered stdout and stderr
-V : print the movpy version and Python version number and exit
-x : skip first line of source, allowing use of non-Unix forms of #!cmd
-IPOFF : Don't use IPython, even if it is available, default to InteractiveConsole
-p : Attempt pysco.full() (incompatible with IPython interactive shell)
-b : Pause for <enter> after terminating script
-o : override saved options, only use comand line options
-koff : A command line option for the GUI, force no console
-k : A command line option for the GUI, force a console
-la file : Log output to file, open append in write mode
-lw file : Log output to file, open logfile in write mode
file : program read from script file
- : drop straight into InteractiveConsole or IPython
-pylab : Launch IPython in pylab mode
--config dir : Directory for config files (~ will be expanded)
arg ...: arguments passed to program in sys.argv[1:]
Run movpy without a file argument to bring up a lightweight GUI to choose a script
('-c', '-', '-V' or '-h' options override this behaviour)
Python environment variables are ignored.
As you can see, this rather terse text documents all the command line options.
If you need more of an explanation, read the rest of this page...
Some of the command line options are the same as the normal Python command line
options. (Type python -h at the command line to see what I mean.) This
means that programs which call sys.executable, followed by some command
line options will still usually work. IDLE does this for example.
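For instance, a tool that spawns a child interpreter in the usual way picks up the movpy executable automatically (a minimal sketch, not from the original docs):

import subprocess
import sys

# Launch a child interpreter using whatever executable is running this script.
# Under Movable Python, sys.executable points at movpy.exe rather than python.exe.
subprocess.call([sys.executable, "-c", "print('hello from the child process')"])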
The following options are intended to have the same effect (more-or-less) as
when used with normal Python.
For a discussion of Python command line options that are unsupported see
Unsupported Python Command Line Options.
This section lists all of the Movable Python command line options and
explains what they do. They are basically all accesible through the
GUI, but you will still need to know what they do.
If you use these options at the command line, the letters must be prefixed with
a -. For example :
movpy -p -f -i filename
This runs the Python programme filename, in its directory, with psyco on,
and switches to interactive mode after running.
This option allows you to pass a small program as a string. It is executed and
then Movable Python terminates. IDLE uses this option to launch its
subprocess.
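For example (illustrative):

movpy -c "import sys; print(sys.version)"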
Run the program in its directory. A lot of scripts expect the current
directory to be the directory in which they are located.
Prints the help message.
Enter the interactive interpreter mode after running the program. This allows
you to inspect the objects used by the program.
If IPOFF and p are both off, then the interpreter used will be
IPython.
The b option is ignored if i is set.
A console box is always used if you specify i.
Run a module as a script. The module must be somewhere on sys.path.
Run with unbuffered stdout and stderr. This is simulated on the Python
level, and so doesn't apply to the underlying C stdout and stderr.
Print the version of Movable Python and the version of Python.
Skip the first line of the script.
If IPOFF is set then IPython isn't used as the interactive
interpreter.
Switch psyco the specializing compiler on. (Do a psyco.full()). This
accelerates most Python programs.
It is not compatible with IPython, so using p with i will switch
IPython off.
Pause for <enter> after running the script. This is useful if the script is a
command line script that terminates immediately after running. Selecting b
allows you to see the results before the console window vanishes.
Override config.txt. You can save default options in the Special File
config.txt. (Also through the GUI.) o allows you to run a program
without using the default options.
If the GUI is launched with the die option, it will exit immediately it
runs a program.
If koff is selected, then movpy.exe will not launch programs with a
console box.
If k is selected, then movpyw.exe will launch programs with a console
box.
- forces Movable Python straight into interactive interpreter mode.
Whether IPython is used depends on the IPOFF option.
All Movpy command line options other than IPOFF (and o) are ignored when
going straight into interactive interpreter mode. (And a console box is forced
even if you didn't specify one !)
Additional command line options after the - are passed to IPython.
Note
The option pylab is special cased, see below.
Additionally, the option libdir is set by Movable Python, other
than that you can customize the behaviour of IPython by passing in command
line arguments.
Movable Python has built in support for matplotlib.
If you have the matplotlib files
in your lib/ directory you can run the following at the command line :
movpy.exe - -pylab
This should drop you straight into a IPython session, with pylab enabled.
Because of a limitation in IPython, the pylab session will run in its own
namespace. This means that although customize.py will run, you won't have
access to the namespace it ran in. You can still import movpy of course.
Log the output of the file, in append mode.
Log the output of the file, in write mode.
The config option allows you to specify a directory for config files. ~ will be expanded to the users home directory :
movpy --config ~/movpy
Movable Python supports most of the Python Command Line Options. There
are still a few it doesn't support. Some of the currently unsupported ones it
would be possible to implement, and others it would be very hard.
If Movable Python is launched with an unsupported command line option, then
a warning is printed to sys.stderr, and the option is ignored.
Currently unsupported ones are :
-d : debug output from parser
-E : ignore environment variables
Because we don't have access to the Python environment variables, we are
effectively running in E mode already.
-O and -OO : optimize generated bytecode
Movable Python is created using py2exe. You
specify the level of optimisation at the point that you create the
application.
To support these command line options would probably involve a lot of work
hacking py2exe.
-Q arg : division options: -Qold (default), -Qwarn, -Qwarnall, -Qnew
-S : don't imply 'import site' on initialization
We don't import site - but we do run customize.py. We could allow this
to be disabled.
-t and -tt : issue warnings about inconsistent tab usage (-tt:
issue errors)
-v : verbose (trace import statements)
-W arg : warning control (arg is action:message:category:module:lineno)
If you have a particular need for any of these options to be implemented, then
make a case for it on the Mailing List.
It may be possible.
Python allows its behaviour to be affected by several environment variables.
In order to isolate Movable Python [1] from the current Python install
(which may be a different version of Python), these environment variables
aren't accessible. It would be possible to create an alternative set of
environment variables specific to Movable Python. As Movable Python is
designed to be a portable distribution, it doesn't seem worthwhile. (In other
words, no-one would use them.)
The environment variables supported by a normal Python installation are :
PYTHONVERBOSE
Has the same effect as the -v command line option.
PYTHONINSPECT
Has the same effect as the -i command line option.
PYTHONDEBUG
Has the same effect as the -d command line option.
PYTHONUNBUFFERED
Has the same effect as the -u command line option.
PYTHONOPTIMIZE
Has the same effect as the -O and -OO command line options.
PYTHONSTARTUP
File executed on interactive startup (no default)
PYTHONPATH [2]
';'-separated list of directories prefixed to the default module search
path. The result is sys.path.
PYTHONHOME
Alternate <prefix> directory (or <prefix>;<exec_prefix>). The default
module search path uses <prefix>lib.
PYTHONCASEOK
Ignore case in 'import' statements (Windows).
It would be possible to implement options or environment variables to support
some of these, if there was a particular need. Again, make your case on the
Mailing List.
Part of the Movable Python Docs
Page last modified Wed Jul 16 02:31:30 2008.
Download Movable Python | http://www.voidspace.org.uk/python/movpy/reference/command_line.html | CC-MAIN-2014-52 | en | refinedweb |
EventLog.DeleteEventSource Method (String, String)
Assembly: System (in system.dll)
Parameters
- source
The name by which the application is registered in the event log system.
- machineName
The name of the computer to remove the registration from, or "." for the local computer.
Use this overload to remove the registration of a Source from a remote computer. DeleteEventSource accesses the registry on the computer specified by machineName and removes the registration of your application as a valid source of events.
You can remove your component as a valid source of events if you no longer need it to write entries to that log. For example, you might do this if you need to change your component from one log to another. Because a source can only be registered to one log at a time, changing the log requires you to remove the current registration.
DeleteEventSource removes only the source registered to a log. If you want to remove the log itself, call Delete. If you only want to delete the log entries, call Clear. Delete and DeleteEventSource are static methods, so they can be called on the class itself. It is not necessary to create an instance of EventLog to call either method.
Deleting a log through a call to Delete automatically deletes the sources registered to that log. This can make other applications using that log inoperative.
The following example deletes a source from the specified computer. The example determines the log from its source, and then deletes the log.
using System;
using System.Diagnostics;
using System.Threading;

class MySample
{
    public static void Main()
    {
        // ...
    }
}

import System.*;
import System.Diagnostics.*;
import System.Threading.*;

class MySample
{
    public static void main(String[] args)
    {
        // ...
    } //main
} //MySample
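A minimal sketch of what such an example does (assuming a source named "MySource" on the local computer; not the original listing):

using System;
using System.Diagnostics;

class DeleteSourceSample
{
    public static void Main()
    {
        string machine = ".";          // "." means the local computer
        string source  = "MySource";   // assumed source name

        if (EventLog.SourceExists(source, machine))
        {
            // Determine which log the source is registered to,
            // remove the source, and then delete that log.
            string logName = EventLog.LogNameFromSourceName(source, machine);
            EventLog.DeleteEventSource(source, machine);
            EventLog.Delete(logName, machine);
        }
    }
}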
- EventLogPermission for administering event log information on the computer. Associated enumeration: EventLogPermissionAccess.Administer. | http://msdn.microsoft.com/en-US/library/e3b4zfb5(v=vs.80).aspx | CC-MAIN-2014-52 | en | refinedweb |
Book excerpt: Creating graphical output using the .NET Compact Framework
This chapter shows how to take advantage of the graphics capabilities of the .NET Compact Framework, providing opportunities to build far richer user interface client-tier applications than any traditional HTML-based application could ever do.
.NET Compact Framework Programming with C# shows developers how to use their existing skills and code bases to create applications for the Pocket PC 2003 and other mobile devices. Authors Paul Yao and David Durant cover topics such as the differences between the standard .NET Framework and the .NET Compact Framework, programming with ADO.NET data classes, data binding with the DataGrid controls, and using the WinForms Designer to build custom controls.
Chapter 15, .NET Compact Framework Graphics, first shows C# developers how to leverage the 85 or so graphical functions in Windows CE and the six namespaces in the .NET Compact Framework that support those graphical output classes. Then it demonstrates how to draw text, raster and vector graphics on the display screen. The chapter ends with instructions for building a sample application.
Read the excerpt in this PDF file.
Excerpted from .NET Compact Framework Programming with C# (ISBN: 0321174038) by Paul Yao and David Durant. Published as part of the Microsoft .NET Development Series.
Copyright © 2004. Published by Addison-Wesley Professional, and available at your favorite book seller. Reprinted with permission. | http://searchwindevelopment.techtarget.com/tip/Book-excerpt-Creating-graphical-output-using-the-NET-Compact-Framework | CC-MAIN-2014-52 | en | refinedweb |
I am having a very difficult time decrypting the string in Perl. I am not sure if the problem is with the Java, the Perl, or both. So I am presenting both programs below.
For the Java encryption, the IV is set to all 64-bit zero and the padding is PKCS5Padding. Of course, I tried to match this on the Perl side. C::CBC defaults to PKCS5Padding and I set the IV to "\0\0\0\0\0\0\0\0".
The key is 64 random hexadecimal characters (0-9 and A-F). I think the handling of the key in the Java code could be the cause of the problem, but I am not sure. The Java program prints the encrypted string to the screen, I then copy and paste it to the Perl script, so I can try to decrypt it. (That is how I am testing it.) Here is my Perl code:
my $key = "8326554161EB30EFBC6BF34CC3C832E7CF8135C1999603D4022C031FAEED5C40";
my $vector = "\0\0\0\0\0\0\0\0";
my $cipher = Crypt::CBC->new({
'key' => $key,
'iv' => $vector,
'prepend_iv' => 0,
'cipher' => 'Blowfish',
});
my $plaintext = $cipher->decrypt($encrypted);
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import javax.crypto.spec.IvParameterSpec;
import java.security.Key;
public class CryptoMain {
public static void main(String[] args) throws Exception {
String mode = "Blowfish/CBC/PKCS5Padding";
String algorithm = "Blowfish";
String secretStr = "8326554161EB30EFBC6BF34CC3C832E7CF8135C1999603D4022C031FAEED5C40";
byte secret[] = fromString(secretStr);
Cipher encCipher = null;
Cipher decCipher = null;
byte[] encoded = null;
byte[] decoded = null;
encCipher = Cipher.getInstance(mode);
decCipher = Cipher.getInstance(mode);
Key key = new SecretKeySpec(secret, algorithm);
byte[] ivBytes = new byte[] { 00, 00, 00, 00, 00, 00, 00, 00 };
IvParameterSpec iv = new IvParameterSpec(ivBytes);
encCipher.init(Cipher.ENCRYPT_MODE, key, iv);
decCipher.init(Cipher.DECRYPT_MODE, key, iv);
encoded = encCipher.doFinal(new byte[] {1, 2, 3, 4, 5});
// THIS IS THE ENCODED STRING I USE IN THE PERL SCRIPT
System.out.println("encoded: " + toString(encoded));
decoded = decCipher.doFinal(encoded);
System.out.println("decoded: " + toString(decoded));
encoded = encCipher.doFinal(new byte[] {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20});
System.out.println("encoded: " + toString(encoded));
decoded = decCipher.doFinal(encoded);
System.out.println("decoded: " + toString(decoded));
}
///////// some hex utilities below....
private static final char[] hexDigits = {
'0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F'
};
/**
* Returns a string of hexadecimal digits from a byte array. Each
* byte is converted to 2 hex symbols.
* <p>
* If offset and length are omitted, the complete array is used.
*/
public static String toString(byte[] ba, int offset, int length) {
char[] buf = new char[length * 2];
int j = 0;
int k;
for (int i = offset; i < offset + length; i++) {
k = ba[i];
buf[j++] = hexDigits[(k >>> 4) & 0x0F];
buf[j++] = hexDigits[ k & 0x0F];
}
return new String(buf);
}
public static String toString(byte[] ba) {
return toString(ba, 0, ba.length);
}
/**
* Returns the number from 0 to 15 corresponding to the hex digit ch.
*/
private static int fromDigit(char ch) {
if (ch >= '0' && ch <= '9')
return ch - '0';
if (ch >= 'A' && ch <= 'F')
return ch - 'A' + 10;
if (ch >= 'a' && ch <= 'f')
return ch - 'a' + 10;
throw new IllegalArgumentException("invalid hex digit '" + ch + "'");
    }
}
--roboticus
byte[] b = (new BigInteger("98a7ba97b8a",16)).toByteArray()
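One likely mismatch worth checking (a sketch, untested): by default Crypt::CBC treats the key as a passphrase and derives the real key from it (adding a "Salted__" header), while the Java code uses the raw hex bytes directly. Passing the literal key bytes and suppressing the header would look something like this ($encrypted_hex is assumed to hold the hex string printed by the Java program):

use Crypt::CBC;

my $hexkey = "8326554161EB30EFBC6BF34CC3C832E7CF8135C1999603D4022C031FAEED5C40";

my $cipher = Crypt::CBC->new(
    -cipher      => 'Blowfish',
    -key         => pack( 'H*', $hexkey ),  # raw key bytes, not a passphrase
    -literal_key => 1,
    -header      => 'none',                 # no "Salted__" header
    -iv          => "\0" x 8,
    -padding     => 'standard',             # PKCS#5-style padding
);

my $plaintext = $cipher->decrypt( pack( 'H*', $encrypted_hex ) );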
 | http://www.perlmonks.org/?node_id=551481 | CC-MAIN-2014-52 | en | refinedweb |
The nsid (namespace identifier) type creates a context in which namespace identifiers can be bound.
creates the nsid context for the site alameda.bldg-5 and permits the creation of subcontexts such as service/. Continuing with this example, you could then execute the command
to create the service context for alameda.bldg-5.
The nsid context created is owned by the administrator who ran the fncreate command. | http://docs.oracle.com/cd/E19455-01/806-1387/af2ctxel-15749/index.html | CC-MAIN-2014-52 | en | refinedweb |
I often code user interfaces that have some sort of cancel button on them. For example, in my upcoming ‘Decoupling Workflow’ presentation, I have the following screen:
Notice the nice cancel button on the form. The trick to this situation is that I need to have my workflow code understand whether or not I clicked Next or clicked Cancel. Depending on the button that was clicked, I need to do something different in the workflow. If I click cancel, throw away all of the data that was entered on the form. If I click next, though, I need to store all of the data and continue on to the next screen.
The Result<T> and ServiceResult
In the past, I’ve handled these types of buttons in many, many different ways. I’ve returned null from the form, I’ve checked the DialogResult of the form, I’ve done out parameters for methods, and I’ve done specific properties on the form or the form’s presenter to tell me the status vs the data. Recently, though, I’ve begun to settle into a nice little Result<T> class that does two things for me:
- Provides a result status – for example, a ServiceResult enum with Ok and Cancel as the two options
- Provides a data object (the <T> generic in Result<T>) for the values I need, if I need them
Here is the code for my ServiceResult and my Result<T> object.
public enum ServiceResult
{
Ok = 0,
Cancel = 1
}
public class Result<T>
{
public ServiceResult ServiceResult { get; private set; }
public T Data { get; private set; }
public Result(ServiceResult serviceResult): this(serviceResult, default(T)){}
public Result(ServiceResult serviceResult, T data)
{
ServiceResult = serviceResult;
Data = data;
}
}
Putting Result<T> To Work
With this simple little solution, I can create very concise and clear workflow objects that know how to handle the cancel button versus the next button. The code becomes easier to read and understand, and makes the real workflow that much easier to see. The workflow code that runs the “Add New Employee” process for the screen shot above, is this:
public void Run()
{
Result<EmployeeInfo> result = GetNewEmployeeInfo.Get();
if (result.ServiceResult == ServiceResult.Ok)
{
EmployeeInfo info = result.Data;
Employee employee = new Employee(info.FirstName, info.LastName, info.Email);
Employee manager = GetEmployeeManager.GetManagerFor(employee);
manager.Employees.Add(employee);
}
}
Notice the use of Result<EmployeeInfo> in this code. I'm checking to see if the result.ServiceResult is Ok before moving on to the use of the data. The GetNewEmployeeInfo class returns a Result<EmployeeInfo> object from the .Get() method. The EmployeeInfo object contains the first name, last name, and email address of the employee as simple string values (and in the "real world", the EmployeeInfo object would probably contain the input validation for these).
Because Result<T> is a generics class and returns <T> from the .Data property, I can specify any data value that I need and have it returned from the presenter in question. This is where the real flexibility of the Result<T> object comes into play. When I have verified that the user clicked OK, via the result.ServiceResult property, I can then grab the real EmployeeInfo object out of the result.Data property, which is strongly typed to my EmployeeInfo class. Once I have this data in hand, I can do what I need with it and move on to the next step if there are any.
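To round out the picture, a presenter that feeds this workflow might package its outcome along these lines (an illustrative sketch only; the control names, the userClickedCancel flag, and the EmployeeInfo constructor are assumptions, not code from the post):

public Result<EmployeeInfo> Get()
{
    // If the user hit Cancel, return only the status - no data needed.
    if (userClickedCancel)
        return new Result<EmployeeInfo>(ServiceResult.Cancel);

    // Otherwise package up the form values and return Ok with the data.
    EmployeeInfo info = new EmployeeInfo(
        firstNameTextBox.Text,
        lastNameTextBox.Text,
        emailTextBox.Text);

    return new Result<EmployeeInfo>(ServiceResult.Ok, info);
}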
Conclusion
Having tried many different approaches to workflow code, I’m fairly well settled into this pattern right now. That doesn’t mean it won’t evolve, though. The basic implementation would cover most of what I need right now, but could easily be extended to include different “status” values instead of just the ServiceResults of OK and Cancel. Overall, though, this simple Result<T> class is saving me a lot of headache and heartache trying to figure out what to return from a method so that a workflow can figure out if the user is continuing, cancelling, or whatever.
 | http://lostechies.com/derickbailey/2009/05/19/result-lt-t-gt-directing-workflow-with-a-return-status-and-value/ | CC-MAIN-2014-52 | en | refinedweb |
Can't add my own method to my Analyzer subclass - Metaclass behaviour?
- Richard O'Regan last edited by Richard O'Regan
Hey guys,
So I'm working on another Analyzer to calculate basic statistics. It works great, just when I move some of the code out of notify_trade() to a new method I defined called 'calculate_statistics'
def calculate_statistics(self):
    # Stats for winning trades..
    self.rets.won.closedTrades = np.size(self._won_pnl_list)
    ...
When I call
self.calculate_statistics()
I get an error
AttributeError: 'BasicTradeStats' object has no attribute 'calculate_statistics'
I've checked obvious things, tried defining other functions e.g.
def hello(self): print('Hello')
in other Analyzer code, like sqn.py etc., and I get the same error when trying to call them.
I don't know about metaclasses and assumed there is some restriction deliberately in place? Or am I just missing a trick somewhere?!
Thanks
- backtrader administrators last edited by
Adding methods to a subclass of analyzer is of course possible.
The problem here is that understanding and diagnosing your problem is close to impossible. See:
- Title: "... metaclass behavior" but there is nothing about metaclasses in the message
- There is no indication as to where def calculate_statistics(self) is actually being declared and where it is being called
- There is no hint as to how the subclass is actually being created
- Richard O'Regan last edited by Richard O'Regan
Hey Backtrader, back to coding after partying all weekend.
I didn't put my whole code up because it was in prototype mode and messy - though in hindsight I didn't give nearly enough information. Apologies for my lack of clarity.
I loaded up everything just now, and it all works fine?? I think it must have been something to do with reloading modules and the Atom editor Beta I'm using..
I subclassed Analyzer (which uses metaclasses), and I honestly could not use any functions I defined, only 'notify_trade' etc. I figured there must be some restriction with metaclasses (which I have never used before) but I couldn't find anything on Google, so was confused..
All sorted now - funny how restarting everything can often fix the problem. Hopefully will have a new useful Analyzer up later today/tomorrow. Stay tuned
Thank-you
- backtrader administrators last edited by
Although metaclasses can be used to enforce restrictions, it's not the usage pattern in backtrader. They are in place to make things lighter (in the humble opinion of the author) both for the internals and the externals
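For example, something along these lines works as expected (a minimal sketch; the statistic being computed is made up):

import backtrader as bt


class BasicTradeStats(bt.Analyzer):
    """Sketch of an Analyzer subclass with its own helper method."""

    def create_analysis(self):
        self.rets = {}
        self._won_pnl_list = []

    def notify_trade(self, trade):
        if trade.isclosed and trade.pnlcomm > 0:
            self._won_pnl_list.append(trade.pnlcomm)
        self.calculate_statistics()  # plain user-defined method, no restriction

    def calculate_statistics(self):
        self.rets['wonClosedTrades'] = len(self._won_pnl_list)

    def get_analysis(self):
        return self.rets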
- Richard O'Regan last edited by
@backtrader ah ok cool. All noted, thank-you | https://community.backtrader.com/topic/631/can-t-add-my-own-method-to-my-analyzer-subclass-metaclass-behaviour/1 | CC-MAIN-2021-25 | en | refinedweb |
NetBeans File Template Module Tutorial
This tutorial demonstrates how to create a NetBeans module that provides one or more file templates.
Introduction to FreeMarker
Since NetBeans IDE 6.0, you have been able to define file templates using the FreeMarker template engine.
Creating the Module Project
We begin by going through the New Module Project wizard, which will create a source structure, with all the minimum requirements, for our new module.
Choose File > New Project (Ctrl+Shift+N). Under Categories, select NetBeans Modules. Under Projects, select Module. Click Next.
In the Name and Location panel, type AdditionalFileTemplates in the Project Name field. Change the Project Location to any directory on your computer. Click Next.
In the Basic Module Configuration panel, type org.myorg.additionalfiletemplates in Code Name Base. Click Finish.
The IDE creates the AdditionalFileTemplates project.
Creating the File Template.
Creating the Template File
The template file defines the content that the template will generate for the user.
Right-click the AdditionalFileTemplates node and choose New > Other. In the New File wizard, under Categories, choose Other and under File Types, choose HTML. Click Next.
Type HTML in File Name. Click Browse and browse to src/org/myorg/additionalfiletemplates. Click Select Folder. Click Finish. A new HTML file, named HTML.html, opens in the Source Editor, containing the standard HTML file's content shown below:
<!DOCTYPE html>
<html>
    <head>
        <title></title>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    </head>
    <body>
        TODO write content
    </body>
</html>
Change the HTML file according to your needs. You can add the following predefined variables, if needed:
${date} inserts the current date, in this format: Feb 16, 2008
${encoding} inserts the default encoding, such as: UTF-8
${name} inserts the name of the file.
${nameAndExt} inserts the name of the file, together with its extension.
${package} inserts the name of the package where the file is created.
${time} inserts the current time, in this format: 7:37:58 PM
${user} inserts the user name.
In addition to the predefined variables, you can provide additional variables to your users, via your module. This is explained later in this tutorial. The full list of FreeMarker directives can also be used to add logic to the template:
#assign
#else
#elseif
#end
#foreach
#if
#include
#list
#macro
#parse
#set
#stop
As an example, look at the definition of the Java class template:
<#assign licenseFirst = "/*">
<#assign licensePrefix = " * ">
<#assign licenseLast = " */">
<#include "../Licenses/license-${project.license}.txt">
<#if package?? && package != "">
package ${package};
</#if>

/**
 *
 * @author ${user}
 */
public class ${name} {

}
For information on the #assign directive, see Providing a Project License. For a full description of FreeMarker template language, see the FreeMarker Manual, in particular, the Directives chapter.
Creating the Description File
The description file is an HTML file displayed in the New File dialog for the template.
Right-click the org.myorg.additionalfiletemplates node and choose New > Other. Under Categories, choose Other. Under File Types, choose HTML File. Click Next. Type Description in File Name. Click Browse and browse to src/org/myorg/additionalfiletemplates. Click Select Folder. Click Finish. An empty HTML file opens in the Source Editor and its node appears in the Projects window.
Type "Creates a new HTML file." (without the quotation marks) between the <body> tags, so that the file looks as follows:
<!DOCTYPE html>
<html>
    <head>
        <title></title>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    </head>
    <body>
        Creates a new HTML file.
    </body>
</html>
Getting an Icon
The icon accompanies the file template in the New File wizard. It identifies it and distinguishes it from other file templates. The icon must have a dimension of 16x16 pixels.
Name the icon, for example, icon.png. Below, the name "Datasource.gif" is used. Paste it in the org.myorg.additionalfiletemplates package.
Registering the File Template
Once you have defined the file template, the description file, and the icon, you register them in the NetBeans virtual filesystem. The layer.xml file is made for this purpose. The file is automatically created and populated via the @TemplateRegistration annotation used in the steps below.
Right-click the module in the Projects window, choose Properties, and use the Libraries tab to add dependencies on Datasystems API and Utilities API.
Create a new Java class named package-info.java and define its content as follows:
@TemplateRegistration(
        folder = "Other",
        iconBase = "org/myorg/additionalfiletemplates/Datasource.gif",
        displayName = "#HTMLtemplate_displayName",
        content = "HTML.html",
        description = "Description.html",
        scriptEngine = "freemarker")
@Messages(value = "HTMLtemplate_displayName=Empty HTML file")
package org.myorg.additionalfiletemplates;

import org.netbeans.api.templates.TemplateRegistration;
import org.openide.util.NbBundle.Messages;
Make sure that the structure of the module is as follows:
Building and Installing the Module
The IDE uses an Ant build script to build and install your module. The build script is created for you when you create the module project.
In the Projects window, right-click the project and choose Run. The module is built and installed in a new instance of the development IDE.
Choose File > New Project (Ctrl-Shift-N) and create a new project.
1. Right-click the project and choose New > Other. The New File dialog opens and displays the new file template. It should look something like this, although your icon will probably be different:
Select the new file template and complete the wizard. When you click Finish, the Source Editor displays the newly created template.
Providing Additional Variables
Providing a Project License
Go to the Tools menu. Choose Templates. Open the Java | Java Class template in the editor:
The template above, and the ramifications of defining it in FreeMarker, have been discussed above. However, let’s look specifically at the first four lines:
<#assign licenseFirst = "/*">
<#assign licensePrefix = " * ">
<#assign licenseLast = " */">
<#include "../Licenses/license-${project.license}.txt">
Next, let’s look at the license itself. Notice this line in the templates above:
<#include "../Licenses/license-${project.license}.txt">
In particular, notice this part:
${project.license}
In summary, since NetBeans IDE 6.0, you are able to let the user's project define the license text that the file templates pick up, which was not possible before. For further reading about licensing, especially the comments at the end of it, see this blog entry.
Next Steps
For more information about creating and developing NetBeans Module, see the following resources: | https://netbeans.apache.org/tutorials/74/nbm-filetemplates.html | CC-MAIN-2021-25 | en | refinedweb |