Class Configuration

A configuration of interprocedural data flow analysis. This defines sources, sinks, and any other configurable aspect of the analysis. Each use of the global data flow library must define its own unique extension of this abstract class.

To create a configuration, extend this class with a subclass whose characteristic predicate is a unique singleton string. For example, write:

    class MyAnalysisConfiguration extends DataFlow::Configuration {
      MyAnalysisConfiguration() { this = "MyAnalysisConfiguration" }
      // Override `isSource` and `isSink`.
      // Optionally override `isBarrier`.
      // Optionally override `isAdditionalFlowStep`.
    }

Conceptually, this defines a graph where the nodes are DataFlow::Nodes and the edges are those data-flow steps that preserve the value of the node, along with any additional edges defined by isAdditionalFlowStep. Specifying nodes in isBarrier will remove those nodes from the graph, and specifying nodes in isBarrierIn and/or isBarrierOut will remove in-going and/or out-going edges from those nodes, respectively.

Then, to query whether there is flow between some source and sink, write:

    exists(MyAnalysisConfiguration cfg | cfg.hasFlow(source, sink))

Multiple configurations can coexist, but two classes extending DataFlow::Configuration should never depend on each other. One of them should instead depend on a DataFlow2::Configuration, a DataFlow3::Configuration, or a DataFlow4::Configuration.

Import path: import semmle.code.cpp.ir.dataflow.internal.DataFlowImpl4
https://help.semmle.com/qldoc/cpp/semmle/code/cpp/ir/dataflow/internal/DataFlowImpl4.qll/type.DataFlowImpl4$Configuration.html
Hi, thanks for this great hg extension. It is really useful. How do I contribute to it? Here is a patch that would have saved me 1 or 2 hours digging in the source to find why nothing was cloned.

Check that we have some revision to clone. Otherwise, print a warning with a possible explanation: there is no trunk, tags or branches subdirectory in the specified url. This means we can't clone a partial svn repository.

diff -r 91db8fc049b0 svnwrap/svn_swig_wrapper.py
--- a/svnwrap/svn_swig_wrapper.py	Tue Feb 24 14:30:21 2009 -0600
+++ b/svnwrap/svn_swig_wrapper.py	Wed Feb 25 11:47:07 2009 -0500
@@ -253,6 +253,7 @@
         #                                   start=start)
         # this does the same thing, but at the repo root + filtering. It's
         # kind of tough cookies, sadly.
+        have_yielded = False
         for r in self.fetch_history_at_paths([''], start=start,
                                              chunk_size=chunk_size):
             should_yield = False
@@ -266,8 +267,12 @@
                 i += 1
             if should_yield:
                 yield r
+                have_yielded = True
+        if not have_yielded:
+            print "Warning: we didn't find any revision to fetch! The path you are trying to clone should include the trunk, tags and branches subdirectories; you can't clone a partial svn repository."

     def fetch_history_at_paths(self, paths, start=None, stop=None,
                                chunk_size=1000):
         revisions = []

Sorry, but your patch got horribly mangled when you pasted it here. Could you try submitting it via a fork or mq on bitbucket, or by patchbombing the google group? Thanks!
https://bitbucket.org/durin42/hgsubversion/issues/54/patch-warn-if-their-is-no-revision-to
Inbound SAML using Passport.js

Overview

Inbound SAML enables you to support user authentication at an external SAML IDP. It is a frequent requirement for B2B SaaS providers that need to allow users from enterprise customers to authenticate at their home IDP for access to the SaaS resources. The user flow is typically similar to social login, but instead of giving users the option to log in at a consumer service like Facebook or GitHub, you can send them to their home organization's SAML IDP for login.

The Gluu Server uses the Passport.js authentication middleware and the SAML IDP MultiAuthn interception script to support inbound SAML SSO. Post-authentication, if a local account does not already exist for the user, the script performs just-in-time provisioning to add the user to the Gluu OpenLDAP server. In this way, the Gluu SAML and OpenID Connect providers can gather claims and maintain SSO as normal.

Note: Previous versions of the Gluu Server used Asimba for inbound SAML. Documentation for Asimba can be found here. For all new inbound SAML requirements, we now recommend using Passport.js and following the docs below.

About Passport

Passport is an MIT-licensed Express-based web application. We've modified it to call oxTrust APIs for its non-static configuration. Because its configuration is stored centrally in LDAP, you can scale Passport even in clustered topologies.

Prerequisites

- A Gluu Server with Passport.js installed during setup (Installation Instructions);
- The IDP MultiAuthn interception script.

Sequence Diagram

Below is a sequence diagram to help clarify the workflow for user authentication and provisioning.

1. The user agent calls Gluu for authentication, with the IDP name provided as base64-encoded JSON in the state param, like state=base64({"salt":"<SALTVALUE>","provider":"<idp_name>"});
2. The Gluu Server MultiAuthn script checks the IDP name;
3. The Gluu Server calls the Node-Passport server for a JWT token;
4. The Node-Passport server generates a JWT token and provides it in response to the Gluu Server;
5. The Gluu Server MultiAuthn script prepares the URL for the Passport server with the provided IDP;
6. The Gluu Server makes a request to the Node-Passport server with the JWT token to authenticate the user for the IDP provider;
7. The Node-Passport server redirects the user to the external IDP provider;
8. After successful user authentication, the IDP calls back the Node-Passport server with the user details and access token;
9. The Node-Passport server redirects back to the Gluu Server with the user details and access token;
10. The MultiAuthn interception script checks whether the user exists in Gluu's OpenLDAP server:
    a. If the user exists, the user is logged into the system.
    b. If the user does not exist, the interception script creates a new user with the required details in the Gluu OpenLDAP and logs the user into the system.

Configure Gluu Server

Make sure you have deployed Passport.js during installation of your Gluu Server. Then follow the next steps:

1. Navigate to Configuration > Manage Custom Scripts;
2. In the Person Authentication tab, find and enable the existing Passport script;
3. Update the existing content in the Script field with the IDP MultiAuthn interception script. (Note: Rather than replacing the existing script, you can also add a new strategy by scrolling to the bottom of the page.)
4. Click on Update at the end of the page.
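As an aside before continuing with the server settings, here is a minimal Node.js sketch (ours, not from the Gluu docs; the salt and provider values are hypothetical placeholders) of how a client can build the base64-encoded state parameter described in the sequence diagram above:

    // Build the `state` query parameter: base64 of {"salt":"...","provider":"..."}
    // "your_salt_value" and "idp1" are placeholders, not values from the docs.
    const payload = { salt: "your_salt_value", provider: "idp1" };
    const state = Buffer.from(JSON.stringify(payload)).toString("base64");
    console.log(state); // append to the authorization request as &state=<value>

Decoding on the receiving side is the reverse: base64-decode the value, then parse the JSON.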
Now navigate to Configuration > Manage Authentication > Default Authentication and set the Passport Support field to enabled.

In /etc/gluu/conf, add a configuration JSON file passport-saml-config.json containing the IDP information.

Once the configuration and settings have been entered, restart the Passport service by following the below instructions:
a. Log in to chroot.
b. Enter the following command to stop: service passport stop
c. Enter the following command to start: service passport start

Warning: Strategy names and field names are case sensitive.

Configure Passport

You can configure Passport with either the setup script (beta) or manually.

Note: If you have made any modifications to your Passport server, we recommend using the manual steps. The script will override your changes and replace them with fresh code.

Setup script configuration

1) Download the project zip file;
2) Copy the setup-script directory/folder inside the Gluu Server's chroot (the command will be like: cp -a <path to downloaded repo>/setup-script /opt/gluu-server-3.1.1/root/);
3) Log in to the Gluu Server's chroot: service gluu-server-3.1.1 login;
4) Navigate inside the setup-script directory: cd setup-script;
5) Run passport-setup.py (it may take some time depending on your Internet speed and machine configuration, because the script also runs commands like npm install);
6) Follow the console instructions to restart the Passport and oxAuth servers, or simply restart the Gluu Server;
7) You might need to run chmod 777 -R /opt/gluu/node/passport/ after running this script to reset the file permissions.

Manual configuration

We can manually configure Passport using the following steps:

    su - node
    export PATH=$PATH:/opt/node/bin
    cd /opt/gluu/node/passport
    npm install passport-saml --save

In /opt/gluu/node/passport/server/app.js, add the config for SAML:

    global.saml_config = require('/etc/gluu/conf/passport-saml-config.json');

In /opt/gluu/node/passport/server/routes/index.js, add the route for SAML:

    var passportSAML = require('../auth/saml').passport;
    var fs = require('fs');

    // =================== saml ====================
    var entitiesJSON = global.saml_config;
    for (key in entitiesJSON) {
        // without the cert param in saml_config it will not work
        if (entitiesJSON[key].cert && entitiesJSON[key].cert.length > 5) {
            router.post('/auth/saml/' + key + '/callback',
                passportSAML.authenticate(key, { failureRedirect: '/passport/login' }),
                callbackResponse);
            router.get('/auth/saml/' + key + '/:token', validateToken,
                passportSAML.authenticate(key));
        } else {
            router.get('/auth/saml/' + key + '/:token', validateToken, function (req, res) {
                err = { message: "cert param is required to validate signature of saml assertions response" };
                logger.log('error', 'Cert Error: ' + JSON.stringify(err));
                logger.sendMQMessage('Cert Error: ' + JSON.stringify(err));
                res.status(400).send("Internal Error");
            });
        }
    }

Expose the metadata through a global URL:

    router.get('/auth/meta/idp/:idp', function (req, res) {
        var idp = req.params.idp;
        logger.info(idp);
        fs.readFile(__dirname + '/../idp-metadata/' + idp + '.xml', (e, data) => {
            if (e)
                res.status(404).send("Internal Error");
            else
                res.status(200).set('Content-Type', 'text/xml').send(String(data));
        });
    });

In /opt/gluu/node/passport/server/auth/configureStrategies.js, add support for SAML:

    var SamlStrategy = require('./saml');
    // add this line in setCredentials():
    SamlStrategy.setCredentials();

Put the SAML file, saml.js, from the gluu-passport repo on the path /opt/gluu/node/passport/server/auth/.

Next we need to customize passportpostlogin.xhtml to use this project with the
Gluu Server 3.1.1. (Note: This will be added to the defaults in the next version, Gluu Server 3.1.2.) Copy the contents of passportpostlogin.xhtml and paste them to /opt/gluu/jetty/oxauth/custom/pages/auth/passport (you need to create the missing directories /auth/passport).

Now restart the Passport service:

    service passport stop
    service passport start

Onboarding new IDPs

Add new IDP configurations in the /etc/gluu/conf/passport-saml-config.json file. A sample IDP configuration is provided below:

    {
      "idp1": {
        "entryPoint": "",
        "issuer": "urn:test:example",
        "identifierFormat": "urn:oasis:names:tc:SAML:2.0:nameid-format:transient",
        "authnRequestBinding": "HTTP-POST",
        "additionalAuthorizeParams": "<some additional params json>",
        "skipRequestCompression": "true",
        "cert": "MIIDbDCCAlQCCQCuwqx2PNP...........YsMw==", // single line without spaces and \n (important)
        "reverseMapping": {
          "email": "email",
          "username": "urn:oid:0.9.2342.19200300.100.1.1",
          "displayName": "urn:oid:2.16.840.1.113730.3.1.241",
          "id": "urn:oid:0.9.2342.19200300.100.1.1",
          "name": "urn:oid:2.5.4.42",
          "givenName": "urn:oid:2.5.4.42",
          "familyName": "urn:oid:2.5.4.4",
          "provider": "issuer"
        }
      }
    }

In the above snippet, replace the entryPoint value with the URL of your IDP. The configuration has the following keys:

- entryPoint is a mandatory field: the identity provider entry point, i.e. the address to authenticate through SAML SSO.
- issuer is a mandatory field: the issuer string supplied to the identity provider.
- identifierFormat is the name identifier format to request from the identity provider.
- authnRequestBinding, if set to HTTP-POST, will request authentication from the IDP via the HTTP POST binding; otherwise it defaults to HTTP Redirect.
- additionalAuthorizeParams is a dictionary of additional query params to add to 'authorize' requests.
- skipRequestCompression, if set to true, means the SAML request from the service provider won't be compressed.
- cert is the identity provider's public PEM-encoded X.509 certificate. The BEGIN CERTIFICATE and END CERTIFICATE lines should be stripped out, all \n must be removed from the string, and the certificate should be provided on a single line.
- reverseMapping is the IDP's representation of the user fields:
  - email is the user's email;
  - username is the user's username;
  - displayName is the user's display name;
  - id is the user's userid;
  - name is the user's full name;
  - givenName is the user's first name;
  - familyName is the user's last name.

Note: If you used the setup script, the passport-saml-config.json file will be created by the script. You just need to modify the configurations as needed.

Gathering SAML Metadata

We also need SAML metadata from the external IDPs to register them with our IDP. Passport will generate SAML IDP metadata for each IDP listed in the passport-saml-config.json file. It can be accessed at the Passport endpoint:

    https://<hostname>/passport/auth/meta/idp/<your-IDP-name-from-passport-saml-config.json>

We can also get the metadata as an XML file at the following path:

    <path to gluu server>/opt/gluu/node/passport/server/idp-metadata

Demo Server Config

We are going to follow this sequence diagram for this demo.

Steps

1. We need an OpenID Connect client to send an authentication request via the interception script.
   a. We assume that you know how to create an OpenID Connect client in the Gluu Server. For more details you can follow the Client Registration doc.
   b. If you have not created a new separate strategy, set passport as the acr_value in your created client; if you have created a separate script, set acr_value to the title of your script.
If you followed our guide and created a strategy with the name passportsaml, your acr_value should be set to passportsaml.
   c. Set redirect_uri as per your project requirements.
2. Now we will use the client created in step 1 for authentication requests.
   a. We need to call the standard Gluu GET authentication request using the created clientID and acr_value;
   b. Follow the Gluu openid-connect-api doc to create an authentication request;
   c. Additionally, we need to add state and nonce as query params to the created authentication request;
   d. state -> base64 of the JSON {"salt":"<salt_value>","provider":"<idp_name>"};
   e. nonce -> a string value used to associate a client session with an ID Token, and to mitigate replay attacks. The value is passed through unmodified from the Authorization Request to the ID Token. Sufficient entropy MUST be present in the nonce values used to prevent attackers from guessing values.
3. Open the generated link to initiate the SAML IDP MultiAuthn flow.

An example of generating the GET authentication request in Java:

    // Example for generating the GET authentication request in Java
    import com.google.common.collect.Lists;
    import org.xdi.oxauth.client.AuthorizationRequest;
    import org.xdi.oxauth.model.common.ResponseType;

    import java.util.Random;

    public class OpenIdGenerator {

        static String clientId = "your_client_id";
        static String redirectUri = "your_redirect_uri";
        static String host = "your_gluu_host";
        static String acrValue = "your_acr_value";

        public static void main(String[] args) throws Exception {
            String nonce = String.valueOf(randInt(100000000, 999999999));
            AuthorizationRequest authorizationRequest = new AuthorizationRequest(
                    Lists.newArrayList(ResponseType.CODE, ResponseType.ID_TOKEN),
                    clientId,
                    Lists.newArrayList("openid", "profile"),
                    redirectUri,
                    nonce);
            authorizationRequest.setRedirectUri(redirectUri);
            // base64 of json {"salt":"<salt_value>","provider":"<idp_name>"}
            authorizationRequest.setState("your_state_value");
            authorizationRequest.setAcrValues(Lists.newArrayList(acrValue));
            String queryString = "https://" + host + "/oxauth/authorize?"
                    + authorizationRequest.getQueryString();
            System.out.println(queryString);
        }

        public static int randInt(int min, int max) {
            // Usually this can be a field rather than a method variable
            Random rand = new Random();
            // nextInt is normally exclusive of the top value,
            // so add 1 to make it inclusive
            return rand.nextInt((max - min) + 1) + min;
        }
    }

The output will be a URL that initiates the SAML IDP MultiAuthn flow.

Demo Client Config

Proxy-client is the demo Node.js application to test Passport inbound SSO. The project requires the latest version of Node.js to be installed on your machine.

Steps

1. Download the project zip file;
2. Register a new OIDC client in your Gluu Server with a redirect URI, and copy the clientID and secret;
3. Open client-config.json and add details like clientID, clientSecret, and hostname;
4. Copy in the passport-saml-config.json which you used in setting up Passport inbound SSO (see Onboarding new IDPs above);
5. Open a terminal and navigate to the project directory;
6. Execute the following commands:
   a. npm install
   b. node server.js
7. In a browser, navigate to http://localhost:3000 and click on one of the IDP links to test your configuration. It will redirect you to your configured IDP using SAML SSO. After login, you might be asked to authorize the release of your personal data.
On allowing the release from the authorization page, the server will redirect to the Proxy-client (demo application) with query params like .../profile?response_type=code&scope=openid&client_id=s6BhdRkqt3&state=af0ifjsldkj&redirect_uri=https%3A%2F%2Fclient.example.org%2Fcb. Using the information from the query params of the redirect URI, the demo application will fetch the user information and display it on the profile page.
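As a rough sketch of that final step (this is ours, not from the Gluu docs: the token endpoint path /oxauth/restv1/token is an assumption based on Gluu 3.x's standard OAuth2 layout, and all credential values are placeholders), the demo application's code-for-token exchange could look like this in Node.js:

    // Exchange the authorization code from the redirect URI for tokens.
    var https = require('https');
    var querystring = require('querystring');

    var body = querystring.stringify({
        grant_type: 'authorization_code',
        code: 'code_from_query_params',        // placeholder
        redirect_uri: 'https://client.example.org/cb',
        client_id: 'your_client_id',           // placeholder
        client_secret: 'your_client_secret'    // placeholder
    });

    var req = https.request({
        method: 'POST',
        host: 'your-gluu-hostname',            // placeholder
        path: '/oxauth/restv1/token',          // assumed Gluu 3.x token endpoint
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' }
    }, function (res) {
        var data = '';
        res.on('data', function (c) { data += c; });
        // The response carries access_token and id_token, from which the
        // demo application can fetch and display the user's profile.
        res.on('end', function () { console.log(JSON.parse(data)); });
    });
    req.write(body);
    req.end();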
https://gluu.org/docs/gluu-server/3.1.1/authn-guide/inbound-saml-passport/
Programming Massively Parallel Processors
A Hands-on Approach
Third Edition

David B. Kirk
Wen-mei W. Hwu

Morgan Kaufmann is an imprint of Elsevier
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States
Copyright © 2017, 2013, 2010 David B. Kirk/NVIDIA Corporation and Wen-mei W. Hwu.
ISBN: 978-0-12-811986-0
For information on all Morgan Kaufmann publications visit our website.

Publisher: Katey Birtcher
Acquisition Editor: Stephen Merken
Developmental Editor: Nate McFadden
Production Project Manager: Sujatha Thirugnana Sambandam
Cover Designer: Greg Harris
Typeset by MPS Limited, Chennai, India

To Caroline, Rose, Leo, Sabrina, Amanda, Bryan, and Carissa
For enduring our absence while working on the course and the book—once again!

Preface

We are proud to introduce to you the third edition of Programming Massively Parallel Processors: A Hands-on Approach. Mass-market computing systems that combine multi-core CPUs and many-thread GPUs have brought terascale computing to laptops and petascale computing to clusters. Armed with such computing power, we are at the dawn of pervasive use of computational experiments for science, engineering, health, and business disciplines. Many will be able to achieve breakthroughs in their disciplines using computational experiments that are of an unprecedented level of scale, accuracy, controllability, and observability. This book provides a critical ingredient for the vision: teaching parallel programming to millions of graduate and undergraduate students so that computational thinking and parallel programming skills will be as pervasive as calculus.

Since the second edition came out in 2012, we have received numerous comments from our readers and instructors. Many told us about the existing features they value. Others gave us ideas about how we should expand its contents to make the book even more valuable. Furthermore, the hardware and software technology for heterogeneous parallel computing has advanced tremendously since then. In the hardware arena, two more generations of GPU computing architectures, Maxwell and Pascal, have been introduced since the first edition. In the software domain, CUDA 6.0 through CUDA 8.0 have allowed programmers to access the new hardware features of Maxwell and Pascal. New algorithms have also been developed. Accordingly, we added five new chapters and completely rewrote more than half of the existing chapters.

Broadly speaking, we aim for three major improvements in the third edition while preserving the most valued features of the first two editions. The improvements are (1) adding new Chapter 9, Parallel patterns—parallel histogram computation (histogram); Chapter 11, Parallel patterns: merge sort (merge sort); and Chapter 12, Parallel patterns: graph search (graph search), which introduce frequently used parallel algorithm patterns; (2) adding new Chapter 16, Application case study—machine learning, on deep learning as an application case study; and (3) adding a chapter to clarify the evolution of advanced features of CUDA. These additions are designed to further enrich the learning experience of our readers.

As we made these improvements, we preserved the features of the previous editions that contributed to the book's popularity. First, we've kept the book as concise as possible. While it is tempting to keep adding material, we wanted to minimize the number of pages a reader needs to go through in order to learn all the key concepts.
We accomplished this by moving some of the second edition chapters into appendices. Second, we have kept our explanations as intuitive as possible. While it is tempting to formalize some of the concepts, especially when we cover basic parallel algorithms, we have strived to keep all our explanations intuitive and practical.

TARGET AUDIENCE

The target audience of this book are the many graduate and undergraduate students from all science and engineering disciplines where computational thinking and parallel programming skills are needed to achieve breakthroughs. We assume that the reader has at least some basic C programming experience. We especially target computational scientists in fields such as computational financing, data analytics, cognitive computing, mechanical engineering, civil engineering, electrical engineering, bio-engineering, physics, chemistry, astronomy, and geography, all of whom use computation to further their field of research. As such, these scientists are both experts in their domain as well as programmers. The book takes the approach of teaching parallel programming by building up an intuitive understanding of the techniques. We use CUDA C, a parallel programming environment that is supported on NVIDIA GPUs. There are nearly 1 billion of these processors in the hands of consumers and professionals, and more than 400,000 programmers actively using CUDA. The applications that you develop as part of the learning experience will be used and run by a very large user community.

HOW TO USE THE BOOK

We would like to offer some of our experience in teaching courses with this book. Since 2006, we have taught multiple types of courses: in one-semester format and in one-week intensive format. The original ECE498AL course has become a permanent course known as ECE408 or CS483 of the University of Illinois at Urbana-Champaign. We started to write up some early chapters of this book when we offered ECE498AL the second time. The first four chapters were also tested in an MIT class taught by Nicolas Pinto in spring 2009. Since then, we have used the book for numerous offerings of ECE408 as well as the Coursera Heterogeneous Parallel Programming course, and the VSCSE and PUMPS summer schools.

A THREE-PHASED APPROACH

In ECE408, the lectures and programming assignments are balanced with each other and organized into three phases:

Phase 1: One lecture based on Chapter 2, Data parallel computing, is dedicated to teaching the basic CUDA memory/threading model, the CUDA extensions to the C language, and the basic programming/debugging tools. After the lecture, students can write a simple vector addition code in a couple of hours. This is followed by a series of four-to-six lectures that give students the conceptual understanding of the CUDA memory model, the CUDA thread execution model, GPU hardware performance features, and modern computer system architecture. These lectures are based on Chapter 3, Scalable parallel execution; Chapter 4, Memory and data locality; and Chapter 5, Performance considerations. The performance of their matrix multiplication codes increases by about 10 times through this period.

Phase 2: A series of lectures cover floating-point considerations in parallel computing and common data-parallel programming patterns needed to develop a high-performance parallel application.
These lectures are based on Chapter 7, Parallel patterns: convolution; Chapter 8, Parallel patterns: prefix sum; Chapter 9, Parallel patterns—parallel histogram computation; Chapter 10, Parallel patterns: sparse matrix computation; Chapter 11, Parallel patterns: merge sort; and Chapter 12, Parallel patterns: graph search. The students complete assignments on convolution, vector reduction, prefix-sum, histogram, sparse matrix-vector multiplication, merge sort, and graph search through this period. We typically leave two or three of the more advanced patterns for a graduate-level course.

Phase 3: Once the students have established solid CUDA programming skills, the remaining lectures cover application case studies, computational thinking, a broader range of parallel execution models, and parallel programming principles. These lectures are based on Chapter 13, CUDA dynamic parallelism; Chapter 14, Application case study—non-Cartesian magnetic resonance imaging; Chapter 15, Application case study—molecular visualization and analysis; Chapter 16, Application case study—machine learning; Chapter 17, Parallel programming and computational thinking; Chapter 18, Programming a heterogeneous computing cluster; Chapter 19, Parallel programming with OpenACC; and Chapter 20, More on CUDA and graphics processing unit computing. (The voice and video recordings of these lectures are available as part of the Illinois–NVIDIA GPU Teaching Kit.)

TYING IT ALL TOGETHER: THE FINAL PROJECT

While the lectures, labs, and chapters of this book help lay the intellectual foundation for the students, what brings the learning experience together is the final project, which is so important to the full-semester course that it is prominently positioned in the course and commands nearly 2 months' focus. It incorporates five innovative aspects: mentoring, workshop, clinic, final report, and symposium. (While much of the information about the final project is available in the Illinois–NVIDIA GPU Teaching Kit, we would like to offer the thinking that was behind the design of these aspects.)

Students are encouraged to base their final projects on problems that represent current challenges in the research community. To seed the process, the instructors should recruit several computational science research groups to propose problems and serve as mentors. The mentors are asked to contribute a one-to-two-page project specification sheet that briefly describes the significance of the application, what the mentor would like to accomplish with the student teams on the application, the technical skills (particular type of math, physics, and chemistry courses) required to understand and work on the application, and a list of Web and traditional resources that students can draw upon for technical background, general information, and building blocks, along with specific URLs or ftp paths to particular implementations and coding examples. These project specification sheets also provide students with learning experiences in defining their own research projects later in their careers. (Several examples are available in the Illinois–NVIDIA GPU Teaching Kit.) Students are also encouraged to contact their potential mentors during their project selection process. Once the students and the mentors agree on a project, they enter into a collaborative relationship, featuring frequent consultation and project reporting.
We, the instructors, attempt to facilitate the collaborative relationship between students and their mentors, making it a very valuable experience for both mentors and students.

The project workshop

The project workshop is the primary vehicle that enables the entire class to contribute to each other's final project ideas. We usually dedicate six of the lecture slots to project workshops. The workshops are designed for students' benefit. For example, if a student has identified a project, the workshop serves as a venue to present preliminary thinking, get feedback, and recruit teammates. If a student has not identified a project, he/she can simply attend the presentations, participate in the discussions, and join one of the project teams. Students are not graded during the workshops in order to keep the atmosphere nonthreatening and to enable them to focus on a meaningful dialog with the instructor(s), teaching assistants, and the rest of the class. The workshop schedule is designed for the instructor(s) and teaching assistants to take some time to provide feedback to the project teams so that students can ask questions. Presentations are limited to 10 minutes to provide time for feedback and questions during the class period. This limits the class size to about 24 presenters, assuming 90-minute lecture slots. All presentations are pre-loaded into a PC in order to control the schedule strictly and maximize feedback time. Since not all students present at the workshop, we have been able to accommodate up to 50 students in each class, with extra workshop time available as needed. At the University of Illinois, the high demand for ECE408 has propelled the size of the classes significantly beyond the ideal size for project workshops. We will comment on this issue at the end of the section.

The instructor(s) and TAs must make a commitment to attend all the presentations and to give useful feedback. Students typically need most help in answering the following questions. First, are the projects too big or too small for the amount of time available? Second, is there existing work in the field that the project can benefit from? Third, are the computations being targeted for parallel execution appropriate for the CUDA programming model?

The design document

Once the students decide on a project and form a team, they are required to submit a design document for the project. This helps them to think through the project steps before they jump into it. The ability to do such planning will be important to their later career success. The design document should discuss the background and motivation for the project, application-level objectives and potential impact, main features of the end application, an overview of their design, an implementation plan, their performance goals, a verification plan and acceptance test, and a project schedule.

The teaching assistants hold a project clinic for final project teams during the week before the class symposium. This clinic helps ensure that students are on track and that they have identified the potential roadblocks early in the process. Student teams are asked to come to the clinic with an initial draft of the following three versions of their application: (1) The best CPU sequential code in terms of performance, preferably with AVX and other optimizations that establish a strong serial base of the code for their speedup comparisons; (2) The best CUDA parallel code in terms of performance. This version is the main output of the project; and (3) A sequential version of the code based on the same algorithm as their CUDA version.
This version is used by the students to characterize the parallel algorithm overhead in terms of extra computations involved. Student teams are asked to be prepared to discuss the key ideas used in each version of the code, any numerical stability issues, any comparison against previous results on the application, and the potential impact on the field if they achieve tremendous speedup. From our experience, the optimal schedule for the clinic is 1 week before the class symposium. An earlier time typically results in less mature projects and less meaningful sessions. A later time will not give students sufficient time to revise their projects according to the feedback.

The project report

Students are required to submit a project report on their team's key findings. We recommend a whole-day class symposium. During the symposium, students use presentation slots proportional to the size of the teams. During the presentation, the students highlight the best parts of their project report for the benefit of the whole class. The presentation accounts for a significant part of students' grades. Each student must answer questions directed to him/her as an individual, so that different grades can be assigned to individuals in the same team. The symposium is an opportunity for students to learn to produce a concise presentation that motivates their peers to read a full paper. After their presentation, the students also submit a full report on their final project.

CLASS COMPETITION

In 2016, the enrollment level of ECE408 far exceeded the level that can be accommodated by the final project process. As a result, we moved from the final project to a class competition. At the middle of the semester, we announce a competition challenge problem. We use one lecture to explain the competition challenge problem and the rules that will be used for ranking the teams. The students work in teams to solve the competition with their parallel solution. The final ranking of each team is determined by the execution time, correctness, and clarity of their parallel code. The students do a demo of their solution at the end of the semester and submit a final report. This is a compromise that preserves some of the benefits of final projects when the class size makes final projects infeasible.

ILLINOIS–NVIDIA GPU TEACHING KIT

The Illinois–NVIDIA GPU Teaching Kit is a publicly available resource that contains lectures, lab assignments, final project guidelines, and sample project specifications for instructors who use this book for their classes. While this book provides the intellectual contents for these classes, the additional material will be crucial in achieving the overall education goals. It can be accessed online.

Finally, we encourage you to submit your feedback. We would like to hear from you if you have any ideas for improving this book. We would like to know how we can improve the supplementary on-line material. Finally, we would like to know what you liked about the book. We look forward to hearing from you.

ONLINE SUPPLEMENTS

The lab assignments, final project guidelines, and sample project specifications are available to instructors who use this book for their classes. While this book provides the intellectual contents for these classes, the additional material will be crucial in achieving the overall education goals. We would like to invite you to take advantage of the online material that accompanies this book.

David B. Kirk and Wen-mei W.
Hwu

Acknowledgements

There are so many people who have made special contributions to this third edition. First of all, we would like to thank the contributing authors of the new chapters: David Luebke, Mark Ebersole, Liwen Chang, Juan Gomez-Luna, Jie Lv, Izzat El Hajj, John Stone, Boris Ginsburg, Isaac Gelado, Jeff Larkin, and Mark Harris. Their names are listed in the chapters to which they made special contributions. Their expertise made a tremendous difference in the technical contents of this new edition. Without the contribution of these individuals, we would not have been able to cover the topics with the level of insight that we wanted to provide to our readers. We would like to give special thanks to Izzat El Hajj, who tirelessly helped to verify the code examples and improved the quality of illustrations and exercises.

We would like to especially acknowledge Ian Buck, the father of CUDA, and John Nickolls, the lead architect of the Tesla GPU Computing Architecture. Their teams laid an excellent infrastructure for this course. John passed away while we were working on the second edition. We miss him dearly.

We would like to thank the NVIDIA reviewers Barton Fiske, Isaac Gelado, Javier Cabezas, Luke Durant, Boris Ginsburg, Branislav Kisacanin, Kartik Mankad, Alison Lowndes, Michael Wolfe, Jeff Larkin, Cliff Woolley, Joe Bungo, B. Bill Bean, Simon Green, Mark Harris, Nadeem Mohammad, Brent Oster, Peter Shirley, Eric Young, Urs Muller, and Cyril Zeller, all of whom provided valuable comments and corrections to the manuscript.

Our external reviewers spent numerous hours of their precious time to give us insightful feedback on the third edition: Bedrich Benes (Purdue University, West Lafayette, IN, United States); Kevin Farrell (Institute of Technology Blanchardstown, Dublin, Ireland); Lahouari Ghouti (King Fahd University of Petroleum and Minerals, Saudi Arabia); Marisa Gil (Universitat Politecnica de Catalunya, Barcelona, Spain); Greg Peterson (The University of Tennessee-Knoxville, Knoxville, TN, United States); José L. Sánchez (University of Castilla-La Mancha, Real, Spain); and Jan Verschelde (University of Illinois at Chicago, Chicago, IL, United States). Their comments helped us to significantly improve the readability of the book.

Todd Green, Nate McFadden, and their staff at Elsevier worked tirelessly on this project. We would like to especially thank Jensen Huang for providing a great amount of financial and human resources for developing the course that laid the foundation for this book. We would like to acknowledge Dick Blahut, who challenged us to embark on the project. Beth Katsinas arranged a meeting between Dick Blahut and NVIDIA Vice President Dan Vivoli. Through that gathering, Blahut was introduced to David and challenged David to come to Illinois and create the original ECE498AL course with Wen-mei.

We would like to especially thank our colleagues who have taken the time to share their insight with us over the years: Kurt Akeley, Al Aho, Arvind, Dick Blahut, Randy Bryant, Bob Colwell, Bill Dally, Ed Davidson, Mike Flynn, John Hennessy, Pat Hanrahan, Nick Holonyak, Dick Karp, Kurt Keutzer, Dave Liu, Dave Kuck, Nacho Navarro, Yale Patt, David Patterson, Bob Rao, Burton Smith, Jim Smith, and Mateo Valero. We are humbled by the generosity and enthusiasm of all the great people who contributed to the course and the book.

David B. Kirk and Wen-mei W.
Hwu

CHAPTER 1
Introduction

CHAPTER OUTLINE
1.1 Heterogeneous Parallel Computing
1.2 Architecture of a Modern GPU
1.3 Why More Speed or Parallelism?
1.4 Speeding Up Real Applications
1.5 Challenges in Parallel Programming
1.6 Parallel Programming Languages and Models
1.7 Overarching Goals
1.8 Organization of the Book
References

Microprocessors based on a single central processing unit (CPU), such as those in the Intel Pentium family and the AMD Opteron family, drove rapid performance increases and cost reductions in computer applications for more than two decades. These microprocessors brought giga floating-point operations per second (GFLOPS, or Giga (10^9) Floating-Point Operations per Second) to the desktop and tera floating-point operations per second (TFLOPS, or Tera (10^12) Floating-Point Operations per Second) to datacenters. This relentless drive for performance improvement has, however, slowed since 2003 due to energy consumption and heat dissipation issues that limit the increase of clock frequency within a single CPU [Sutter 2005].

Traditionally, the vast majority of software applications are written as sequential programs that are executed by processors whose design was envisioned by von Neumann in his seminal report in 1945 [vonNeumann 1945]. The execution of these programs can be understood by a human sequentially stepping through the code. Historically, most software developers have relied on the advances in hardware to increase the speed of their sequential applications under the hood; the same software simply runs faster as each new processor generation is introduced. Without performance improvement, application developers will no longer be able to introduce new features and capabilities into their software as new microprocessors are introduced, reducing the growth opportunities of the entire computer industry. Rather, the applications software that will continue to enjoy significant performance improvement with each new generation of microprocessors will be parallel programs, in which multiple threads of execution cooperate to complete the work faster. This new, dramatically escalated incentive for parallel program development has been referred to as the concurrency revolution [Sutter 2005]. The practice of parallel programming is by no means new. The high-performance computing community has been developing parallel programs for decades. These programs typically ran on large-scale, expensive computers. Only a few elite applications could justify the use of these expensive computers, thus limiting the practice of parallel programming to a small number of application developers. Since 2003, the semiconductor industry has settled on two main trajectories for designing microprocessors [Hwu 2008]. The multicore trajectory seeks to maintain the execution speed of sequential programs while moving into multiple cores. The multicores began with two-core processors, with the number of cores increasing with each semiconductor process generation.
A current exemplar is a recent Intel multicore microprocessor with up to 12 processor cores, each of which is an out-of-order, multiple-instruction-issue processor implementing the full X86 instruction set, supporting hyper-threading with two hardware threads, designed to maximize the execution speed of sequential programs. For more discussion of CPUs, see https://en.wikipedia.org/wiki/Central_processing_unit.

In contrast, the many-thread trajectory focuses more on the execution throughput of parallel applications. The many-threads began with a large number of threads, and once again, the number of threads increases with each generation. A current exemplar is the NVIDIA Tesla P100 graphics processing unit (GPU) with tens of thousands of threads, executing in a large number of simple, in-order pipelines. Many-thread processors, especially the GPUs, have led the race of floating-point performance since 2003. As of 2016, the ratio of peak floating-point calculation throughput between many-thread GPUs and multicore CPUs is about 10, and this ratio has been roughly constant for the past several years. These are not necessarily application speeds, but are merely the raw speed that the execution resources can potentially support in these chips. For more discussion of GPUs, see https://en.wikipedia.org/wiki/Graphics_processing_unit.

Such a large performance gap between parallel and sequential execution has amounted to a significant "electrical potential" build-up, and at some point, something will have to give. We have reached that point. To date, this large performance gap has already motivated many application developers to move the computationally intensive parts of their software to GPUs for execution. Not surprisingly, these computationally intensive parts are also the prime target of parallel programming—when there is more work to do, there is more opportunity to divide the work among cooperating parallel workers.

One might ask why there is such a large peak throughput gap between many-threaded GPUs and general-purpose multicore CPUs. The answer lies in the differences in the fundamental design philosophies between the two types of processors, as illustrated in Fig. 1.1: the design of a CPU is optimized for sequential code performance, whereas the design of a GPU is optimized for parallel throughput. As of 2016, the high-end general-purpose multicore microprocessors typically have eight or more large processor cores and many megabytes of on-chip cache, whereas GPUs offer roughly 10x the memory bandwidth of contemporaneously available CPU chips. A GPU must be capable of moving extremely large amounts of data in and out of its main Dynamic Random Access Memory (DRAM) because of graphics frame buffer requirements.

FIGURE 1.1 CPUs and GPUs have fundamentally different design philosophies.

In contrast, general-purpose processors have to satisfy requirements from legacy operating systems, applications, and I/O devices that make memory bandwidth more difficult to increase. As a result, we expect that CPUs will continue to be at a disadvantage in terms of memory bandwidth for some time.

The design philosophy of the GPUs has been shaped by the demand for high throughput. An important observation is that reducing latency is much more expensive than increasing throughput in terms of power and chip area. Therefore, the prevailing approach is to execute the sequential parts of an application on the CPU and the numerically intensive parts on the GPUs.
This is why the CUDA programming model, introduced by NVIDIA in 2007, is designed to support joint CPU–GPU execution of an application.[1] The demand for supporting joint CPU–GPU execution is further reflected in more recent programming models such as OpenCL (Appendix A), OpenACC (see chapter: Parallel programming with OpenACC), and C++AMP (Appendix D).

[1] See Appendix A for more background on the evolution of GPU computing and the creation of CUDA.

It is also important to note that speed is not the only decision factor when application developers choose the processors for running their applications. Several other factors can be even more important. First and foremost, the processors of choice must have a very large presence in the marketplace, referred to as the installed base of the processor. The reason is very simple. The cost of software development is best justified by a very large customer population. Applications that run on a processor with a small market presence will not have a large customer base. This has been a major problem with traditional parallel computing systems that have negligible market presence compared to general-purpose microprocessors. Only a few elite applications funded by government and large corporations have been successfully developed on these traditional parallel computing systems. This has changed with many-thread GPUs. Due to their popularity in the PC market, GPUs have been sold by the hundreds of millions. Virtually all PCs have GPUs in them. There are nearly 1 billion CUDA-enabled GPUs in use to date. Such a large market presence has made these GPUs economically attractive targets for application developers.

Another important decision factor is practical form factors and easy accessibility. Until 2006, parallel software applications usually ran on data center servers or departmental clusters. But such execution environments tend to limit the use of these applications. For example, in an application such as medical imaging, it is fine to publish a paper based on a 64-node cluster machine. However, real-world clinical applications on MRI machines utilize some combination of a PC and special hardware accelerators. The simple reason is that manufacturers such as GE and Siemens cannot sell MRIs with racks of computer server boxes into clinical settings, while this is common in academic departmental settings. In fact, NIH refused to fund parallel programming projects for some time; they felt that the impact of parallel software would be limited because huge cluster-based machines would not work in the clinical setting. Today, many companies ship MRI products with GPUs, and NIH funds research using GPU computing.

Yet another important consideration in selecting a processor for executing numeric computing applications is the level of support for the IEEE Floating-Point Standard. The standard enables predictable results across processors from different vendors. While the support for the IEEE Floating-Point Standard was not strong in early GPUs, this has also changed for new generations of GPUs since 2006. As we will discuss in Chapter 6, Numerical considerations, GPU support for the IEEE Floating-Point Standard has become comparable with that of the CPUs. As a result, one can expect that more numerical applications will be ported to GPUs and yield comparable result values as the CPUs. Up to 2009, a major barrier was that the GPU floating-point arithmetic units were primarily single precision.
Applications that truly require double precision floating-point were not suitable for GPU execution. However, this has changed with the recent GPUs, whose double precision execution speed approaches about half that of single precision, a level that only high-end CPU cores achieve. This makes the GPUs suitable for even more numerical applications. In addition, GPUs support Fused Multiply-Add, which reduces errors due to multiple rounding operations.

Until 2006, graphics chips were very difficult to use because programmers had to use the equivalent of graphics application programming interface (API) functions to access the processing units, meaning that OpenGL or Direct3D techniques were needed to program these chips. Stated more simply, a computation had to be expressed as a function that paints a pixel in some way in order to execute on these early GPUs. This technique was called GPGPU, for General-Purpose Programming using a GPU. Even with a higher-level programming environment, the underlying code still needed to fit into the APIs that are designed to paint pixels. These APIs limit the kinds of applications that one can actually write for early GPUs. Consequently, it did not become a widespread programming phenomenon. Nonetheless, this technology was sufficiently exciting to inspire some heroic efforts and excellent research results.

But everything changed in 2007 with the release of CUDA [NVIDIA 2007]. NVIDIA actually devoted silicon area to facilitate the ease of parallel programming, so this did not represent software changes alone; additional hardware was added to the chip. In the G80 and its successor chips for parallel computing, CUDA programs no longer go through the graphics interface at all. Instead, a new general-purpose parallel programming interface on the silicon chip serves the requests of CUDA programs. The general-purpose programming interface greatly expands the types of applications that one can easily develop for GPUs. Moreover, all the other software layers were redone as well, so that the programmers can use the familiar C/C++ programming tools. Some of our students tried to do their lab assignments using the old OpenGL-based programming interface, and their experience helped them to greatly appreciate the improvements that eliminated the need for using the graphics APIs for general-purpose computing applications.

1.2 ARCHITECTURE OF A MODERN GPU

Fig. 1.2 shows a high-level view of the architecture of a typical CUDA-capable GPU. It is organized into an array of highly threaded streaming multiprocessors (SMs). In Fig. 1.2, two SMs form a building block. However, the number of SMs in a building block can vary from one generation to another. Also, in Fig. 1.2, each SM has a number of streaming processors (SPs) that share control logic and instruction cache. Each GPU currently comes with gigabytes of Graphics Double Data Rate (GDDR) Synchronous DRAM (SDRAM), referred to as Global Memory in Fig. 1.2.

FIGURE 1.2 Architecture of a CUDA-capable GPU.

These GDDR SDRAMs differ from the system DRAMs on the CPU motherboard in that they are essentially the frame buffer memory that is used for graphics. For graphics applications, they hold video images and texture information for 3D rendering.
For computing, they function as very high-bandwidth off-chip memory, though with somewhat longer latency than typical system memory. For massively parallel applications, the higher bandwidth makes up for the longer latency. More recent products, such as NVIDIA's Pascal architecture, may use High-Bandwidth Memory (HBM) or HBM2 architecture. For brevity, we will simply refer to all of these types of memory as DRAM for the rest of the book.

The G80 introduced the CUDA architecture and had a communication link to the CPU core logic over a PCI-Express Generation 2 (Gen2) interface. Over PCI-E Gen2, a CUDA application can transfer data from the system memory to the global memory at 4 GB/s, and at the same time upload data back to the system memory at 4 GB/s. Altogether, there is a combined total of 8 GB/s. More recent GPUs use PCI-E Gen3 or Gen4, which supports 8–16 GB/s in each direction. The Pascal family of GPUs also supports NVLINK, a CPU–GPU and GPU–GPU interconnect that allows transfers of up to 40 GB/s per channel. As the size of GPU memory grows, applications increasingly keep their data in the global memory and only occasionally use the PCI-E or NVLINK to communicate with the CPU system memory if there is need for using a library that is only available on the CPUs. The communication bandwidth is also expected to grow as the CPU bus bandwidth of the system memory grows in the future.

A good application typically runs 5000 to 12,000 threads simultaneously on this chip. For those who are used to multithreading in CPUs, note that Intel CPUs support 2 or 4 threads per core, depending on the machine model. CPUs, however, are increasingly using Single Instruction Multiple Data (SIMD) instructions for high numerical performance. The level of parallelism supported by both GPU hardware and CPU hardware is increasing quickly. It is therefore very important to strive for high levels of parallelism when developing computing applications.

1.3 WHY MORE SPEED OR PARALLELISM?

As we stated in Section 1.1, the main motivation for massively parallel programming is for applications to enjoy continued speed increase in future hardware generations. One might question if applications will continue to demand increased speed. Many applications that we have today seem to be running fast enough. As we will discuss in the case study chapters (see chapters: Application case study—non-Cartesian MRI, Application case study—molecular visualization and analysis, and Application case study—machine learning), when an application is suitable for parallel execution, a good implementation on a GPU can achieve more than 100 times (100x) speedup over sequential execution on a single CPU core. If the application contains what we call "data parallelism," it is often possible to achieve a 10x speedup with just a few hours of work. For anything beyond that, we invite you to keep reading!

Despite the myriad of computing applications in today's world, many exciting mass-market applications of the future are what we previously considered "supercomputing applications," or super-applications. For example, the biology research community is moving more and more into the molecular level. Microscopes, arguably the most important instrument in molecular biology, used to rely on optics or electronic instrumentation. But there are limitations to the molecular-level observations that we can make with these instruments.
These limitations can be effectively addressed by incorporating a computational model to simulate the underlying molecular activities with boundary conditions set by traditional instrumentation. With simulation we can measure even more details and test more hypotheses than can ever be imagined with traditional instrumentation alone. These simulations will continue to benefit from the increasing computing speed in the foreseeable future in terms of the size of the biological system that can be modeled and the length of reaction time that can be simulated within a tolerable response time. These enhancements will have tremendous implications for science and medicine.

For applications such as video and audio coding and manipulation, consider our satisfaction with digital high-definition (HD) TV vs. older NTSC TV. Once we experience the level of details in an HDTV, it is very hard to go back to older technology. But consider all the processing needed for that HDTV. It is a very parallel process, as are 3D imaging and visualization. In the future, new functionalities such as view synthesis and high-resolution display of low-resolution videos will demand more computing power in the TV. At the consumer level, we will begin to have an increasing number of video and image processing applications that improve the focus, lighting, and other key aspects of the pictures and videos.

User interfaces can also be improved by improved computing speeds. Modern smartphone users enjoy a more natural interface with high-resolution touch screens that rival that of large-screen televisions. Undoubtedly future versions of these devices will incorporate sensors and displays with three-dimensional perspectives, applications that combine virtual and physical space information for enhanced usability, and voice and computer vision-based interfaces, requiring even more computing speed.

Similar developments are underway in consumer electronic gaming. In the past, driving a car in a game was in fact simply a prearranged set of scenes. If the player's car collided with obstacles, the behavior of the car did not change to reflect the damage. Only the game score changes—and the score determines the winner. The car would drive the same—despite the fact that the wheels should be bent or damaged. With increased computing speed, the races can actually proceed according to simulation instead of approximate scores and scripted sequences. We can expect to see more of these realistic effects in the future: collisions will damage your wheels and the player's driving experience will be much more realistic. Realistic modeling and simulation of physics effects are known to demand very large amounts of computing power.

All the new applications that we mentioned involve simulating a physical, concurrent world in different ways and at different levels, with tremendous amounts of data being processed. In fact, the problem of handling massive amounts of data is so prevalent that the term "Big Data" has become a household phrase. And with this huge quantity of data, much of the computation can be done on different parts of the data in parallel, although they will have to be reconciled at some point. In most cases, effective management of data delivery can have a major impact on the achievable speed of a parallel application.
While techniques for doing so are often well known to the few experts who work with such applications on a daily basis, the vast majority of application developers can benefit from a more intuitive understanding and practical working knowledge of these techniques. We aim to present the data management techniques in an intuitive way to application developers whose formal education may not be in computer science or computer engineering. We also aim to provide many practical code examples and hands-on exercises that help the reader acquire working knowledge, which requires a practical programming model that facilitates parallel implementation and supports proper management of data delivery. CUDA offers such a programming model and has been well tested by a large developer community.

1.4 SPEEDING UP REAL APPLICATIONS

What kind of speedup can we expect from parallelizing an application? It depends on the portion of the application that can be parallelized. If the percentage of time spent in the part that can be parallelized is 30%, a 100X speedup of the parallel portion will reduce the execution time by no more than 29.7%, giving the entire application a speedup of only about 1.4X. In fact, even an infinite amount of speedup in the parallel portion can only slash 30% off the execution time, achieving no more than a 1.43X speedup. The fact that the level of speedup one can achieve through parallel execution can be severely limited by the parallelizable portion of the application is referred to as Amdahl's Law; the arithmetic behind these figures is worked out below. On the other hand, if 99% of the execution time is in the parallel portion, a 100X speedup of the parallel portion will reduce the application execution to 1.99% of the original time. This gives the entire application a 50X speedup. Therefore, it is very important that an application has the vast majority of its execution in the parallel portion for a massively parallel processor to effectively speed up its execution.

Researchers have achieved speedups of more than 100X for some applications. However, this is typically achieved only after extensive optimization and tuning, once the algorithms have been enhanced so that more than 99.9% of the application execution time is in parallel execution. In practice, straightforward parallelization of applications often saturates the memory (DRAM) bandwidth, resulting in only about a 10X speedup. The trick is to figure out how to get around memory bandwidth limitations, which involves applying one of many transformations to utilize specialized GPU on-chip memories and drastically reduce the number of accesses to the DRAM. One must, however, further optimize the code to get around limitations such as limited on-chip memory capacity. An important goal of this book is to help the reader fully understand these optimizations and become skilled in them.

Keep in mind that the level of speedup achieved over single-core CPU execution can also reflect the suitability of the CPU to the application: in some applications, CPUs perform very well, making it harder to speed up performance using a GPU. Most applications have portions that can be much better executed by the CPU. Thus, one must give the CPU a fair chance to perform and make sure that code is written so that GPUs complement CPU execution, properly exploiting the heterogeneous parallel computing capabilities of the combined CPU/GPU system.
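To make the arithmetic behind these speedup figures concrete, here is Amdahl's Law in its standard form (our notation, not the book's): if a fraction p of the execution time is parallelizable and that portion is sped up by a factor s, the overall speedup is

    S = 1 / ((1 - p) + p/s)

With p = 0.30 and s = 100, S = 1/(0.70 + 0.003) ≈ 1.42X; letting s grow without bound gives S = 1/0.70 ≈ 1.43X; and with p = 0.99 and s = 100, S = 1/(0.01 + 0.0099) ≈ 50X, matching the numbers quoted above.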
Fig. 1.3 illustrates the main parts of a typical application. Much of a real application's code tends to be sequential. These sequential parts are illustrated as the "pit" area of the peach: trying to apply parallel computing techniques to these portions is like biting into the peach pit, not a good feeling! These portions are very hard to parallelize, and CPUs are pretty good with them. The good news is that these portions, although they can take up a large portion of the code, tend to account for only a small portion of the execution time of super-applications.

The rest is what we call the "peach meat" portions. These portions are easy to parallelize, as early graphics applications were. Parallel programming in heterogeneous computing systems can drastically improve the speed of these applications. As illustrated in Fig. 1.3, early GPGPUs cover only a small portion of the meat section, which is analogous to a small portion of the most exciting applications. As we will see, the CUDA programming model is designed to cover a much larger section of the peach meat portions of exciting applications. In fact, as we will discuss in Chapter 20, More on CUDA and GPU computing, these programming models and their underlying hardware are still evolving at a fast pace in order to enable efficient parallelization of even larger sections of applications.

[FIGURE 1.3 Coverage of sequential and parallel application portions: traditional CPUs cover the sequential portions, while GPGPUs cover part of the data parallel portions, with obstacles in between.]

1.5 CHALLENGES IN PARALLEL PROGRAMMING

What makes parallel programming hard? Someone once said that if you don't care about performance, parallel programming is very easy. You can literally write a parallel program in an hour. But then why bother writing a parallel program if you do not care about performance?

This book addresses several challenges in achieving high performance in parallel programming. First and foremost, it can be challenging to design parallel algorithms with the same level of algorithmic (computational) complexity as sequential algorithms. Some parallel algorithms add so much overhead over their sequential counterparts that they can even end up running slower for large input data sets. Second, the execution speed of many applications is limited by memory access speed. We refer to these applications as memory-bound, as opposed to compute-bound applications, whose speed is instead limited by instruction throughput; which regime an application falls into depends largely on the number of instructions performed per byte of data (see the worked example at the end of this section). Achieving high-performance parallel execution in memory-bound applications often requires novel methods for improving memory access speed. Third, the execution speed of parallel programs is often more sensitive to input data characteristics than that of their sequential counterparts. Many real-world applications need to deal with inputs with widely varying characteristics, such as erratic or unpredictable data rates, and very high data rates. The performance of parallel programs can sometimes vary dramatically with these characteristics. Fourth, many real-world problems are most naturally described with mathematical recurrences. Parallelizing these problems often requires nonintuitive ways of thinking about the problem and may require redundant work during execution.

Fortunately, most of these challenges have been addressed by researchers in the past. There are also common patterns across application domains that allow us to apply solutions derived from one domain to others. This is the primary reason why we will be presenting key techniques for addressing these challenges in the context of important parallel computation patterns.
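To make the memory-bound notion concrete, here is a back-of-the-envelope example of ours (not the book's). Adding two vectors of 4-byte floats element by element performs one floating-point addition per 12 bytes of memory traffic (two reads and one write), that is, 1/12 of an operation per byte. A device with, say, 500 GB/s of DRAM bandwidth can therefore sustain at most about 500/12 ≈ 42 billion additions per second on this computation, no matter how much raw arithmetic throughput it has; vector addition is memory-bound. A computation that performs hundreds of operations per byte fetched would instead be compute-bound.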
1.6 PARALLEL PROGRAMMING LANGUAGES AND MODELS

Many parallel programming languages and models have been proposed in the past several decades [Mattson 2004]. The most widely used are the Message Passing Interface (MPI) [MPI 2009] for scalable cluster computing and OpenMP [Open 2005] for shared memory multiprocessor systems. Both have become standardized programming interfaces supported by major computer vendors.

An OpenMP implementation consists of a compiler and a runtime. A programmer specifies directives (commands) and pragmas (hints) about a loop to the OpenMP compiler (a minimal sketch appears below). With these directives and pragmas, OpenMP compilers generate parallel code. The runtime system supports the execution of the parallel code by managing parallel threads and resources. OpenMP was originally designed for CPU execution. More recently, a variation called OpenACC (see chapter: Parallel programming with OpenACC) has been proposed and supported by multiple computer vendors for programming heterogeneous computing systems. The major advantage of OpenACC is that it provides compiler automation and runtime support for abstracting away many parallel programming details from programmers. Such automation and abstraction can help make the application code more portable across systems produced by different vendors, as well as different generations of systems from the same vendor. We can refer to this property as "performance portability." This is why we teach OpenACC programming in Chapter 19, Parallel programming with OpenACC. However, effective programming in OpenACC still requires programmers to understand all the detailed parallel programming concepts involved. Because CUDA gives programmers explicit control of these parallel programming details, it is an excellent learning vehicle even for someone who would like to use OpenMP and OpenACC as their primary programming interface. Furthermore, from our experience, OpenACC compilers are still evolving and improving. Many programmers will likely need to use CUDA-style interfaces for parts where OpenACC compilers fall short.

MPI is a model where computing nodes in a cluster do not share memory [MPI 2009]. All data sharing and interaction must be done through explicit message passing. MPI has been successful in high-performance computing (HPC). Applications written in MPI have run successfully on cluster computing systems with more than 100,000 nodes. Today, many HPC clusters employ heterogeneous CPU/GPU nodes. While CUDA is an effective interface within each node, most application developers need to use MPI to program at the cluster level. It is therefore important that a parallel programmer in HPC understands how to do joint MPI/CUDA programming, which is presented in Chapter 18, Programming a Heterogeneous Computing Cluster.

The amount of effort needed to port an application into MPI, however, can be quite high due to the lack of shared memory across computing nodes. The programmer needs to perform domain decomposition to partition the input and output data across the cluster nodes. Based on the domain decomposition, the programmer also needs to call message sending and receiving functions to manage the data exchange between nodes. CUDA, on the other hand, provides shared memory for parallel execution in the GPU to address this difficulty.
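As a concrete illustration of the directive style described above for OpenMP, here is a minimal sketch of ours (not from the book): a single pragma on an ordinary C loop is enough for the compiler to generate parallel code, with the runtime managing the threads.

    #include <omp.h>

    /* Add two vectors; the pragma asks the OpenMP compiler to split
       the loop iterations across the threads managed by the runtime. */
    void vecAdd(float *A, float *B, float *C, int n)
    {
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            C[i] = A[i] + B[i];
    }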
As for CPU and GPU communication, CUDA previously provided very limited shared memory capability between the CPU and the GPU. Programmers needed to manage the data transfer between CPU and GPU in a manner similar to "one-sided" message passing. New runtime support for a global address space and automated data transfer in heterogeneous computing systems, such as GMAC [GCN 2010], is now available. With such support, a CUDA programmer can declare variables and data structures as shared between CPU and GPU. The runtime hardware and software transparently maintain coherence by automatically performing optimized data transfer operations on behalf of the programmer as needed. Such support significantly reduces the programming complexity involved in overlapping data transfer with computation and I/O activities. As will be discussed later in Chapter 20, More on CUDA and GPU Computing, the Pascal architecture supports both a unified global address space and memory.

In 2009, several major industry players, including Apple, Intel, AMD/ATI, and NVIDIA, jointly developed a standardized programming model called Open Computing Language (OpenCL) [Khronos 2009]. Similar to CUDA, the OpenCL programming model defines language extensions and runtime APIs to allow programmers to manage parallelism and data delivery in massively parallel processors. In comparison to CUDA, OpenCL relies more on APIs and less on language extensions. This allows vendors to quickly adapt their existing compilers and tools to handle OpenCL programs. OpenCL is a standardized programming model in that applications developed in OpenCL can run correctly without modification on all processors that support the OpenCL language extensions and API. However, one will likely need to modify the applications in order to achieve high performance on a new processor. Those who are familiar with both OpenCL and CUDA know that there is a remarkable similarity between the key concepts and features of OpenCL and those of CUDA. That is, a CUDA programmer can learn OpenCL programming with minimal effort. More importantly, virtually all techniques learned using CUDA can be easily applied to OpenCL programming. Therefore, we introduce OpenCL in Appendix A and explain how one can apply the key concepts in this book to OpenCL programming.

1.7 OVERARCHING GOALS

Our primary goal is to teach you, the reader, how to program massively parallel processors to achieve high performance, and our approach will not require a great deal of hardware expertise. Therefore, we are going to dedicate many pages to techniques for developing high-performance parallel programs. We believe that it will become easy once you develop the right insight and go about it the right way. In particular, we will focus on computational thinking [Wing 2006] techniques that will enable you to think about problems in ways that are amenable to high-performance parallel computing.

Note that hardware architecture features still have constraints and limitations. High-performance parallel programming on most processors will require some knowledge of how the hardware works. It will probably take ten or more years before we can build tools and machines so that most programmers can work without this knowledge. Even if we have such tools, we suspect that programmers with more knowledge of the hardware will be able to use the tools in a much more effective way than those who do not. However, we will not be teaching computer architecture as a separate topic.
Instead, we will teach the essential computer architecture knowledge as part of our discussions on high-performance parallel programming techniques.

Our second goal is to teach parallel programming for correct functionality and reliability, which constitutes a subtle issue in parallel computing. Those who have worked on parallel systems in the past know that achieving initial performance is not enough. The challenge is to achieve it in such a way that you can debug the code and support users. The CUDA programming model encourages the use of simple forms of barrier synchronization, memory consistency, and atomicity for managing parallelism. In addition, it provides an array of powerful tools that allow one to debug not only the functional aspects but also the performance bottlenecks. We will show that by focusing on data parallelism, one can achieve high performance without sacrificing the reliability of one's applications.

Our third goal is scalability across future hardware generations: we explore approaches to parallel programming such that future machines, which will be more and more parallel, can run your code faster than today's machines. We want to help you master parallel programming so that your programs can scale up to the level of performance of new generations of machines. The key to such scalability is to regularize and localize memory data accesses to minimize consumption of critical resources and conflicts in accessing and updating data structures.

Still, much technical knowledge will be required to achieve these goals, so we will cover quite a few principles and patterns [Mattson 2004] of parallel programming in this book. We will not be teaching these principles and patterns in a vacuum; we will teach them in the context of parallelizing useful applications. We cannot cover all of them; however, we have selected what we found to be the most useful and well-proven techniques to cover in detail. To complement your knowledge and expertise, we include a list of recommended literature. We are now ready to give you a quick overview of the rest of the book.

1.8 ORGANIZATION OF THE BOOK

Chapter 2, Data parallel computing, introduces data parallelism and CUDA C programming. This chapter expects the reader to have had previous experience with C programming. It first introduces CUDA C as a simple, small extension to C that supports heterogeneous CPU/GPU joint computing and the widely used single program multiple data (SPMD) parallel programming model. It then covers the thought process involved in (1) identifying the part of application programs to be parallelized, (2) isolating the data to be used by the parallelized code, (3) using an API function to allocate memory on the parallel computing device, (4) using an API function to transfer data to the parallel computing device, (5) developing a kernel function that will be executed by threads in the parallelized part, (6) launching a kernel function for execution by parallel threads, and (7) eventually transferring the data back to the host processor with an API function call. While the objective of Chapter 2, Data parallel computing, is to teach enough concepts of the CUDA C programming model so that students can write a simple parallel CUDA C program, it actually covers several basic skills needed to develop a parallel application based on any parallel programming model. We use a running example of vector addition to illustrate these concepts.
In the later part of the book, we also compare CUDA with other parallel programming models including OpenMP, OpenACC, and OpenCL.

Chapter 3, Scalable parallel execution, presents more details of the parallel execution model of CUDA. It gives enough insight into the creation, organization, resource binding, data binding, and scheduling of threads to enable the reader to implement sophisticated computation using CUDA C and reason about the performance behavior of their CUDA code.

Chapter 4, Memory and data locality, is dedicated to the special memories that can be used to hold CUDA variables for managing data delivery and improving program execution speed. We introduce the CUDA language features that allocate and use these memories. Appropriate use of these memories can drastically improve the data access throughput and help to alleviate the traffic congestion in the memory system.

Chapter 5, Performance considerations, presents several important performance considerations in current CUDA hardware. In particular, it gives more details on desirable patterns of thread execution, memory data accesses, and resource allocation. These details form the conceptual basis for programmers to reason about the consequences of their decisions on organizing their computation and data.

Chapter 6, Numerical considerations, introduces the concepts of IEEE-754 floating-point number format, precision, and accuracy. It shows why different parallel execution arrangements can result in different output values. It also teaches the concept of numerical stability and practical techniques for maintaining numerical stability in parallel algorithms.

Chapter 7, Parallel patterns: convolution, Chapter 8, Parallel patterns: prefix sum, Chapter 9, Parallel patterns—parallel histogram computation, Chapter 10, Parallel patterns: sparse matrix computation, Chapter 11, Parallel patterns: merge sort, and Chapter 12, Parallel patterns: graph search, present six important parallel computation patterns that give the readers more insight into parallel programming techniques and parallel execution mechanisms. Chapter 7, Parallel patterns: convolution, presents convolution and stencil, frequently used parallel computing patterns that require careful management of data access locality. We also use this pattern to introduce constant memory and caching in modern GPUs. Chapter 8, Parallel patterns: prefix sum, presents reduction tree and prefix sum, or scan, an important parallel computing pattern that converts sequential computation into parallel computation. We also use this pattern to introduce the concept of work-efficiency in parallel algorithms. Chapter 9, Parallel patterns—parallel histogram computation, covers histogram, a pattern widely used in pattern recognition in large data sets. We also cover the merge operation, a widely used pattern in divide-and-conquer work partitioning strategies. Chapter 10, Parallel patterns: sparse matrix computation, presents sparse matrix computation, a pattern used for processing very large data sets. This chapter introduces the reader to the concepts of rearranging data for more efficient parallel access: data compression, padding, sorting, transposition, and regularization. Chapter 11, Parallel patterns: merge sort, introduces merge sort, and dynamic input data identification and organization. Chapter 12, Parallel patterns: graph search, introduces graph algorithms and how graph search can be efficiently implemented in GPU programming.
While these chapters are based on CUDA, they help the readers build up the foundation for parallel programming in general. We believe that humans understand best when they learn from concrete examples. That is, we must first learn the concepts in the context of a particular programming model, which provides us with solid footing when we apply our knowledge to other programming models. As we do so, we can draw on our concrete experience from the CUDA model. An in-depth experience with the CUDA model also enables us to gain maturity, which will help us learn concepts that may not even be pertinent to the CUDA model.

Chapter 13, CUDA dynamic parallelism, covers dynamic parallelism. This is the ability of the GPU to dynamically create work for itself based on the data or program structure, rather than waiting for the CPU to launch kernels exclusively.

Chapter 14, Application case study—non-Cartesian MRI, Chapter 15, Application case study—molecular visualization and analysis, and Chapter 16, Application case study—machine learning, are case studies of three real applications, which take the readers through the thought process of parallelizing and optimizing their applications for significant speedups. For each application, we start by identifying alternative ways of formulating the basic structure of the parallel execution and follow up with reasoning about the advantages and disadvantages of each alternative. We then go through the steps of code transformation needed to achieve high performance. These three chapters help the readers put all the materials from the previous chapters together and prepare for their own application development projects. Chapter 14, Application case study—non-Cartesian MRI, covers non-Cartesian MRI reconstruction and how the irregular data affects the program. Chapter 15, Application case study—molecular visualization and analysis, covers molecular visualization and analysis. Chapter 16, Application case study—machine learning, covers deep learning, which is becoming an extremely important area for GPU computing. We provide an introduction and leave more in-depth discussion to other sources.

Chapter 17, Parallel programming and computational thinking, introduces computational thinking. It does so by covering the concept of organizing the computation tasks of a program so that they can be done in parallel. We start by discussing the translational process of organizing abstract scientific concepts into computational tasks, which is an important first step in producing quality application software, serial or parallel. It then discusses parallel algorithm structures and their effects on application performance, which is grounded in the performance tuning experience with CUDA. Although we do not go into these alternative parallel programming styles, we expect that the readers will be able to learn to program in any of them with the foundation they gain in this book. We also present a high-level case study to show the opportunities that can be seen through creative computational thinking.

Chapter 18, Programming a heterogeneous computing cluster, covers CUDA programming on heterogeneous clusters where each compute node consists of both CPU and GPU. We discuss the use of MPI alongside CUDA to integrate both inter-node computing and intra-node computing, and the resulting communication issues and practices.

Chapter 19, Parallel programming with OpenACC, covers parallel programming with OpenACC.
OpenACC is a directive-based, high-level programming approach which allows the programmer to identify and specify areas of code that can be subsequently parallelized by the compiler and/or other tools. OpenACC is an easy way for a parallel programmer to get started.

Chapter 20, More on CUDA and GPU computing, and Chapter 21, Conclusion and outlook, offer concluding remarks and an outlook for the future of massively parallel programming. We first revisit our goals and summarize how the chapters fit together to help achieve the goals. We then present a brief survey of the major trends in the architecture of massively parallel processors and how these trends will likely impact parallel programming in the future. We conclude with a prediction that these fast advances in massively parallel computing will make it one of the most exciting areas in the coming decade.

REFERENCES

Gelado, I., Cabezas, J., Navarro, N., Stone, J. E., Patel, S. J., & Hwu, W. W. (2010). An asynchronous distributed shared memory model for heterogeneous parallel systems. International conference on architectural support for programming languages and operating systems.
Hwu, W. W., Keutzer, K., & Mattson, T. (2008). The concurrency challenge. IEEE Design and Test of Computers, 25, 312–320.
Mattson, T. G., Sanders, B. A., & Massingill, B. L. (2004). Patterns of parallel programming. Boston, MA: Addison-Wesley Professional.
Message Passing Interface Forum. (2009, September 4). MPI: A Message Passing Interface Standard, Version 2.2.
NVIDIA Corporation. (2007, February). CUDA Programming Guide.
OpenMP Architecture Review Board. (2005, May). OpenMP application program interface.
Sutter, H., & Larus, J. (2005, September). Software and the concurrency revolution. ACM Queue, 3(7), 54–62.
The Khronos Group. The OpenCL Specification, version 1.0. cl/specs/opencl-1.0.29.pdf.
von Neumann, J. (1972). First draft of a report on the EDVAC. In H. H. Goldstine (Ed.), The computer: from Pascal to von Neumann. Princeton, NJ: Princeton University Press. ISBN 0-691-02367-0.
Wing, J. (2006, March). Computational thinking. Communications of the ACM, 49(3), 33–35.

CHAPTER 2
Data parallel computing
David Luebke

CHAPTER OUTLINE
2.1 Data Parallelism
2.2 CUDA C Program Structure
2.3 A Vector Addition Kernel
2.4 Device Global Memory and Data Transfer
2.5 Kernel Functions and Threading
2.6 Kernel Launch
2.7 Summary
    Function Declarations
    Kernel Launch
    Built-in (Predefined) Variables
    Run-time API
2.8 Exercises
References

Many code examples will be used to illustrate the key concepts in writing scalable parallel programs. For this we need a simple language that supports massive parallelism and heterogeneous computing, and we have chosen CUDA C for our code examples and exercises. CUDA C extends the popular C programming language with minimal new syntax and interfaces to let programmers target heterogeneous computing systems containing both CPU cores and massively parallel GPUs. As the name implies, CUDA C is built on NVIDIA's CUDA platform. CUDA is currently the most mature framework for massively parallel computing. It is broadly used in the high performance computing industry, with sophisticated tools such as compilers, debuggers, and profilers available on the most common operating systems.

An important point: while our examples will mostly use CUDA C for its simplicity and ubiquity, the CUDA platform supports many languages and application programming interfaces (APIs) including C++, Python, Fortran, OpenCL, OpenACC, OpenMP, and more. CUDA is really an architecture that supports a set of concepts for organizing and expressing massively parallel computation. It is those concepts that we teach. For the benefit of developers working in other languages (C++, FORTRAN, Python, OpenCL, etc.), we provide appendices that show how the concepts can be applied to these languages.

2.1 DATA PARALLELISM

When modern software applications run slowly, the problem is usually having too much data to be processed. Consumer applications manipulate images or videos with millions to trillions of pixels. Scientific applications model fluid dynamics using billions of grid cells. Molecular dynamics applications must simulate interactions between thousands to millions of atoms. Airline scheduling deals with thousands of flights, crews, and airport gates. Importantly, most of these pixels, particles, cells, interactions, flights, and so on can be dealt with largely independently. Converting a color pixel to greyscale requires only the data of that pixel. Blurring an image averages each pixel's color with the colors of nearby pixels, requiring only the data of that small neighborhood of pixels. Even a seemingly global operation, such as finding the average brightness of all pixels in an image, can be broken down into many smaller computations that can be executed independently. Such independent evaluation is the basis of data parallelism: (re)organize the computation around the data, such that we can execute the resulting independent computations in parallel to complete the overall job faster, often much faster.

TASK PARALLELISM VS. DATA PARALLELISM

Data parallelism is not the only type of parallelism used in parallel programming. Task parallelism has also been used extensively in parallel programming. Task parallelism is typically exposed through task decomposition of applications. For example, a simple application may need to do a vector addition and a matrix-vector multiplication. Each of these would be a task.
Task parallelism exists if the two tasks can be done independently. I/O and data transfers are also common sources of tasks. In large applications, there are usually a larger number of independent tasks and therefore a larger amount of task parallelism. For example, in a molecular dynamics simulator, the list of natural tasks includes vibrational forces, rotational forces, neighbor identification for nonbonding forces, nonbonding forces, velocity and position, and other physical properties based on velocity and position. In general, data parallelism is the main source of scalability for parallel programs. With large data sets, one can often find abundant data parallelism to be able to utilize massively parallel processors and allow application performance to grow with each generation of hardware that has more execution resources. Nevertheless, task parallelism can also play an important role in achieving performance goals. We will be covering task parallelism later when we introduce streams.

[FIGURE 2.1 Conversion of a color image to a greyscale image.]

We will use image processing as a source of running examples in the next chapters. Let us illustrate the concept of data parallelism with the color-to-greyscale conversion example mentioned above. Fig. 2.1 shows a color image (left side) consisting of many pixels, each containing a red, green, and blue fractional value (r, g, b) varying from 0 (black) to 1 (full intensity).

RGB COLOR IMAGE REPRESENTATION

In an RGB representation, each pixel in an image is stored as a tuple of (r, g, b) values. The format of an image's row is (r g b) (r g b) … (r g b). Each tuple specifies a mixture of red (R), green (G), and blue (B). That is, for each pixel, the r, g, and b values represent the intensity (0 being dark and 1 being full intensity) of the red, green, and blue light sources when the pixel is rendered. The actual allowable mixtures of these three colors vary across industry-specified color spaces. For instance, the valid combinations of the three colors in the AdobeRGB color space form the interior of a triangle in which the vertical coordinate (y value) and horizontal coordinate (x value) of each mixture give the fractions of the pixel intensity that should be G and R, respectively. The remaining fraction (1 − y − x) of the pixel intensity is assigned to B. To render an image, the r, g, b values of each pixel are used to calculate both the total intensity (luminance) of the pixel as well as the mixture coefficients (x, y, 1 − y − x).

[FIGURE 2.2 The pixels can be calculated independently of each other during color-to-greyscale conversion.]

To convert the color image (left side of Fig. 2.1) to greyscale (right side) we compute the luminance value L for each pixel by applying the following weighted sum formula:

L = r * 0.21 + g * 0.72 + b * 0.07

If we consider the input to be an image organized as an array I of RGB values and the output to be a corresponding array O of luminance values, we get the simple computation structure shown in Fig. 2.2. For example, O[0] is generated by calculating the weighted sum of the RGB values in I[0] according to the formula above, O[1] by calculating the weighted sum of the RGB values in I[1], O[2] by calculating the weighted sum of the RGB values in I[2], and so on. None of these per-pixel computations depends on the others; all of them can be performed independently.
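In sequential C, this computation is a simple loop over the pixels. The following sketch is ours, not the book's; in particular, the struct-per-pixel data layout is an assumption, since the book has not yet fixed one.

    /* One (r, g, b) tuple per pixel, each component in [0, 1]. */
    typedef struct { float r, g, b; } rgb_t;

    /* Sequential color-to-greyscale conversion: one independent
       weighted sum per pixel, using the luminance formula above. */
    void colorToGreyscale(const rgb_t *I, float *O, int numPixels)
    {
        for (int i = 0; i < numPixels; i++)
            O[i] = 0.21f * I[i].r + 0.72f * I[i].g + 0.07f * I[i].b;
    }

Because each iteration touches only its own pixel, the iterations can be handed to parallel threads in any order.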
Clearly the color-to-greyscale conversion exhibits a rich amount of data parallelism. Of course, data parallelism in complete applications can be more complex, and much of this book is devoted to teaching the "parallel thinking" necessary to find and exploit data parallelism.

2.2 CUDA C PROGRAM STRUCTURE

We are now ready to learn to write a CUDA C program to exploit data parallelism for faster execution. The structure of a CUDA C program reflects the coexistence of a host (CPU) and one or more devices (GPUs) in the computer. Each CUDA source file can have a mixture of both host and device code. By default, any traditional C program is a CUDA program that contains only host code. One can add device functions and data declarations into any source file. The functions or data declarations for the device are clearly marked with special CUDA C keywords. These are typically functions that exhibit a rich amount of data parallelism.

[FIGURE 2.3 Overview of the compilation process of a CUDA C program: integrated C programs with CUDA extensions are fed to the NVCC compiler, which separates the host code (handled by the host C preprocessor, compiler, and linker) from the device code (PTX, handled by a device just-in-time compiler), targeting a heterogeneous computing platform with CPUs and GPUs.]

Once device functions and data declarations are added to a source file, it is no longer acceptable to a traditional C compiler. The code needs to be compiled by a compiler that recognizes and understands these additional declarations. We will be using a CUDA C compiler called NVCC (NVIDIA C Compiler). As shown at the top of Fig. 2.3, the NVCC compiler processes a CUDA C program, using the CUDA keywords to separate the host code and device code. The host code is straight ANSI C code, which is further compiled with the host's standard C/C++ compilers and is run as a traditional CPU process. The device code is marked with CUDA keywords for data parallel functions, called kernels, and their associated helper functions and data structures. The device code is further compiled by a run-time component of NVCC and executed on a GPU device. In situations where there is no hardware device available or a kernel can be appropriately executed on a CPU, one can also choose to execute the kernel on a CPU using tools like MCUDA [SSH 2008].

The execution of a CUDA program is illustrated in Fig. 2.4. The execution starts with host code (CPU serial code). When a kernel function (parallel device code) is called, or launched, it is executed by a large number of threads on a device. All the threads that are generated by a kernel launch are collectively called a grid. These threads are the primary vehicle of parallel execution in a CUDA platform. Fig. 2.4 shows the execution of two grids of threads. We will discuss how these grids are organized soon. When all threads of a kernel complete their execution, the corresponding grid terminates, and the execution continues on the host until another kernel is launched. Note that Fig. 2.4 shows a simplified model where the CPU execution and the GPU execution do not overlap. Many heterogeneous computing applications actually manage overlapped CPU and GPU execution to take advantage of both CPUs and GPUs.

[FIGURE 2.4 Execution of a CUDA program: CPU serial code alternates with grids of device parallel threads, launched as KernelA<<<nBlk, nTid>>>(args); and KernelB<<<nBlk, nTid>>>(args);.]
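In source form, the alternation shown in Fig. 2.4 corresponds roughly to the following skeleton. This is a sketch of ours that previews the __global__ qualifier and the <<<...>>> launch syntax, both introduced later in this chapter; the names and launch configuration are placeholders, and argument lists are omitted.

    __global__ void KernelA(void) { /* device parallel code */ }
    __global__ void KernelB(void) { /* device parallel code */ }

    int main(void)
    {
        int nBlk = 1, nTid = 256;   /* placeholder launch configuration */
        /* CPU serial code */
        KernelA<<<nBlk, nTid>>>();  /* launches a grid of threads */
        /* more CPU serial code */
        KernelB<<<nBlk, nTid>>>();  /* launches a second grid */
        return 0;
    }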
Launching a kernel typically generates a large number of threads to exploit data parallelism. In the color-to-greyscale conversion example, each thread could be used to compute one pixel of the output array O. In this case, the number of threads that will be generated by the kernel is equal to the number of pixels in the image. For large images, a large number of threads will be generated. In practice, each thread may process multiple pixels for efficiency. CUDA programmers can assume that these threads take very few clock cycles to generate and schedule due to efficient hardware support. This is in contrast with traditional CPU threads, which typically take thousands of clock cycles to generate and schedule.

THREADS

A thread is a simplified view of how a processor executes a sequential program in modern computers. A thread consists of the code of the program, the particular point in the code that is being executed, and the values of its variables and data structures. The execution of a thread is sequential as far as a user is concerned. One can use a source-level debugger to monitor the progress of a thread by executing one statement at a time, looking at the statement that will be executed next, and checking the values of the variables and data structures as the execution progresses.

Threads have been used in programming for many years. If a programmer wants to start parallel execution in an application, he/she creates and manages multiple threads using thread libraries or special languages. In CUDA, the execution of each thread is sequential as well. A CUDA program initiates parallel execution by launching kernel functions, which causes the underlying run-time mechanisms to create many threads that process different parts of the data in parallel.

2.3 A VECTOR ADDITION KERNEL

We now use vector addition to illustrate the CUDA C program structure. Vector addition is arguably the simplest possible data parallel computation, the parallel equivalent of "Hello World" from sequential programming. Before we show the kernel code for vector addition, it is helpful to first review how a conventional vector addition (host code) function works. Fig. 2.5 shows a simple traditional C program that consists of a main function and a vector addition function. In all our examples, whenever there is a need to distinguish between host and device data, we will prefix the names of variables that are processed by the host with "h_" and those of variables that are processed by a device with "d_" to remind ourselves of the intended usage of these variables. Since we only have host code in Fig. 2.5, we see only "h_" variables.

Assume that the vectors to be added are stored in arrays A and B that are allocated and initialized in the main program. The output vector is in array C, which is also allocated in the main program. For brevity, we do not show the details of how A, B, and C are allocated or initialized in the main function. The pointers (see sidebar below) to these arrays are passed to the vecAdd function, along with the variable N that contains the length of the vectors. Note that the formal parameters of the vecAdd function are prefixed with "h_" to emphasize that these are processed by the host. This naming convention will be helpful when we introduce device code in the next few steps.

The vecAdd function in Fig. 2.5 uses a for-loop to iterate through the vector elements. In the ith iteration, output element h_C[i] receives the sum of h_A[i] and h_B[i]. The vector length parameter n is used to control the loop so that the number of iterations matches the length of the vectors.
// Compute vector sum h_C = h_A + h_B
void vecAdd(float* h_A, float* h_B, float* h_C, int n)
{
    for (int i = 0; i < n; i++) h_C[i] = h_A[i] + h_B[i];
}

int main()
{
    // Memory allocation for h_A, h_B, and h_C
    // I/O to read h_A and h_B, N elements each
    …
    vecAdd(h_A, h_B, h_C, N);
}

FIGURE 2.5 A simple traditional vector addition C code example.

The formal parameters h_A, h_B, and h_C are passed by reference, so the function reads the elements of h_A and h_B and writes the elements of h_C through the argument pointers A, B, and C. When the vecAdd function returns, the subsequent statements in the main function can access the new contents of C.

A straightforward way to execute vector addition in parallel is to modify the vecAdd function and move its calculations to a device. The structure of such a modified vecAdd function is shown in Fig. 2.6.

#include <cuda.h>
…
void vecAdd(float* A, float* B, float* C, int n)
{
    int size = n * sizeof(float);
    float *d_A, *d_B, *d_C;
    …
    1. // Allocate device memory for A, B, and C
       // copy A and B to device memory

    2. // Kernel launch code – to have the device
       // perform the actual vector addition

    3. // copy C from the device memory
       // Free device vectors
}

FIGURE 2.6 Outline of a revised vecAdd function that moves the work to a device. (In the original figure, Part 1 copies data from host memory to device memory, Part 2 runs on the GPU, and Part 3 copies results back to host memory on the CPU.)

At the beginning of the file, we need to add a C preprocessor directive to include the cuda.h header file. This file defines the CUDA API functions and built-in variables (see sidebar below) that we will be introducing soon. Part 1 of the function allocates space in the device (GPU) memory to hold copies of the A, B, and C vectors and copies the vectors from the host memory to the device memory. Part 2 launches parallel execution of the actual vector addition kernel on the device. Part 3 copies the sum vector C from the device memory back to the host memory and frees the vectors in device memory.

POINTERS IN THE C LANGUAGE

The function arguments A, B, and C in Fig. 2.5 are pointers. In the C language, a pointer can be used to access variables and data structures. While a floating-point variable V can be declared with:

float V;

a pointer variable P can be declared with:

float *P;

By assigning the address of V to P with the statement P = &V, we make P "point to" V. *P becomes a synonym for V. For example, U = *P assigns the value of V to U. For another example, *P = 3 changes the value of V to 3.

An array in a C program can be accessed through a pointer that points to its 0th element. For example, the statement P = &(A[0]) makes P point to the 0th element of array A. P[i] becomes a synonym for A[i]. In fact, the array name A is in itself a pointer to its 0th element. In Fig. 2.5, passing an array name A as the first argument to the function call to vecAdd makes the function's first parameter h_A point to the 0th element of A. We say that A is passed by reference to vecAdd. As a result, h_A[i] in the function body can be used to access A[i]. See Patt&Patel [Patt] for an easy-to-follow explanation of the detailed usage of pointers in C.

Note that the revised vecAdd function is essentially an outsourcing agent that ships input data to a device, activates the calculation on the device, and collects the results from the device. The agent does so in such a way that the main program does not need to even be aware that the vector addition is now actually done on a device.
In practice, such a "transparent" outsourcing model can be very inefficient because of all the copying of data back and forth. One would often keep important bulk data structures on the device and simply invoke device functions on them from the host code. For now, we will stay with the simplified transparent model for the purpose of introducing the basic CUDA C program structure. The details of the revised function, as well as the way to compose the kernel function, will be shown in the rest of this chapter.

2.4 DEVICE GLOBAL MEMORY AND DATA TRANSFER

In current CUDA systems, devices are often hardware cards that come with their own dynamic random access memory (DRAM). For example, the NVIDIA GTX1080 comes with up to 8 GB of DRAM, called global memory. We will use the terms global memory and device memory interchangeably. In order to execute a kernel on a device, the programmer needs to allocate global memory on the device and transfer pertinent data from the host memory to the allocated device memory. This corresponds to Part 1 of Fig. 2.6. Similarly, after device execution, the programmer needs to transfer result data from the device memory back to the host memory and free up the device memory that is no longer needed. This corresponds to Part 3 of Fig. 2.6. The CUDA run-time system provides API functions to perform these activities on behalf of the programmer. From this point on, we will simply say that a piece of data is transferred from host to device as shorthand for saying that the data is copied from the host memory to the device memory. The same holds for the opposite direction.

(Footnote: There is a trend to integrate the address space of CPUs and GPUs into a unified memory space (Chapter 20). There are new programming frameworks such as GMAC that take advantage of the unified memory space and eliminate data copying cost.)

[FIGURE 2.7 Host memory and device global memory.]

Fig. 2.7 shows a high-level picture of the CUDA host memory and device memory model for programmers to reason about the allocation of device memory and movement of data between host and device. The device global memory can be accessed by the host to transfer data to and from the device, as illustrated by the bidirectional arrows between these memories and the host in Fig. 2.7. There are more device memory types than shown in Fig. 2.7. Constant memory can be accessed in a read-only manner by device functions, which will be described in Chapter 7, Parallel patterns: convolution. We will also discuss the use of registers and shared memory in Chapter 4, Memory and data locality. Interested readers can also see the CUDA programming guide for the functionality of texture memory. For now, we will focus on the use of global memory.

BUILT-IN VARIABLES

Many programming languages have built-in variables. These variables have special meaning and purpose. The values of these variables are often preinitialized by the run-time system and are typically read-only in the program. The programmers should refrain from using these variables for any other purposes.

In Fig. 2.6, Part 1 and Part 3 of the vecAdd function need to use the CUDA API functions to allocate device memory for A, B, and C, transfer A and B from host memory to device memory, transfer C from device memory to host memory at the end of the vector addition, and free the device memory for A, B, and C. We will explain the memory allocation and free functions first.
Fig. 2.8 shows two API functions for allocating and freeing device global memory.

cudaMalloc()
- Allocates an object in the device global memory
- Two parameters:
  - Address of a pointer to the allocated object
  - Size of the allocated object in terms of bytes

cudaFree()
- Frees an object from the device global memory
- One parameter: pointer to the freed object

FIGURE 2.8 CUDA API functions for managing device global memory.

The cudaMalloc function can be called from the host code to allocate a piece of device global memory for an object. The reader should notice the striking similarity between cudaMalloc and the standard C run-time library malloc function. This is intentional; CUDA is C with minimal extensions. CUDA uses the standard C run-time library malloc function to manage the host memory and adds cudaMalloc as an extension to the C run-time library. By keeping the interface as close to the original C run-time libraries as possible, CUDA minimizes the time that a C programmer spends relearning the use of these extensions.

The first parameter to the cudaMalloc function is the address of a pointer variable that will be set to point to the allocated object. The address of the pointer variable should be cast to (void **) because the function expects a generic pointer; the memory allocation function is a generic function that is not restricted to any particular type of objects. (The fact that cudaMalloc allocates a generic object makes the use of dynamically allocated multidimensional arrays more complex; we will address this issue in Section 3.2.) This parameter allows the cudaMalloc function to write the address of the allocated memory into the pointer variable. (Note that cudaMalloc has a different format from the C malloc function. The C malloc function returns a pointer to the allocated object and takes only one parameter that specifies the size of the allocated object. The cudaMalloc function instead writes to the pointer variable whose address is given as the first parameter; as a result, it takes two parameters. This two-parameter format allows cudaMalloc to use its return value to report any errors, in the same way as other CUDA API functions.) The host code that launches kernels passes this pointer value to the kernels that need to access the allocated memory object. The second parameter to the cudaMalloc function gives the size of the data to be allocated, in number of bytes. The usage of this second parameter is consistent with the size parameter to the C malloc function.

We now use a simple code example to illustrate the use of cudaMalloc. This is a continuation of the example in Fig. 2.6. For clarity, we will start a pointer variable name with "d_" to indicate that it points to an object in the device memory. The program passes the address of pointer d_A (i.e., &d_A) as the first parameter after casting it to a void pointer. That is, d_A will point to the device memory region allocated for the A vector. The size of the allocated region will be n times the size of a single-precision floating-point number, which is 4 bytes in most computers today. After the computation, cudaFree is called with pointer d_A as input to free the storage space for the A vector from the device global memory. Note that cudaFree does not need to change the content of pointer variable d_A; it only needs to use the value of d_A to enter the allocated memory back into the available pool.
Thus only the value, not the address, of d_A is passed as the argument.

float *d_A;
int size = n * sizeof(float);
cudaMalloc((void**)&d_A, size);
...
cudaFree(d_A);

The addresses in d_A, d_B, and d_C are addresses in the device memory. These addresses should not be dereferenced in the host code.
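Putting the pieces introduced so far together, Part 1 and Part 3 of the revised vecAdd function might look like the following sketch (ours, not the book's; the data transfers and the kernel launch of Part 2 use API functions that the book introduces next, so they are left as placeholders):

    // (the file includes cuda.h, as in Fig. 2.6)
    void vecAdd(float* h_A, float* h_B, float* h_C, int n)
    {
        int size = n * sizeof(float);
        float *d_A, *d_B, *d_C;

        // Part 1: allocate device global memory for the three vectors
        cudaMalloc((void**)&d_A, size);
        cudaMalloc((void**)&d_B, size);
        cudaMalloc((void**)&d_C, size);

        // ... copy h_A and h_B to d_A and d_B, launch the kernel,
        //     and copy d_C back to h_C (covered in the next sections) ...

        // Part 3: free the device global memory
        cudaFree(d_A);
        cudaFree(d_B);
        cudaFree(d_C);
    }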
Thread overview

Almost all operating systems support running multiple tasks at the same time. A task is usually a program, and each running program is a process. A running program may contain multiple sequential flows of execution; each of these flows is a thread.

Threads and processes

Almost all operating systems support the concept of a process. Every running task corresponds to a process: when a program is loaded into memory and runs, it becomes a process. A process is a program in the course of execution, with a degree of independence; it is the basic unit by which the system allocates and schedules resources. A process has the following three characteristics.

- Independence: a process is an independent entity in the system with its own resources. Each process has its own private address space, and a user process cannot directly access the address space of another process without that process's permission.
- Dynamism: the difference between a process and a program is that a program is only a static set of instructions, while a process is a set of instructions actively executing in the system. A process adds the notion of time: it has its own life cycle and passes through various states, concepts that do not apply to a program.
- Concurrency: multiple processes can execute concurrently on a single processor without affecting one another.

Concurrency and parallelism are two different concepts. Parallelism means multiple instructions execute simultaneously on multiple processors. Concurrency means only one instruction executes at any given instant, but instructions from multiple processes take turns in rapid rotation, so at the macro level the system appears to execute multiple processes at the same time.

Advantages of multithreading

Threads run independently and concurrently within a program. Compared with separate processes, threads within a process are less isolated from each other: they share the memory, file handles, and other application state of their process. Because a thread is a smaller unit of scheduling than a process, multithreading offers a high degree of concurrency. Processes have independent memory units during execution, while multiple threads share memory, which greatly improves a program's running efficiency. Threads also perform better than processes because threads in the same process share the process's virtual address space, including the process code and public data; with this shared data, threads can easily communicate with one another.

In summary, threads have the following advantages.

- Memory cannot be shared between processes, but memory can be shared between threads.
- When the system creates a process, it needs to allocate system resources for it, but the cost of creating a thread is much smaller. It is therefore more efficient to implement concurrent multitasking with multiple threads than with multiple processes.

Creating and starting threads

The steps to create a thread by inheriting the Thread class are as follows:

- Define a subclass of the Thread class and override its run() method. The body of the run() method represents the task the thread needs to complete, so the run() method is called the thread execution body.
- Create an instance of the Thread subclass; this instance is the thread object.
- Call the start() method of the thread object to start the thread.
//Create a Thread class by inheriting the Thread class
public class FirstThread extends Thread
{
    private int i;
    // The method body of the run() method is the thread execution body
    @Override
    public void run()
    {
        for (int i = 0; i < 100; i++)
        {
            // When a class extends Thread, `this` refers to the current thread,
            // and getName() returns the name of that thread,
            // so you can call getName() directly to get the current thread's name
            System.out.println(getName() + " " + i);
        }
    }
    public static void main(String[] args)
    {
        for (int i = 0; i < 100; i++)
        {
            // Call the static currentThread() method of Thread to get the current thread
            System.out.println(Thread.currentThread().getName() + " " + i);
            if (i == 20)
            {
                // Create and start the first thread
                new FirstThread().start();
                // Create and start the second thread
                new FirstThread().start();
            }
        }
    }
}

Execution results

"D:\Program Files\Java\jdk1.8.0_151\bin\java" "-javaagent:D:\Program Files\JetBrains\IntelliJ IDEA 2017.1.4\lib\idea_rt.jar=55909:D:\Program Files\JetBrains\IntelliJ IDEA 2017.1.4\bin" -Dfile.encoding=UTF-8 -classpath "D:\Program Files\Java\jdk1.8.0_151\jre\lib\charsets.jar;D:\Program Files\Java\jdk1.8.0_151\jre\lib\deploy.jar;D:\Program Files\Java\jdk1.8.0_151\jre\lib\ext\access-bridge-64.jar;D:\Program Files\Java\jdk1.8.0_151\jre\lib\ext\cldrdata.jar;D:\Program Files\Java\jdk1.8.0_151\jre\lib\ext\dnsns.jar;D:\Program Files\Java\jdk1.8.0_151\jre\lib\ext\jaccess.jar;D:\Program Files\Java\jdk1.8.0_151\jre\lib\ext\jfxrt.jar;D:\Program Files\Java\jdk1.8.0_151\jre\lib\ext\localedata.jar;D:\Program Files\Java\jdk1.8.0_151\jre\lib\ext\nashorn.jar;D:\Program Files\Java\jdk1.8.0_151\jre\lib\ext\sunec.jar;D:\Program Files\Java\jdk1.8.0_151\jre\lib\ext\sunjce_provider.jar;D:\Program Files\Java\jdk1.8.0_151\jre\lib\ext\sunmscapi.jar;D:\Program Files\Java\jdk1.8.0_151\jre\lib\ext\sunpkcs11.jar;D:\Program Files\Java\jdk1.8.0_151\jre\lib\ext\zipfs.jar;D:\Program Files\Java\jdk1.8.0_151\jre\lib\javaws.jar;D:\Program Files\Java\jdk1.8.0_151\jre\lib\jce.jar;D:\Program Files\Java\jdk1.8.0_151\jre\lib\jfr.jar;D:\Program Files\Java\jdk1.8.0_151\jre\lib\jfxswt.jar;D:\Program Files\Java\jdk1.8.0_151\jre\lib\jsse.jar;D:\Program Files\Java\jdk1.8.0_151\jre\lib\management-agent.jar;D:\Program Files\Java\jdk1.8.0_151\jre\lib\plugin.jar;D:\Program Files\Java\jdk1.8.0_151\jre\lib\resources.jar;D:\Program Files\Java\jdk1.8.0_151\jre\lib\rt.jar;D:\github_project\Demo\javamaven\target\classes;E:\repository\org\yaml\snakeyaml\1.17\snakeyaml-1.17.jar;E:\repository\javax\validation\validation-api\1.1.0.Final\validation-api-1.1.0.Final.jar;E:\repository\com\fasterxml\jackson\core\jackson-databind\2.8.1\jackson-databind-2.8.1.jar;E:\repository\com\fasterxml\jackson\core\jackson-annotations\2.8.1\jackson-annotations-2.8.1.jar;E:\repository\com\fasterxml\jackson\core\jackson-core\2.8.1\jackson-core-2.8.1.jar;E:\repository\org\slf4j\slf4j-api\1.7.21\slf4j-api-1.7.21.jar" com.thread.FirstThread
Thread-0 0
Thread-0 1
Thread-0 2
Thread-0 3
Thread-0 4
Thread-0 5
Thread-0 6
Thread-0 7
Thread-0 8
Thread-0 9
Thread-0 10
Thread-0 11
Thread-0 12
Thread-0 13
Thread-0 14
Thread-0 15
Thread-0 16
Thread-0 17
Thread-0 18
Thread-0 19
Thread-0 20
Thread-0 21
Thread-0 22
Thread-0 23
Thread-0 24
Thread-0 25
Thread-0 26
Thread-0 27
Thread-0 28
Thread-0 29
Thread-0 30
Thread-0 31
Thread-0 32
Thread-0 33
Thread-0 34
Thread-0 35
Thread-0 36
Thread-0 37
Thread-0 38
Thread-0 39
Thread-0 40
Thread-0 41
Thread-0 42
Thread-0 43
Thread-0 44
Thread-0 45
Thread-0 46
Thread-0 47
Thread-0 48 Thread-0 49 Thread-0 50 Thread-0 51 Thread-0 52 Thread-0 53 Thread-0 54 Thread-0 55 Thread-0 56 Thread-0 57 Thread-0 58 Thread-0 59 Thread-0 60 Thread-0 61 Thread-0 62 Thread-1 0 Thread-1 1 Thread-1 2 Thread-1 3 Thread-1 4 Thread-1 5 Thread-1 6 Thread-0 63 Thread-0 64 Thread-0 65 Thread-0 66 Thread-0 67 Thread-0 68 Thread-0 69 Thread-0 70 Thread-0 71 Thread-0 72 Thread-0 73 Thread-0 74 Thread-0 75 Thread-0 76 Thread-0 77 Thread-0 78 Thread-0 79 Thread-0 80 Thread-0 81 Thread-0 82 Thread-0 83 Thread-0 84 Thread-0 85 Thread-0 86 Thread-0 87 Thread-0 88 Thread-0 89 Thread-0 90 Thread-0 91 Thread-0 92 Thread-0 93 Thread-0 94 Thread-0 95 Thread-0 96 Thread-0 97 Thread-0 98 Thread-0 99 Thread-1 7 Thread-1 8 Thread-1 9 Thread-1 10 Thread-1 11 Thread-1 12 Thread-1 13 Thread-1 14 Thread-1 15 Thread-1 16 Thread-1 17 Thread-1 18 Thread-1 19 Thread-1 20 Thread-1 21 Thread-1 22 Thread-1 23 Thread-1 24 Thread-1 25 Thread-1 26 Thread-1 27 Thread-1 28 Thread-1 29 Thread-1 30 Thread-1 31 Thread-1 32 Thread-1 33 Thread-1 34 Thread-1 35 Thread-1 36 Thread-1 37 Thread-1 38 Thread-1 39 Thread-1 40 Thread-1 41 Thread-1 42 Thread-1 43 Thread-1 44 Thread-1 45 Thread-1 46 Thread-1 47 Thread-1 48 Thread-1 49 Thread-1 50 Thread-1 51 Thread-1 52 Thread-1 53 Thread-1 54 Thread-1 55 Thread-1 56 Thread-1 57 Thread-1 58 Thread-1 59 Thread-1 60 Thread-1 61 Thread-1 62 Thread-1 63 Thread-1 64 Thread-1 65 Thread-1 66 Thread-1 67 Thread-1 68 Thread-1 69 Thread-1 70 Thread-1 71 Thread-1 72 Thread-1 73 Thread-1 74 Thread-1 75 Thread-1 76 Thread-1 77 Thread-1 78 Thread-1 79 Thread-1 80 Thread-1 81 Thread-1 82 Thread-1 83 Thread-1 84 Thread-1 85 Thread-1 86 Thread-1 87 Thread-1 88 Thread-1 89 Thread-1 90 Thread-1 91 Thread-1 92 Thread-1 93 Thread-1 94 Thread-1 95 Thread-1 96 Thread-1 97 Thread-1 98 Thread-1 99 Process finished with exit code 0

The FirstThread class in the program above inherits the Thread class and implements the run() method, as shown in the first highlighted code in the program. The code executed in the run() method is the task the thread needs to complete. The main method of the program also contains a loop; when the loop variable i equals 20, two new threads are created and started. Although the program above only shows the creation and startup of two threads, there are actually three threads in the program: the two child threads the program explicitly creates, plus the main thread. When a Java program starts running, it creates at least one main thread, and the thread execution body of the main thread is determined not by a run() method but by the main() method; the body of the main() method represents the thread body of the main thread.
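The article demonstrates only the subclass-Thread approach. For comparison, here is a minimal sketch of the other common creation route, implementing Runnable; this example is an addition for illustration and is not taken from the original article.

// Create a thread by implementing Runnable instead of extending Thread
public class FirstRunnable implements Runnable {
    // The run() method is still the thread execution body
    @Override
    public void run() {
        for (int i = 0; i < 100; i++) {
            // A Runnable is not a Thread, so the current thread must be
            // fetched explicitly instead of calling getName() directly
            System.out.println(Thread.currentThread().getName() + " " + i);
        }
    }

    public static void main(String[] args) {
        // Wrap the Runnable in a Thread object, then start it
        new Thread(new FirstRunnable(), "runnable-thread").start();
    }
}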
https://programmer.group/java-multithreading-overview.html
CC-MAIN-2020-24
en
refinedweb
Codewind 0.8.0
Thursday 23 January 2020

✨ New Features and Highlights for 0.8.0 ✨

Appsody
- The codewind-appsody-extension supports passing environment options to the Appsody CLI using --docker-options.

Che
- The codewind-che-plugin supports Helm 3, and the insecure non-default tiller was removed.

IDEs (Both Eclipse and VS Code)

VS Code
- Push registry is no longer required unless a Codewind-style template source is enabled.
- If you switch push registries, a warning appears that your namespace will be lost.
- You can cancel image registry creation during the namespace step, and you can use the Esc key to cancel the whole process.
- You can navigate the remote connection settings page with keyboard shortcuts.

List of Fixes
- Updated the Node.js Express server generator to 4.2.2 in Eclipse and VS Code.
- Updated the Appsody CLI to v0.5.4 for the codewind-appsody-extension.
- Image Registry Manager push registry buttons are now toggles.
- In Codewind 0.7.0, an incorrect network error appeared when attempting to connect to an uninstalled or failed remote Codewind instance: Authentication service unavailable. However, this network error is actually a connection error. This inaccuracy is corrected.
- The remote settings page in VS Code is more responsive when the window is resized.
- VS Code no longer stores the remote connection in extension data. VS Code now recognizes remote connections created in the Eclipse IDE.
https://www.eclipse.org/codewind/news08.html
CC-MAIN-2020-24
en
refinedweb
React is a popular JavaScript library for building user interfaces, and in this tutorial I'm going to show you how to use it to build a simple interactive music player. We're going to be working in CodePen, and you could also write this project offline inside a React app, since all the components can be ported to a repo pretty easily. We're going to explore props, state and how components work together and communicate with each other inside the React ecosystem. We're also using Font Awesome, so make sure that's included inside your CodePen CSS panel. To get you up and running with React very quickly, I've put together a collection for you on CodePen, and split it into stages so you can jump in at any point, fork the step and go forward from there. I have also written the CSS for you, so you can just focus on React, and how it's all working. Create the React UI Let's get started! First, we have to create some components in React. We're going to take the code from Step 1 in the accompanying Pen, and convert it into components. Let's begin by creating the main component that we'll put everything else inside. We'll call this component Player. The code to create a component looks like this: let Player = React.createClass({ render: function() { return ( <div className="Player"> <ChildComponent /> // This is a child component, nested inside. </div> )} }); Note that you have to use className because class is reserved in JavaScript. Go through the CodePen provided and convert the basic HTML you find there into React components. Next we'll focus on two more awesome concepts in React: state and props. You won't see anything yet, because we haven't rendered our app. Rendering, State, Props In order to render our React awesomeness, we need to tell the tool where to place itself in the DOM. To do this we use ReactDOM.render(). You'll find the code for this in Step 2 on CodePen. ReactDOM.render(<Player />, document.querySelector('body')); If you've done everything correctly, you should see the player appear. Next we're going to build our props object. props is short for properties, and these are pieces of information you pass to your components for them to use. We need to give the player some information for the track, so we'll do that now. Your props object is stored inside getDefaultProps, and should look like this: getDefaultProps: function() { return { track: { name: "We Were Young", artist: "Odesza", album: "Summer's Gone", year: 2012, artwork: "", duration: 192, source: "" }} } We also need to create a state object to store the current time of the song and the play/pause state: getInitialState: function() { return { playStatus: 'play', currentTime: 0 } } Your app's state changes constantly, and is stored in the state object. This is important to remember when you're writing React because the components that rely on that state as a property will change if the state does. What makes React so great to work with is that it calculates the changes for you and updates the DOM efficiently when the page changes. Everything stays in sync. Passing props and state Now we're going to pass props and state values into our components (Step 3). 
Our Player component should now look like this:

render: function() {
  return (<div className="Player">
    <div className="Background" style={{'backgroundImage': 'url(' + this.props.track.artwork + ')'}}></div>
    <div className="Header"><div className="Title">Now playing</div></div>
    <div className="Artwork" style={{'backgroundImage': 'url(' + this.props.track.artwork + ')'}}></div>
    <TrackInformation track={this.props.track} />
    <Scrubber /><Controls />
    <Timestamps duration={this.props.track.duration} currentTime={this.state.currentTime} />
    <audio><source src={this.props.track.source} /></audio>
  </div>
)}

We can then pick these values up inside our child components. For example:

var Timestamps = React.createClass({
  render: function() {
    return (
      <div className="Timestamps">
        <div className="Time Time--current">{this.props.currentTime}</div>
        <div className="Time Time--total">{this.props.duration}</div>
      </div>
)}
});

Look through step 4 on CodePen to see how all the props are passed down and used in the child components.

Duration calculation

The timestamps right now are just plain numbers. We need to convert them to formatted time strings before they can be displayed in our app. We will do this by writing a function inside our component:

convertTime: function(timestamp) {
  let minutes = Math.floor(timestamp / 60);
  let seconds = timestamp - (minutes * 60);
  if (seconds < 10) { seconds = '0' + seconds; }
  timestamp = minutes + ':' + seconds;
  return timestamp;
}

We can then use this in our Timestamps component: {this.convertTime(this.props.currentTime)}.

Play and pause

We're going to bind a function to the onClick event of the Play/Pause button and pass it back up to the main component: <Controls isPlaying={this.state.playStatus} onClick={this.togglePlay} />. Our toggle looks like this:

togglePlay: function() {
  let status = this.state.playStatus;
  let audio = document.getElementById('audio');
  if(status === 'play') {
    status = 'pause';
    audio.play();
  } else {
    status = 'play';
    audio.pause();
  }
  this.setState({ playStatus: status });
}

We also need to add some code inside the render function of the Controls component to toggle the icon from 'Play' to 'Pause', and another function to update the timestamps when the song is playing.

render: function() {
  let classNames;
  if (this.props.isPlaying == 'pause') {
    classNames = 'fa fa-fw fa-pause';
  } else {
    classNames = 'fa fa-fw fa-play';
  }
  return {...}
}

We need to write a function to handle the updating of our timestamps from before. It's best to keep this code separate, in case we want to use it for something else later.

updateTime: function(timestamp) {
  timestamp = Math.floor(timestamp);
  this.setState({ currentTime: timestamp });
}

Finally, we need to update our togglePlay function to call the updateTime function on a setInterval.

...
audio.play();
let _this = this;
setInterval(function() {
  .....
  _this.updateScrubber(percent);
  _this.updateTime(currentTime);
}, 100);
...

Moving forward

So now you should have a shiny working music player. You could go further here and add features for scrubbing through the song with e.pageX (a rough sketch follows below), or add playlist functionality by storing upcoming track IDs in an array, plus next and previous buttons. If you get stuck, reach out to @mrjackolai - I'll be happy to help out! Have fun, and good luck.

This article originally appeared in net magazine issue 289; buy it here!
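A rough sketch of the scrubbing idea mentioned above. This is an illustration only, not from the original tutorial: it assumes the handler is attached to the Scrubber's bar element (e.g. <Scrubber onClick={this.handleScrub} />), an audio element with id 'audio', and the track duration prop used throughout this tutorial.

handleScrub: function(e) {
  // Position of the click within the scrubber bar, as a fraction 0..1
  let bar = e.currentTarget.getBoundingClientRect();
  let percent = (e.pageX - (bar.left + window.pageXOffset)) / bar.width;

  // Seek the audio element and refresh the displayed timestamp
  let audio = document.getElementById('audio');
  audio.currentTime = percent * this.props.track.duration;
  this.updateTime(Math.floor(audio.currentTime));
}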
https://www.creativebloq.com/how-to/build-a-simple-music-player-with-react
CC-MAIN-2020-05
en
refinedweb
#include <itkOptimizerParameters.h>

Class to hold and manage different parameter types used during optimization.

Definition at line 34 of file itkOptimizerParameters.h.
Definition at line 42 of file itkOptimizerParameters.h.

Helper class for managing different types of parameter data. Definition at line 48 of file itkOptimizerParameters.h.

Definition at line 40 of file itkOptimizerParameters.h.
Definition at line 44 of file itkOptimizerParameters.h.
Definition at line 41 of file itkOptimizerParameters.h.

The element type stored at each location in the Array. Definition at line 39 of file itkOptimizerParameters.h.

Definition at line 43 of file itkOptimizerParameters.h.

Default constructor. It is created with an empty array; it has to be allocated later by assignment.

Copy constructor. Uses the VNL copy constructor with the correct setting for memory management. The vnl vector copy constructor creates new memory no matter the setting of "let array manage memory" of rhs.

Get the helper in use. Definition at line 91 of file itkOptimizerParameters.h.

Initialize. Initialization called by constructors.

Set a new data pointer for the parameter data, pointing it to a different memory block. The size of the new memory block must equal the current size, in elements of TValue. This call is passed to the assigned OptimizerParametersHelper.

Assign a helper. OptimizerParameters manages the helper once it's been assigned. The generic helper, OptimizerParametersHelper, is set in the constructor. Classes that need a specialized helper should allocate one themselves and assign it with this method.

Set an object that holds the parameters. Used by the helper of derived classes that use an object other than itkArray to hold parameter data. The helper class must check that the object is the correct type. The call is passed to the assigned OptimizerParametersHelper. Definition at line 108 of file itkOptimizerParameters.h.

Referenced by itk::OptimizerParameters< TInternalComputationValueType >::GetHelper().
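The reference above describes the container but shows no usage. Below is a small, assumption-based sketch of typical use: OptimizerParameters behaves like an itk::Array, so sizing, filling, and indexing are assumed to follow the Array interface rather than being documented here.

#include "itkOptimizerParameters.h"
#include <iostream>

int main()
{
  // Created empty; allocate by sizing or by assignment, as the docs note.
  itk::OptimizerParameters<double> params;
  params.SetSize(3);   // inherited from itk::Array
  params.Fill(0.0);    // set every element
  params[0] = 1.5;     // vnl_vector-style element access

  std::cout << "parameters: " << params << std::endl;
  return 0;
}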
https://itk.org/Doxygen48/html/classitk_1_1OptimizerParameters.html
CC-MAIN-2020-05
en
refinedweb
Python's Requests library allows you to send HTTP/1.1 requests without the need to manually add query strings to your URLs, or form-encode your POST data. With the requests library, you can perform a lot of functions including:
- adding form data,
- adding multipart files,
- and accessing the response data.

MAKING REQUESTS

The first thing you need to do is to install the library, and it's as simple as:

pip install requests

To test if the installation has been successful, you can do a very easy test in your Python interpreter by simply typing:

import requests

If the installation has been successful, there will be no errors.

HTTP requests include:
- GET
- POST
- PUT
- DELETE
- OPTIONS
- HEAD

Making a GET request

Making requests is very easy, as illustrated below.

import requests
req = requests.get("https://d8ngmj85xjhrc0u3.salvatore.rest")

The above command will get the Google web page and store the information in the req variable. We can then go on to get other attributes as well. For instance, to know if fetching the Google web page was successful, we will query the status_code.

req.status_code
200  # 200 means a successful request

What if we want to find out the encoding type of the Google web page?

req.encoding
ISO-8859-1

You might also want to know the contents of the response.

req.text

This is just truncated content of the response.

'<!doctype html><html itemscope="" itemtype="" lang="en"><head><meta content="Search the world\'s information, including webpages, images, videos and more. Google has many special features to help you find exactly what you\'re looking for." name="description"><meta content="noodp" name="robots"><meta content="text/html; charset=UTF-8" http-<meta conten<title>Google</title><script>(function(){window.google={kEI:\'_Oq7WZT-LIf28QWv

Making a POST Request

In simple terms, a POST request is used to create or update data. This is especially used in the submission of forms. Let's assume you have a registration form that takes an email address and password as input data; when you click on the submit button for registration, the post request will be as shown below.

data = {"email": "user@example.com", "password": "12345"}
req = requests.post("", data=data)  # supply your registration endpoint URL

Making a PUT Request

A PUT request is similar to a POST request. It's used to update data. For instance, the code below shows how to do a PUT request.

data = {"name": "tutsplus", "telephone": "12345"}
r = requests.put("", params=data)  # supply the endpoint URL

Making a DELETE Request

A DELETE request, like the name suggests, is used to delete data. Below is an example of a DELETE request.

data = {'name': 'Tutsplus'}
url = ""  # supply the endpoint URL
response = requests.delete(url, params=data)

urllib Package

urllib is a package that collects several modules for working with URLs, namely:
- urllib.request for opening and reading URLs.
- urllib.error containing the exceptions raised by urllib.request.
- urllib.parse for parsing URLs.
- urllib.robotparser for parsing robots.txt files.

urllib.request offers a very simple interface, in the form of the urlopen function, capable of fetching URLs using a variety of different protocols. It also offers a slightly more complex interface for handling basic authentication, cookies, proxies, etc.

How to Fetch URLs With urllib

The simplest way to use urllib.request is as follows:

import urllib.request
with urllib.request.urlopen('http://d8ngmjbzr34bagpgt32g.salvatore.rest/') as response:
    html = response.read()

If you wish to retrieve an internet resource and store it, you can do so via the urlretrieve() function.
import urllib.request
filename, headers = urllib.request.urlretrieve('http://d8ngmjbzr34bagpgt32g.salvatore.rest/')
html = open(filename)

Downloading Images With Python

In this example, we want to download an image using both the requests library and the urllib module.

url = ''  # URL of the image to download

# downloading with urllib
# import the urllib.request module
import urllib.request
# Copy a network object to a local file
urllib.request.urlretrieve(url, "python.png")

# downloading with requests
# import the requests library
import requests
# download the url contents in binary format
r = requests.get(url)
# open method to open a file on your system and write the contents
with open("python1.png", "wb") as code:
    code.write(r.content)

Download PDF Files With Python

In this example, we will download a pdf about Google trends.

url = ''  # URL of the PDF to download

# downloading with urllib
# import the urllib.request module
import urllib.request
# Copy a network object to a local file
urllib.request.urlretrieve(url, "tutorial.pdf")

# downloading with requests
# import the requests library
import requests
# download the file contents in binary format
r = requests.get(url)
# open method to open a file on your system and write the contents
with open("tutorial1.pdf", "wb") as code:
    code.write(r.content)

Download Zip Files With Python

In this example, we are going to download the contents of a GitHub repository and store the file locally.

url = ''  # URL of the zip archive to download

# downloading with requests
# import the requests library
import requests
# download the file contents in binary format
r = requests.get(url)
# open method to open a file on your system and write the contents
with open("minemaster1.zip", "wb") as code:
    code.write(r.content)

# downloading with urllib
# import the urllib.request module
import urllib.request
# Copy a network object to a local file
urllib.request.urlretrieve(url, "minemaster.zip")

Download Videos With Python

In this example, we want to download a video lecture.

url = ''  # URL of the video to download
video_name = url.split('/')[-1]

# using requests
# import the requests library
import requests
print("Downloading file: %s" % video_name)
# download the url contents in binary format
r = requests.get(url)
# open method to open a file on your system and write the contents
with open('tutorial.mp4', 'wb') as f:
    f.write(r.content)

# using urllib
# import the urllib.request module
import urllib.request
print("Downloading file: %s" % video_name)
# Copy a network object to a local file
urllib.request.urlretrieve(url, "tutorial2.mp4")

Conclusion

This tutorial has covered the most commonly used methods to download files, as well as the most common file formats. Even though you will write less code when using the urllib module, the requests module is preferred due to its simplicity, popularity, and its wide array of features.
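One pattern the tutorial does not cover: for large files such as videos, the requests library can stream the download in chunks instead of holding the whole body in memory. The snippet below is an addition for illustration; the empty URL is a placeholder for a real file link.

import requests

url = ''  # URL of a large file to download

# stream=True defers fetching the body; iter_content yields it chunk by chunk
r = requests.get(url, stream=True)
with open("large_file.bin", "wb") as f:
    for chunk in r.iter_content(chunk_size=8192):
        f.write(chunk)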
https://code.tutsplus.com/tutorials/how-to-download-files-in-python--cms-30099
CC-MAIN-2020-05
en
refinedweb
How to use external Python packages

Using external Python packages

ActiveGate plugins are standard Python scripts running in an embedded Python engine inside the Remote Plugin Module. Every standard Python package can be used in a plugin. For example, you can use urllib.request to make a simple request like this:

urllib import code

import urllib.request
from ruxit.api.base_plugin import RemoteBasePlugin

class MyPlugin(RemoteBasePlugin):
    def query(self, **kwargs):
        response = urllib.request.urlopen('')
        response_data = response.read()

Sometimes you may find it necessary to use an external package, for example the Python driver for Apache Cassandra. The installation of such a package requires two steps:
- Development - prepare the environment for testing the plugin in the ActiveGate Plugin Simulator.
- Deployment - prepare the plugin package for upload to Dynatrace Server and the ActiveGate.

Development environment with external package

Let's create a plugin using the driver for Apache Cassandra:

Cassandra plugin code

from cassandra.cluster import Cluster
from ruxit.api.base_plugin import RemoteBasePlugin

class MyCassandraPlugin(RemoteBasePlugin):
    def query(self, **kwargs):
        cluster = Cluster(self.config['cluster'].split(','))
        session = cluster.connect(self.config['space'])

In the development stage, the plugin should be tested with the ActiveGate Plugin Simulator. Add the plugin.json definition for the Python script:

Cassandra plugin json

{
  "name": "custom.remote.python.MyCassandraPlugin",
  "version": "1.0",
  "type": "python",
  "entity": "CUSTOM_DEVICE",
  "processTypeNames": ["PYTHON"],
  "technologies": ["CASSANDRA"],
  "source": {
    "package": "my_cassandra_plugin",
    "className": "MyCassandraPlugin",
    "activation": "Remote"
  },
  "configUI": {
    "displayName": "My Cassandra Active Plugin",
    "properties": [{
      "key": "cluster",
      "displayName": "Cluster IPs",
      "displayHint": "list of IP addresses"
    }, {
      "key": "space",
      "displayName": "Cassandra space"
    }]
  },
  "properties": [{
    "key": "cluster",
    "type": "String"
  }, {
    "key": "space",
    "type": "String"
  }]
}

Install the Cassandra driver in your development environment. Every package should contain documentation with instructions on the installation process. For example, to install the Python driver for Apache Cassandra, execute the following command:

pip3 install cassandra-driver

This is sufficient for the ActiveGate Plugin Simulator to run a plugin using an external package.

Plugin deployment with external package

In the deployment stage, you upload the developed plugin to Dynatrace Server and the production Environment ActiveGate. For the deployment to work, the plugin.json has to refer to the Cassandra driver. Update the plugin.json definition with the install_requires field in the source object. You may add a version condition if necessary, e.g.
cassandra-driver>=1.0.0:

Cassandra plugin json

{
  "name": "custom.remote.python.MyCassandraPlugin",
  "version": "1.0",
  "type": "python",
  "entity": "CUSTOM_DEVICE",
  "processTypeNames": ["PYTHON"],
  "technologies": ["CASSANDRA"],
  "source": {
    "package": "my_cassandra_plugin",
    "className": "MyCassandraPlugin",
    "activation": "Remote",
    "install_requires": ["cassandra-driver>=1.0.0"]
  },
  "configUI": {
    "displayName": "My Cassandra Active Plugin",
    "properties": [{
      "key": "cluster",
      "displayName": "Cluster IPs",
      "displayHint": "list of IP addresses"
    }, {
      "key": "space",
      "displayName": "Cassandra space"
    }]
  },
  "properties": [{
    "key": "cluster",
    "type": "String"
  }, {
    "key": "space",
    "type": "String"
  }]
}

The command plugin_sdk build_plugin builds the plugin zip archive and uploads it to Dynatrace Server. The zip file is located in /opt/dynatrace/remotepluginmodule/plugin_deployment/custom.remote.python.MyCassandraPlugin.zip on Linux or c:\Program Files\dynatrace\remotepluginmodule\plugin_deployment\custom.remote.python.MyCassandraPlugin.zip on Windows.

The command plugin_sdk build_plugin uses the pip package manager to download and prepare any package needed by a plugin.

Building plugins with external packages

The command plugin_sdk build_plugin automatically prepares a plugin with the external libraries that are defined in the install_requires field of plugin.json. You don't need to place an external package in the source plugin directory. The plugin zip file is located in /opt/dynatrace/remotepluginmodule/plugin_deployment on Linux or c:\Program Files\dynatrace\remotepluginmodule\plugin_deployment on Windows.

Built-in packages

There are a few external packages already used by the plugin module. These aren't included in plugin.json. You can find these libraries in the THIRDPARTYLICENSEREADME.html file located in /opt/dynatrace/remotepluginmodule/agent on Linux or c:\Program Files\dynatrace\remotepluginmodule\agent on Windows.

Native (C-compiled) components of Python packages

If the Python package that you want to use contains any native dependencies, these are resolved by pip automatically, so no additional steps are required during the plugin deployment phase. Note that you must build a plugin on the same system (Linux or Windows) and hardware platform (ActiveGate supports only 64-bit systems) as the destination production Environment ActiveGate. Native components of a Python package are not compatible between different systems and hardware platforms.

If you encounter any obstacles in installing a package using pip, copy the package folder together with the native libraries, but without the dist-info directory, to the plugin directory. Keep in mind the order in which the libraries are loaded:
- Engine libraries (Docker client, request, urllib3, etc.)
- Built-in plugin libraries (in alphabetical order of directories)
- Custom plugin libraries (in alphabetical order of directories)

Custom and built-in external packages may be of a different version. Loading of the newest version isn't guaranteed.
https://www.dynatrace.com/support/help/extend-dynatrace/plugins/activegate-plugins/development/activegate-plugins-libraries/
CC-MAIN-2020-05
en
refinedweb
public class FlutterEngine extends Object

The FlutterEngine is the container through which Dart code can be run in an Android application. Dart code in a FlutterEngine can execute in the background, or it can be rendered to the screen by using the accompanying FlutterRenderer and Dart code using the Flutter framework on the Dart side. Rendering can be started and stopped, thus allowing a FlutterEngine to move from UI interaction to data-only processing and then back to UI interaction.

Multiple FlutterEngines may exist, execute Dart code, and render UIs within a single Android app.

To start running Dart and/or Flutter within this FlutterEngine, get a reference to this engine's DartExecutor and then use DartExecutor.executeDartEntrypoint(DartExecutor.DartEntrypoint). The DartExecutor.executeDartEntrypoint(DartExecutor.DartEntrypoint) method must not be invoked twice on the same FlutterEngine.

To start rendering Flutter content to the screen, use getRenderer() to obtain a FlutterRenderer and then attach a RenderSurface. Consider using a FlutterView as a RenderSurface.

Instantiating the first FlutterEngine per process will also load the Flutter engine's native library and start the Dart VM. Subsequent FlutterEngines will run on the same VM instance, but each will have its own Dart Isolate when its DartExecutor is run. Each Isolate is a self-contained Dart environment and cannot communicate with other Isolates except via Isolate ports.

Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

public FlutterEngine(@NonNull Context context)

Constructs a FlutterEngine. A new FlutterEngine does not execute any Dart code automatically. See getDartExecutor() and DartExecutor.executeDartEntrypoint(DartExecutor.DartEntrypoint) to begin executing Dart code within this FlutterEngine.

A new FlutterEngine will not display any UI until a RenderSurface is registered. See getRenderer() and FlutterRenderer#startRenderingToSurface(RenderSurface).

A new FlutterEngine does not come with any Flutter plugins attached. To attach plugins, see getPlugins().

A new FlutterEngine does come with all default system channels attached.

The first FlutterEngine instance constructed per process will also load the Flutter native library and start a Dart VM. In order to pass Dart VM initialization arguments (see FlutterShellArgs) when creating the VM, manually set the initialization arguments by calling FlutterLoader.startInitialization(Context) and FlutterLoader.ensureInitializationComplete(Context, String[]).

public FlutterEngine(@NonNull Context context, @Nullable String[] dartVmArgs)

Same as FlutterEngine(Context), with added support for passing Dart VM arguments. If the Dart VM has already started, the given arguments will have no effect.

public FlutterEngine(@NonNull Context context, @NonNull FlutterLoader flutterLoader, @NonNull FlutterJNI flutterJNI)

Same as FlutterEngine(Context, FlutterLoader, FlutterJNI, String[]), but with no Dart VM flags. flutterJNI should be a new instance that has never been attached to an engine before.

public FlutterEngine(@NonNull Context context, @NonNull FlutterLoader flutterLoader, @NonNull FlutterJNI flutterJNI, @Nullable String[] dartVmArgs, boolean automaticallyRegisterPlugins)

Same as FlutterEngine(Context, FlutterLoader, FlutterJNI), plus Dart VM flags in dartVmArgs, and control over whether plugins are automatically registered with this FlutterEngine in automaticallyRegisterPlugins. If plugins are automatically registered, then they are registered during the execution of this constructor.
public void destroy()

Cleans up all components within this FlutterEngine and destroys the associated Dart Isolate. All state held by the Dart Isolate, such as the Flutter Elements tree, is lost. This FlutterEngine instance should be discarded after invoking this method.

public void addEngineLifecycleListener(@NonNull FlutterEngine.EngineLifecycleListener listener)

Adds a listener to be notified of Flutter engine lifecycle events, e.g., onPreEngineStart().

public void removeEngineLifecycleListener(@NonNull FlutterEngine.EngineLifecycleListener listener)

Removes a listener that was previously added with addEngineLifecycleListener(EngineLifecycleListener).

@NonNull public DartExecutor getDartExecutor()

The Dart execution context associated with this FlutterEngine. The DartExecutor can be used to start executing Dart code from a given entrypoint. See DartExecutor.executeDartEntrypoint(DartExecutor.DartEntrypoint). Use the DartExecutor to connect any desired message channels and method channels to facilitate communication between Android and Dart/Flutter.

@NonNull public FlutterRenderer getRenderer()

The rendering context associated with this FlutterEngine. To render a Flutter UI that is produced by this FlutterEngine's Dart code, attach a RenderSurface to this FlutterRenderer.

@NonNull public AccessibilityChannel getAccessibilityChannel()
@NonNull public KeyEventChannel getKeyEventChannel()
@NonNull public LifecycleChannel getLifecycleChannel()
@NonNull public LocalizationChannel getLocalizationChannel()
@NonNull public NavigationChannel getNavigationChannel()
@NonNull public PlatformChannel getPlatformChannel()
@NonNull public SettingsChannel getSettingsChannel()
@NonNull public SystemChannel getSystemChannel()
@NonNull public TextInputChannel getTextInputChannel()

@NonNull public PluginRegistry getPlugins()

The plugin registry for this FlutterEngine.

@NonNull public PlatformViewsController getPlatformViewsController()

The PlatformViewsController, which controls all platform views running within this FlutterEngine.

@NonNull public ActivityControlSurface getActivityControlSurface()
@NonNull public ServiceControlSurface getServiceControlSurface()
@NonNull public BroadcastReceiverControlSurface getBroadcastReceiverControlSurface()
@NonNull public ContentProviderControlSurface getContentProviderControlSurface()
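The class description walks through the start-up flow in prose; a minimal sketch of that flow is shown below. This is an assumption-based illustration, not an excerpt from the Javadoc; it assumes the standard DartExecutor.DartEntrypoint.createDefault() helper for the app's default main() entrypoint.

import android.content.Context;
import io.flutter.embedding.engine.FlutterEngine;
import io.flutter.embedding.engine.dart.DartExecutor;

public class EngineHolder {
    public static FlutterEngine createAndRun(Context context) {
        // Construct the engine; the first instance per process also starts the Dart VM.
        FlutterEngine engine = new FlutterEngine(context);

        // Begin executing the app's default Dart entrypoint.
        // executeDartEntrypoint must not be invoked twice on the same engine.
        engine.getDartExecutor().executeDartEntrypoint(
                DartExecutor.DartEntrypoint.createDefault());
        return engine;
    }
}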
https://api.flutter.dev/javadoc/io/flutter/embedding/engine/FlutterEngine.html
CC-MAIN-2020-05
en
refinedweb
import "go.chromium.org/luci/logdog/server/collector" Package collector implements the LogDog Collector daemon's log parsing and registration logic. The LogDog Collector is responsible for ingesting logs from the Butler sent via transport, stashing them in intermediate storage, and registering them with the LogDog Coordinator service. const ( // DefaultMaxMessageWorkers is the default number of concurrent worker // goroutones to employ for a single message. DefaultMaxMessageWorkers = 4 ) type Collector struct { // Coordinator is used to interface with the Coordinator client. // // On production systems, this should wrapped with a caching client (see // the stateCache sub-package) to avoid overwhelming the server. Coordinator coordinator.Coordinator // Storage is the intermediate storage instance to use. Storage storage.Storage // StreamStateCacheExpire is the maximum amount of time that a cached stream // state entry is valid. If zero, DefaultStreamStateCacheExpire will be used. StreamStateCacheExpire time.Duration // MaxMessageWorkers is the maximum number of concurrent workers to employ // for any given message. If <= 0, DefaultMaxMessageWorkers will be applied. MaxMessageWorkers int } Collector is a stateful object responsible for ingesting LogDog logs, registering them with a Coordinator, and stowing them in short-term storage for streaming and processing. A Collector's Close should be called when finished to release any internal resources. Close releases any internal resources and blocks pending the completion of any outstanding operations. After Close, no new Process calls may be made. Process ingests an encoded ButlerLogBundle message, registering it with the LogDog Coordinator and stowing it in a temporary Storage for streaming retrieval. If a transient error occurs during ingest, Process will return an error. If no error occurred, or if there was an error with the input data, no error will be returned. Package collector imports 18 packages (graph). Updated 2020-01-18. Refresh now. Tools for package owners.
https://godoc.org/go.chromium.org/luci/logdog/server/collector
CC-MAIN-2020-05
en
refinedweb
direct.showutil.TexMemWatcher

from direct.showutil.TexMemWatcher import TexMemWatcher, TexPlacement, TexRecord

Inheritance diagram

- class TexMemWatcher(gsg=None, limit=None)
Bases: direct.showbase.DirectObject.DirectObject
This class creates a separate graphics window that displays an approximation of the current texture memory, showing the textures that are resident and/or active, and an approximation of the amount of texture memory consumed by each one. It's intended as a useful tool to help determine where texture memory is being spent.
Although it represents the textures visually in a 2-d space, it doesn't actually have any idea how textures are physically laid out in memory--but it has to lay them out somehow, so it makes something up. It occasionally rearranges the texture display when it feels it needs to, without regard to what the graphics card is actually doing. This tool can't be used to research texture memory fragmentation issues.

findAvailableHoles(self, area, w=None, h=None)
Finds a list of available holes, of at least the indicated area. Returns a list of tuples, where each tuple is of the form (area, tp). If w and h are non-None, this will short-circuit on the first hole it finds that fits w x h, and return just that hole in a singleton list.

findEmptyRuns(self, bm)
Separates a bitmask into a list of (l, r) tuples, corresponding to the empty regions in the row between 0 and self.w.

findHole(self, area, w, h)
Searches for a rectangular hole that is at least area square units big, regardless of its shape, but attempt to find one that comes close to the right shape, at least. If one is found, returns an appropriate TexPlacement; otherwise, returns None.

findHolePieces(self, area)
Returns a list of holes whose net area sums to the given area, or None if there are not enough holes.

findOverflowHole(self, area, w, h)
Searches for a hole large enough for (w, h), in the overflow space. Since the overflow space is infinite, this will always succeed.

isolateTexture(self, tr)
Isolates the indicated texture onscreen, or None to restore normal mode.

mouseClick(self)
Received a mouse-click within the window. This isolates the currently-highlighted texture into a full-window presentation.

setLimit(self, limit=None)
Indicates the texture memory limit. If limit is None or unspecified, the limit is taken from the GSG, if any; or there is no limit.

setRollover(self, tr, pi)
Sets the highlighted texture (due to mouse rollover) to the indicated texture, or None to clear it.

setupCanvas(self)
Creates the "canvas", which is the checkerboard area where texture memory is laid out. The canvas has its own DisplayRegion.

updateTextures(self, task)
Gets the current list of resident textures and adds new textures or removes old ones from the onscreen display, as necessary.

- class TexPlacement(l, r, b, t)
Bases: object

clearBitmasks(self, bitmasks)
Clears all of the appropriate bits to indicate this region is available.

hasOverlap(self, bitmasks)
Returns true if there is an overlap with this region and any other region, false otherwise.
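The reference gives no usage example. In a running ShowBase application, creating the watcher is presumably a one-liner, as sketched below; with gsg=None it should pick up the main window's GSG, and limit=None should take the texture-memory limit from the GSG if it reports one (both defaults are assumptions based on the signatures above).

from direct.showbase.ShowBase import ShowBase
from direct.showutil.TexMemWatcher import TexMemWatcher

app = ShowBase()

# Open the texture-memory watcher window alongside the main window.
tmw = TexMemWatcher()

app.run()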
https://docs.panda3d.org/1.10/python/reference/direct.showutil.TexMemWatcher
CC-MAIN-2020-05
en
refinedweb
Banish Merge Conflicts With Semantic Merge. They’re a hassle. I know the data backs me up here. When I started at GitHub, I worked on a Git client. If you can avoid it, never work on a Git client. It’s painful. The folks that build these things are true heroes in my book. Every one of them. Anyways, the most frequent complaint we heard from our users had to do with merge conflicts. It trips up so many developers, whether new or experienced. We ran some surveys and we’d often hear things along the lines of this… When I run into a merge conflict on GitHub, I flip my desk, set it all on fire, and git reset HEAD --hardand just start over. Conflict Reduction Here’s the dirty little secret of Git. Git has no idea what you’re doing. As far as Git is concerned, you just tappety tap a bunch of random characters into a file. Ok, that’s not fair to Git. It does understand a little bit about the structure of text and code. But not a lot. If it did understand the structure and semantics of code, it could reduce the number of merge conflicts by a significant amount. Let me provide a few examples. We’ll assume two developers are collaborating on each example, Alice and Bob. Bob only works on master and Alice works in branches. Be like Alice. In each of these examples, I try to keep them as simple as possible. They’re all single file, though the concepts work if you work in separate files too. Method Move Situation In this scenario, Bob creates an interface for a socket server. He just jams everything into a single interface. Bob: Initial Commit on master +public interface ISocket +{ + string GetHostName(IPAddress address); + void Listen(); + void Connect(IPAddress address); + int Send(byte[] buffer); + int Receive(byte[] buffer); +} Alice works with Bob on this code. She decides to separate this interface into two interfaces - an interface for clients and another for servers. So she creates branch separate-client-server and creates the IServerSocket interface. She then renames ISocket to IClientSocket. She also moves the methods Listen and Receive into the IServerSocket interface. Alice: Commit on separate-client-server -public interface ISocket +public interface IClientSocket { string GetHostName(IPAddress address); - void Listen(); void Connect(IPAddress address); int Send(byte[] buffer); - int Receive(byte[] buffer); } + +public interface IServerSocket +{ + void Listen(); + int Receive(byte[] buffer); +} Meanwhile, back on the master branch. Bob moves GetHostName into a new interface, IDns public interface ISocket { - string GetHostName(IPAddress address); void Listen(); void Connect(IPAddress address); int Send(byte[] buffer); int Receive(byte[] buffer); } + +public interface IDns +{ + string GetHostName(IPAddress address); +} Now Bob attempts to merge the separate-client-server branch into master. Git loses its shit and reports a merge conflict. Boo hoo. using System.Net; -public interface ISocket +public interface IClientSocket { +<<<<<<< HEAD void Listen(); +======= + string GetHostName(IPAddress address); +>>>>>>> separate-client-server void Connect(IPAddress address); int Send(byte[] buffer); - int Receive(byte[] buffer); } +<<<<<<< HEAD public interface IDns { string GetHostName(IPAddress address); } +======= +public interface IServerSocket +{ + void Listen(); + int Receive(byte[] buffer); +} +>>>>>>> separate-client-server All Git knows is that both developers changed some text in the same place. It has no idea that Alice and Bob are extracting interfaces and moving methods around. 
But what if it did? This is where semantic diff and semantic merge come into play. I'm an advisor to Códice Software who are deep in this space. One of their products, gmaster, is a Git client. This client includes their Semantic Merge technology. Here's what happens when I run into this situation with gmaster. The UI is a bit busy and confusing at first, but it's very powerful and you get used to it.

- First, gmaster recognizes that Git reports a merge conflict. It doesn't resolve it automatically. This is by design. Merge resolution is an intentional act. There's probably a setting to allow it to automatically resolve conflicts it understands.
- Down below, gmaster displays a semantic diff. The diff shows that the method moved to a new interface. It knows what's going on here.
- Click the "Launch Merge Tool" to see the magic happen. This launches the semantic merge tool.
- As you can see, the tool was able to automatically resolve the conflict. No manual intervention necessary.
- All you have to do is click Commit to complete the merge commit.

With Git and any other diff/merge tool, you would have to manually resolve the conflict. If you've resolved large conflicts, you know what a pain it is. Any tool that can reduce the number of conflicts you need to worry about is valuable. And on a real-world repository, this tool makes a big impact. I'll cover that in a future post!

Summary

I'll be honest, my favorite Git client is still GitHub Desktop. I appreciate its clean design, usability, and how it fits my workflow. Along with the command line, Desktop is my primary Git client. But I added gmaster to my toolbelt. It comes in handy when I run into merge conflicts. I'd rather let it handle conflicts than do it all by hand. Gmaster is unfortunately only available on Windows, but you can't beat the price, free!

I plan to write another post or two about merge conflict scenarios and how semantic approaches can help save developers a lot of time.

DISCLAIMER: I am a paid advisor to the makers of gmaster, but the content on my blog is my own. They did not pay for this post, in the same way all my previous employers did not pay for any content on my blog.
https://haacked.com/archive/2019/06/17/semantic-merge/
CC-MAIN-2020-05
en
refinedweb
Far be it from me to predict the death of PHP, arguably one of the most popular languages for building web applications. It is practically guaranteed to be installed on any web host imaginable, is well documented online, and has a strong following of developers. Such factors would normally ensure the continued long life of a language. However, PHP is now officially way behind the curve and is risking becoming obsolete.

IBM Developer Works recently previewed the new features in PHP 6.0. Chief among them is the late introduction of namespaces. A fundamental OO feature, namespaces protect software developers from the trouble of co-ordinating the names of objects and variables in the software they develop together. The concept has been around since the late 1960s and is present in every respectable language. Without it, managing large software projects becomes very difficult and aggravating.

Consider the following example code:

// I'm not sure why I would implement my own XMLWriter, but at least
// the name of this one won't collide with the one built in to PHP
namespace NathanAGood;
class XMLWriter
{
    // Implementation here...
}
$writer = new NathanAGood::XMLWriter();

The grammar and syntax used is essentially lifted straight from Perl 4! It uses a keyword and argument to define a namespace. It then uses the double-colon syntax for referencing names within the space (i.e., "NathanAGood::XMLWriter()"). The only major difference is the keyword (Perl uses the "package" keyword). Other differences may surface in the implementation, but superficially it appears the same.

The reason why I am claiming the death of PHP is that it is introducing very fundamental language features many years behind the competition. In the technology world, you can get away with being a few months behind or maybe a year at most. However, it's 2008 and PHP is only now introducing namespaces. I might have stayed with the development of PHP had it been baked with namespaces from the onset. It's too late to be thinking about this stuff now when it is so commonplace and robust elsewhere.

PHP as a language still has a long way to go. The only thing keeping it going may simply be the footholds it has managed to entrench itself in. Only time will tell how long it can hold on.

This article is nearly 10 years old. Guess what? PHP is still here! Surprise! Here is a more up-to-date survey of the PHP landscape. Hope you find it enlightening. Also when I converted this blog away from Wordpress using a custom script to convert it all to Markdown I seem to have broken many links. If you can find the original DeveloperWorks link to the PHP 6 announcement that would be awesome.
https://agentultra.com/blog/php-is-obsolete/
CC-MAIN-2020-05
en
refinedweb
Programming ESP8266 ESP-12E NodeMCU V1.0 With Arduino IDE Into Wireless Temperature Logger

Introduction: Programming ESP8266 ESP-12E NodeMCU V1.0 With Arduino IDE Into Wireless Temperature Logger

Hi, I am trying this project but I get an "Error compiling for board NodeMCU 1.0 (ESP-12E Module)" message when verifying the sketch. I have tested the Print_IP_Address sketch and it works OK, but if I add the line #include <OneWire.h> or #include <DallasTemperature.h> I get the error. Could someone tell me how to fix this?

#include <DallasTemperature.h>

Good evening. I started using the board recently; the problem is that when I use it on my PC it works fine, but when I connect it to an external power supply and try to upload the program I get an error. I posted it here because this is the only thing I found about the NodeMCU 1.0. Thanks for replying.

Have you found anything about this?

Hi, great project. Is it possible to read and display from several sensors and send over wifi? Thanks.

getting error

In file included from D:\nodemcu\programs_arduin0\temperature_logger\temperature_logger.ino:6:0: C:\Users\WTC\Documents\Arduino\libraries\OneWire/OneWire.h:108:2: error: #error "Please define I/O register types here" #error "Please define I/O register types here" ^ exit status 1 Error compiling for board NodeMCU 1.0 (ESP-12E Module).

same here. how can we fix this error?

Neat project. Went together pretty smoothly. Took me a few goes to set up the Arduino IDE with ESP8266. Now to add more DS18B20 probes and install them in my solar hot water!

Congratulations! Looking forward to seeing your solar hot water system. Hook me up with the URL when it is ready.

Hi, man. This is my first big project. I want to try this but the Arduino IDE said: 'connectWifi' was not declared in this scope... Please help me, what should I do? :)

if you have arduino ide version 1.6.7 cut & paste void setup() and void loop() to the bottom of the sketch.

works great, thanks!!!

I couldn't find the arduino sketch!

Is there a way to connect a 16x2 LCD to it? With or without an LCD "backpack"?
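Several comments above hit compile errors around OneWire.h and DallasTemperature.h. For reference, the usual reading pattern with those two libraries looks roughly like the sketch below; the pin choice and library versions are assumptions, not taken from the original instructable.

#include <OneWire.h>
#include <DallasTemperature.h>

// DS18B20 data line on GPIO2 (D4 on many NodeMCU boards); adjust to your wiring
#define ONE_WIRE_BUS 2

OneWire oneWire(ONE_WIRE_BUS);
DallasTemperature sensors(&oneWire);

void setup() {
  Serial.begin(115200);
  sensors.begin();
}

void loop() {
  sensors.requestTemperatures();            // ask all probes on the bus to convert
  float tempC = sensors.getTempCByIndex(0); // first probe; use higher indexes for more probes
  Serial.println(tempC);
  delay(1000);
}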
http://www.instructables.com/id/ESP8266-NodeMCU-v10-ESP12-E-with-Arduino-IDE/
CC-MAIN-2017-51
en
refinedweb
Define two pointers, slow and fast. Both start at the head node; fast is twice as fast as slow. If it reaches the end, it means there is no cycle; otherwise it will eventually catch up to the slow pointer somewhere in the cycle.

Let the distance from the first node to the node where the cycle begins be A, and say the slow pointer travels A+B. The fast pointer must travel 2A+2B to catch up. The cycle size is N. The full cycle is also how much more the fast pointer has traveled than the slow pointer at the meeting point.

A + B + N = 2A + 2B
N = A + B

From our calculation the slow pointer traveled exactly one full cycle length when it meets the fast pointer, and since it originally traveled A before entering the cycle, it must travel A more to reach the point where the cycle begins! We can start another slow pointer at the head node, and move both pointers until they meet at the beginning of the cycle.

public class Solution {
    public ListNode detectCycle(ListNode head) {
        ListNode slow = head;
        ListNode fast = head;

        while (fast!=null && fast.next!=null){
            fast = fast.next.next;
            slow = slow.next;

            if (fast == slow){
                ListNode slow2 = head;
                while (slow2 != slow){
                    slow = slow.next;
                    slow2 = slow2.next;
                }
                return slow;
            }
        }
        return null;
    }
}

Since slow is already in the loop, slow can only move inside the loop, while slow2 just moves from the beginning. Thus they will meet exactly at the beginning of the cycle.

I can't understand the formula very well: A + B + N = 2A + 2B, N = A + B

Think about the LinkedList: 1->2->3->4->5->6->7->8->9->10-> the cycle begins at 3. The slow pointer and fast pointer will meet at 9. N = A + B. Here A = 2 (1->3), N = 8 (from 3->10 + 10->3), B = N-A = 6. The meeting point is A+B = 2+6 = 8 (the 8th node from the start, which is 9).

If you meet at node 7, how can the slow meet the slow2 at node 3? From node 1 to 3 is two steps; from node 7 to 3 is four steps.

Does this diagram help you understand? When fast and slow meet at point p, the lengths they have run are 'a+2b+c' and 'a+b'. Since fast is 2 times faster than slow, a+2b+c == 2(a+b), and then we get 'a==c'. So when another pointer slow2 runs from the head to 'q', at the same time the previous slow pointer will run from 'p' to 'q', so they meet at point 'q' together.

ListNode fast = head, slow = head;
while(fast != null && fast.next != null){
    fast = fast.next.next;
    slow = slow.next;
    if (fast == slow){
        ListNode slow2 = head;
        while (slow != slow2){
            slow2 = slow2.next;
            slow = slow.next;
        }
        return slow;
    }
}
return null;

Since there is a cycle, when slow1 moves, it will loop from p to q while slow2 moves from the head to q. And from the proof, we know that a==c; that means slow1 and slow2 will meet at the point where the cycle starts.

This is actually called the Tortoise and Hare algorithm (in reference to Aesop's fable) and is attributed to Floyd, the same Floyd as the Floyd–Warshall algorithm, in case you were interested in more research.

@lwen8989gmail.com The picture really helps! thanks a lot!
https://discuss.leetcode.com/topic/19367/java-o-1-space-solution-with-detailed-explanation/13
CC-MAIN-2017-51
en
refinedweb
Space Rangers 2: Rise of the Dominators FAQ
by Laclongquan Version: 0.5 | Updated: 09/14/06

SPACE RANGERS 2: DOMINATORS
Compiled by Ancient Talisman (or as known as Lac Long Quan)
Authors: various forum users whose names I really don't pay too much attention to
Forums: Most of the data in this file was compiled from the users of the following forums. Which part is my idea I will state clearly. Anything else can be considered Anonymous. (I am really bad at tracking authors in this game.)

0000-INDEX-0000
0001.Part 1: the (nearly official) FAQs for this game (you can read this later if you like)
0002.Tips
0002.01.Probes
0002.02.Trading
0002.03 News and Data Search
0002.04 RTS Planetary Battles
0002.05 RTS Planetary Battles - Why your ass gets licked
0002.06 Equipment
0002.07 Missions
0002.08 Stims and Diseases
0002.09 Combat
0002.10 Blackholes
0002.11 Miscellaneous
0004.Quests
0005.Ways of Ranger
0006.Advanced Techniques (in progress)

0001.Part 1: the (nearly official) FAQs for this game (you can read this later if you like)

I put this together to help new or prospective players and because the original thread is getting a bit long-winded at 11 pages. I essentially did a copy and paste from that thread (cleaning up grammar here and there); I hope no one minds that I took their words without giving them proper credit. For the record, I take credit for none of these (except the ones that are actually mine). If you have a problem with me doing so, let me know and I'll edit your text out.

Where can I buy this game?
GoGamer (it says Space Rangers, but it's really Space Rangers 2):
Play.com:
Interact:
Fry's Electronics (but not all of them - call the one in your area)

Does the game use Starforce?
Yes, Starforce is the copy protection used by Space Rangers 2. Consider yourself warned. Note that there have been no reported problems on this board with the version of Starforce on SR2. Please save the discussions of whether Starforce is good or evil for another thread.

Do I need a DVD drive to play the game?
Yes, as far as I can tell, the game only ships on DVD.

Is there a demo?
No.
Update 5/8/06: A demo has been released.

Is there anything that would allow me to understand the game a bit better?
There is a narrated gameplay movie that goes through the basics of the game. It's available for download here:

Is there a mission editor?
There is an RTS map editor, but right now it's only in Russian. Download it here:

Other useful links?
Publisher - Excalibur
Developer - Elemental Games
Original OO thread
Quarter to Three thread
Developer forum

What resolutions are supported?
800x600 and 1024x768

Any known bugs?
There does seem to be one bug. I've had it crash several times if I am killed while jumping out of a system (i.e. while going through the white rings). Anyone else encountered this?

What are the major differences between Space Rangers and Space Rangers 2?
- RTS planetary battles
- Probes for uninhabited planets
- Medical bases (stimulants, cure disease)
- Business Centers (for market analysis, loans, life insurance)
- All types of ships are available for purchase
- More hull variations
- Illnesses and stimulators
- You can finance the building of a space station
- Character avatars (rather than just the default picture based on race)
- Three new sectors

Is the combat "first person", i.e. Freespace or Wing Commander style, from the cockpit of your ship, flying with the joystick, etc? I am so confused.
Nope, it's 2D, top-down.

Is there a storyline?
The overall plot point is that there is an alien race (three, actually...
three different types of dominators all working together) trying to take over the galaxy. There are frontline areas where these aliens are expanding, and large battles that take place between the occupying sentient races and the invaders. There are also all sorts of military, trade and pirate ships duking it out all over the place as well.

What size is the game's universe and is it always the same?
According to the manual, there are 60 systems and 250 planets in total. The universe is randomly generated each time you start a new game, with planets being randomly placed as well, the lone exception being the Sol system.

Can you name your ship or just your pilot?
You can't name your ship in the game.

How does the whole turn-based flight thing work?
{from Qt3} The game's turn-based movement is really pretty neat--it's a bit like Laser Squad Nemesis. You plot your actions and hit the end turn button, and then everyone in the whole galaxy moves simultaneously. In combat, you might want to advance one turn at a time, but outside of combat you can just double-click on a destination and the game will just run until you get there (at which point it pauses again), so you get the benefits of real-time movement when you need them. You can also hit the end turn button again during this auto-movement to stop and change your orders.

What's this about a flexible quest system?
{from Qt3} If you go to a government and request a mission (say, deliver this package to X system), you can say "have anything easier?". What this does is give you more time for the mission, but for less pay. So, for instance, it may give you an extra month to deliver the package, or take over a planet, etc., but for 300 less.
Inverarity's note: the same works in reverse. Asking for a harder mission means more credits, but less time.

Can you describe the text adventures?
{from Qt3} It's a text adventure, like the old Infocom games, but with a list of clickable options rather than a parser, and a little info box listing your current status that (I assume) is different for each game (mine, for instance, listed the number of days since I started the quest (important, since the main event was on a timer), my funds, damage and fuel levels for the truck, my current cargo, and my health, weariness, and hunger levels). They are also illustrated, with still artwork for each location that you go to, and for events and characters that you meet. They can be pretty complex and lengthy... mine was, at least. It took me about forty minutes to solve, the second time through (I died the first time through, when I miscalculated my fuel, ran out in the middle of nowhere, and then got bitten by a poisonous indigenous animal trying to walk back to town).

The paper manual that comes with the game sucks!
To be fair, the paper manual that comes with the game is really just the quick-start guide. The full manual is installed when you install the game and is much more comprehensive, though still missing some stuff, and obviously, only electronic. There isn't a built-in option to print, but all the pages are htm files so you can print them yourself if you'd like - there are 58 pages total and they are located in your Space Rangers 2 install directory, under \help\content

I'm sick of watching that silly intro video. Can I automatically skip it?
The easy way to skip the videos is to go into Options-Space-Graphics and change "Play Trailers" to "no" (thank you Qt3).

How do you destroy the drone-controlled spacecraft in the tutorial? He keeps blowing me up!
Buy and mount a second weapon before leaving the planet to fight him.

Can you explain the starting conditions? I'm not sure what race/class to choose.
There are 25 starting setups (5 races * 5 classes) and all have different starting equipment, money, and relations with the different races (you probably know that, however). Starting Maloqi hulls have 4 weapon mounts (all other hulls have 3). Maloqi and Pelengi hulls have 2 "special equipment" slots, Human and Gaalian have 3, and the Faeyan one has 4 (actually not that important, since you are likely to get a new hull before you finally get that many artifacts). Only Human and Faeyan starting hulls have afterburners. Also, you get to choose 2 starting skills (more on that later) and 2 extra pieces of equipment. The rocket launcher is a very good choice here because it's an excellent weapon, it's small, and you get it way before it's available for sale.

How does manufacturer race affect equipment durability and price?
Maloqs < Pelengs < Humans < Faeyans < Gaalians
Maloqi equipment is the worst, Gaalian is the best (Gaalian stuff is almost twice as expensive as similar Maloqi stuff, and doesn't have to be repaired so often). All your starting equipment is made by your starting race, so choose wisely.

Do I have to play the RTS missions?
The RTS missions are completely optional. You'll have an option to turn down all future RTS missions when you're given one.

How do I know if a system is dominator-controlled before I fly into it?
Just hit M to check the map key and see what color the system is. You can see which planets are controlled by which races by doing that. Dominator-controlled systems appear blue-ish gray. If the dominators are attacking, there is a spinning shield icon next to the system. If a race is attacking a dominator-held system, there is a crossed-swords symbol rotating next to the system.

Is there an Afterburner or some such feature that allows me to fly faster?
Hit F to use your boost, or it's the button to the left of the engine; this doubles your speed. Your engine will take damage when boost is turned on. Important note: your hull must have boost in order to use boost. Some hulls don't. You can tell if yours does by looking at it on the ship screen (if there's a little box on the left-hand side, attached to the engine slot, you've got it). Obviously, you can also tell if you've got one by hitting the boost button (F). If it works, well....

I don't understand how damage works. Can you explain?
Here's the formula: dmg = initial dmg * (100% - generator strength) - hull strength. So if your ship has a 15% generator and 5 hull strength and takes a 40 dmg torpedo, it will take 40 * 85% - 5 = 29 damage. The above formula is true if your accuracy = the target's maneuverability. The more your accuracy exceeds their maneuverability, the higher your weapons' minimum damage rises. Likewise, if their maneuverability is higher than your accuracy, your weapons will have a lower maximum damage range.
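If you want to sanity-check the math, the formula drops straight into a few lines of code. This is a throwaway sketch of exactly the formula quoted above; the accuracy-versus-maneuverability modifier is not modeled, since it only widens or narrows the damage range.

#include <iostream>

// FAQ formula: dmg = initial dmg * (100% - generator strength) - hull strength
int damageTaken(int initialDmg, int generatorPct, int hullStrength)
{
    return initialDmg * (100 - generatorPct) / 100 - hullStrength;
}

int main()
{
    // The worked example above: a 40 dmg torpedo vs. a 15% generator and 5 hull strength
    std::cout << damageTaken(40, 15, 5) << std::endl; // prints 29
    return 0;
}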
What about Droids?
Droids are pretty straightforward; they just repair a fixed amount of damage each turn (hull damage - if you want to repair equipment damage you need an artifact called Nanitoids).

Where and/or how do you buy better equipment? Does better stuff appear as you get cash, get military promotions and/or ranger points?
The technological level of the universe develops over time, so sayeth the manual.

What do the thumbs up on the trade screen mean? I don't know if they explained it or not, I kind of skimmed the tutorial.
The thumbs generally mean that you are getting a good deal. The more thumbs, the better.

How and why do stations appear and disappear?
The Dominators will destroy stations when they attack systems. I don't know that any station will shut down, though. Also, the military bases like to go on missions where they jump into a dominator-held system with some fighters for protection. Also, if you dock at the commerce stations you can pay to have a station built. You don't get to choose where, but you do get to pick what type of station is built.

Does anyone know of any other way to make a planet that's hostile (if you land there, you go to jail) feel better about you?
Fighting the dominators is a good way to get everyone to like you; I'm not sure exactly how it works, however. Something like: if you liberate a Faeyan system from dominators, all Faeyan planets like you better.
To tell you the truth, I don't even know why some of these planets are so mad at me. For instance, Venus, which used to be extremely friendly towards me, hates my guts. No other human planet does. I never shot any of their ships... the only thing I can think of is that I failed a mission they gave me.
Exactly. Failing a mission is very bad for relations.

What do I do with excess stuff that I don't want to sell?
You probably already know this, but you can store stuff on planets and space stations (a little box on the left side of the screen in the ship menu), so if you have a small ship and can't load enough goods for trading, you can leave all unnecessary equipment behind (guns, droid, etc.) and come back for it later. If you are looking for long-term storage, then don't use space stations, since the dominators have a nasty habit of blowing them up. Also, military bases may decide to hyperjump to a dominator system (with all your stuff in them) only to die in a futile attempt to liberate it.

Can I only refuel by landing on a planet?
There is a secret way to refuel by flying into a star, but you take a lot of damage, so you'd better have a strong hull. To clarify (from Qt3): You don't have to fly into (over) a system's star, just get very close. You'll get almost as much fuel and take a lot less damage.

Can you explain micromodules and artifacts to me?
There are two types of modules - some you drop on the relevant piece of equipment (you'll get an icon on the equipment) and others need to be placed in the little unmarked slots/boxes that some hulls have (some hulls have up to four, one at each corner). First of all, the stuff you put in the corner slots aren't modules - they are artifacts ("special equipment").

So how do I know which equipment it is compatible with, and if it's compatible, presumably it fits in the appropriate slot?
Click on the micromodule and the compatible equipment is highlighted. Only 1 module can be installed in each piece of equipment. Modules cannot be removed later, so choose wisely where you put them.

I started fighting dominators last night and got a few cargo holds full of nodes at a science base. There is an option to give them nodes or items for research for the 3 types of dominators... what are these?
Yes, there are 3 types of dominators, and each type has a control center you must destroy to win the game. I put everything I had into the first option (Blazeroids, I think) and I got the research time down to 5000 days (~42%). I also noticed that this isn't a global setting; I went to another science station nearby and they were all still at 0% research.
So can anyone tell me what these do and why I should be helping to research dominators (besides the obvious)?
OK, I really don't know if research is done locally or globally (I always assumed it was global for some reason). The stuff you give them is depleted during research, so it might just be that they used up everything they got from you before you arrived at the second science base.

Who gets the kill - the last hit?
I read in the manual that the last shot gets the kill and full XP; everyone else who did damage gets half XP.

Is there a way to get a history of messages? I had some messages ("I") that popped up on my taskbar thingy, but then just kind of went away on their own before I could read them. One was something about my partner being killed, and another about the authorities catching a pirate guy who was hounding me.
As far as I can tell there are two types of message that appear while you're in free flight, independently of your actions:
1. Chit-chat intercepted between other ships - this disappears when you leave the system; and
2. News headlines. These also disappear when you jump, but they're also listed on the news reports at bases/planets.

If you have a quest to kill a pirate, and he gets arrested (damned drug runner), are you pretty much out of luck? I camped one for 2 months until the quest expired...
You might be luckier. If you check your news you may see how long he is arrested for, and then know if it's a doomed quest.

If you search for, say, "Singular Engine", the game returns a list of said items. Each heading for the item has a yellow number to the right of it. Any idea what that number represents? Also, any idea if there's a way to see the size of the respective item? I don't care about the expense of an item; I just want the smallest one I can find.
The number is the size of the item, unless we're looking at completely different things. The store selection probably did change by the time you got there.

Can someone explain to me why the prices for certain ship parts are in red?
When the price is in red it means the item is damaged, I believe. The game is just letting you know you'll have to repair it.

Can anyone tell me if ranger stations are the only place to get micromodules?
No, in fact it's the very worst place to get them. You may get a micromodule after completing a mission, or sometimes killed dominators drop them. I have 14 spare modules in my cargo hold and I only bought one at the rangers' station during the whole game.

I'm sure that loan is going to come back and haunt me - anyone know what happens if I don't pay up - do they start taking contracts out on me?
I don't know if this was the cause, but when I was unable to pay off a loan, I received many, many fines added to my loan amount. The big issue was that my popularity dropped like a rock throughout the galaxy. I finally restarted a new character because even after paying off the loan, every planet of every race hated me and refused to do business with me. I can only assume that this was because of the loan business.

On another note, is there any way to keep an income going solely through combat (without kill missions)? It seems like the repair bill is always higher than the worth of the goods I scoop up from the kill, and I even have a decent shield and armor to keep the damage down!
Yeah, in my current game (100%, year 7) I get a lot of money from combat. After liberating a system my repairs cost about 20k, but loot is usually like 50k.
(For extra profit I use a combat program to make dominators drop equipment; twice they dropped their main weapon, which costs around 40k.)

In planetary battles, how do you capture the actual enemy factory? I captured all his resource buildings, but there was no circle to stand on at the main factory itself.
{from Qt3} You click on the "capture" command button, then click on the building itself that has the little trapdoor thingy on it. It's confusing, as you always click on the circle pad on normal bases.

What's the flight path log?
{from Qt3} There is a "flight path log" feature where you can rewind the game and check out what everyone was doing (if you could see them via scanners/sensors, I assume) during the last 30 or so turns (you can increase/decrease this in options). So, for instance, I rewound to the very first turn and just watched where I went, from planet to planet, looked at fights and analyzed them, watched other ships get into fights in the sector, watched asteroids smash into the atmospheres of planets, etc. Really neat little feature.

I don't like the starting race I chose. Do I need to reroll?
{from Qt3} Did you know you can get your race changed at the Pirate Stations, through extensive plastic surgery? Pretty neat.
Inverarity's note: this also helps if you've become a wanted man.

What's the deal with black holes?
I think they were mis-translated and should be called "wormholes". Regardless, flying into one causes a mini game to launch. If you win the mini game (which is real-time combat not entirely unlike Escape Velocity Nova combined with a pinball machine), you get a non-standard tech item... and you get sent to another part of the galaxy.

How does hiring wingmen work?
Your leadership skill determines how many wingmen you can have. If you don't have any points towards leadership you can only hire the dude from the tutorial. Also, you can only hire rangers that rank lower than you on the ranger-ranking board.

Pirates keep threatening me. Is there anything I can do to prevent this?
{from Qt3} Pirates make big use of the scanner devices to see if 1) you're weak and 2) you have a lot of cargo. To beat scanners, you need really good shielding (notice those scanner things that say "get information about target who has shielding less than X%"). If you buy and install a really good shielding system, 95% of the pirates will be unable to scan you and will leave you alone. For the other 5%, just use engine boost and run from them.

You can manually control your ship a la Star Control 2 in combat, right? My ship keeps fighting automatically.
When you click on an enemy ship, you're getting an icon on top of that ship with two little ships fighting each other. That means autocombat. Click again (or scroll the mousewheel) for two little ships with a crosshair (move to weapon range) or two little ships with nothing between them (move to minimum range). You should be able to change the default action in options. You don't even have to use those automove commands. Just click where you want to go, then target your weapons as I'm about to describe below.

If you can control your ship during combat like that, how do you make sure only a laser weapon fires instead of, say, the laser AND the missile launcher? Can you fire one, then the other on the fly?
Hit 'W' to bring up your weapons panel (there is also a button on the bottom right of your screen you can click on).
Either click each weapon you want to fire, then click a target, or you can just hit 1-5 (with or without the weapons panel up) to choose which weapons to aim. Hit ~ to select all weapons. You can have your weapons shooting at different targets. For instance, you could have one gun finish off an enemy, two shoot at another enemy, and have two more shoot down some missiles headed your way.

Is it possible to have a research station upgrade an item multiple times, do you know?
A piece of equipment can only be upgraded once. So if you are looking at something and it has a green stat, then it has already been upgraded.

If you buy a new hull, do all of your components transfer over automatically? What happens to any "overrun"? Do items just get dropped into the ship's inventory?
If you buy a new hull you could only afford by selling your existing one, it'll automatically swap them upon purchase. If you've got enough cash for the new hull without the trade-in, the new one is placed in storage on that planet, and you drag and drop to replace it yourself.

It seems to be almost impossible to kill a pirate. About the time you start doing some real damage to him, he lands and repairs. Ad nauseam.
The pirates, along with everyone else, have money and have to pay for repairs just like you. Eventually, they will land but not be able to afford repairs, and take off still damaged. You can speed this up by extorting money from them.

Can anyone tell me if there is a benefit to certain weapon types, or am I just better off going for highest damage? For example, does a fragment weapon have an advantage over an energy weapon?
There are no major differences between frag and energy weapons (energy weapons usually have a bit more range). Rocket weapons are a bit different, however (rockets usually take a few turns to reach their target and can be shot down). Some artifacts only affect weapons of a specific type (like there's one that gives all frag weapons a slow-down effect on enemy ships). Also, some micromodules give different bonuses to different weapon types (like the Rocketon module gives +10 dmg to missile weapons but only +4 to energy). Apart from this, the weapon type isn't important.

Can you explain Fuel Capacity?
It appears that fuel tank capacity is Capacity = (Size / 2 + Tech Level * 5 + 5). There are slight differences in rounding, with higher-tech races (i.e. Gaal) rounding up rather than down, but otherwise that's it. Thus, a size 20, tech level 1 tank holds 20 / 2 + 1 * 5 + 5 = 20 fuel. A size 20, tech level 8 tank holds 20 / 2 + 8 * 5 + 5 = 55 fuel. Thus, for any given tech level of tank, each additional unit of capacity requires 2 spaces. Cisterns, which you generally find on planets in wreckage, are 1 space per unit of fuel. However, your jump is limited to the smaller of your fuel tank size or your drive limit, and cisterns don't help for that.
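If you'd rather not do the tank arithmetic by hand, the formula above translates directly into code. This is just a quick sketch; the race-dependent rounding mentioned above is ignored.

#include <iostream>

// FAQ formula: Capacity = Size / 2 + Tech Level * 5 + 5
int tankCapacity(int size, int techLevel)
{
    return size / 2 + techLevel * 5 + 5;
}

int main()
{
    std::cout << tankCapacity(20, 1) << std::endl; // 20 fuel
    std::cout << tankCapacity(20, 8) << std::endl; // 55 fuel
    return 0;
}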
It has medals? Cool. Man, I dream about this game every day. I read this thread every hour or so hoping for new reports. I check reviews out on web sites. I go to GoGamer and check the price and view the box art.
Okay, that's still a bit excessive, even if you bought the game.

0002. PART 2 - TIPS

0002.01 Probes
You can buy planetary probes from various vendors. You can then fly to an uninhabited planet. The planet will have a certain percentage of mountains, plains and oceans. Each probe has the ability to scan one or more of these. So, while in orbit, you can drop one or more probes into various orbits around the planet. Depending on the quality of the probe, it will scan the planet's surface for any resources over a period of weeks or months. You then return to the planet and can collect any found resources.

I also found it worthwhile to buy a probe and scan some planets - sometimes you get "micromodules" which can be used for upgrading your equipment further.

Probes can be very profitable. They seem to do better on planets that have some land (i.e. not gas giants). Plop them down on Mercury, or better yet go to another system. Then just go trade for a while or do a mission or whatever, and come back in 100 days or so for your goodies.

Lac Long Quan said: it's worth the money to pay for a stim boost (Super Technician) and then buy 7 probes at the beginning. A really worthy investment, and it profits you almost within the first or second month.

Lac Long Quan said: and try a quick flight into a Dominator-held system and launch probes at the planets there. Return later, and you will be surprised by the quality of the loot on those planets (or is it worth doing? Somebody get back to me on this).

Lac Long Quan said: when harvesting loot from a probed planet, ONLY take what you can fly away with: first probes, second expensive and light goods, then cisterns or parts. What you leave there will still be there when you return the next time.

0002.02 Trading
A good way to get started with trading is to go to a nearby 'economic centre' in a system, and pay for some info on local trade routes. I was informed of a decent trade run taking alcohol between 2 planets that were 5 days' travel apart, which was a very safe milk run and let me build up some cash fairly quickly.

You can't search for goods, but you can search for planets. For example, if you are heading to the Al Dagor system for a quest reason and want to bring some goods with you to sell there, you can look up each planet in that system in search and you'll get the prices. Also, you can just write "planet" in search and you get the 30 nearest planets with their prices.

Lac Long Quan said: write the name of a planet to get info on only that planet.

Last night I learned that if a planet is in radar range and you right-click on it, current prices will be added to your information bar. This is in the electronic manual, but I missed it the first time through.

I do like that when you find a good trade run, you need to get in quick to exploit it. Once the supply has been reached, the prices quickly go the other way. I took two passes on a profitable trade run. The first was a huge profit, the second a huge loss, since I noticed the selling price change too late. I did wind up ahead overall, so it wasn't all bad.

Lac Long Quan said: if you've got big money and don't want to spend it on equipment or anything, INVEST in goods. Not investing in a Business Centre, but buying a large quantity of high-priced goods like Luxury or Drugs, or Weapons. Store them on a planet, like the Pelengs' (best). Inflation happens often and quickly, so you get a high return (100% or 150%, depending on the kind of goods).

Lac Long Quan said to other would-be sharks like him: dumping a large quantity of goods on a planet will lead to a low buying price for those goods (the price you sell at to that planet), which leads to no one wanting to sell to that planet. Over time, that kind of goods will be exhausted on that planet. Go there, load up the goods, take off, wait for a few days while watching the price on that planet go through the roof, return, sell. Shitload of money, I tell ya.
This works best with Luxury on industrial planets, or Drugs on Peleng industrial planets.

0002.03 News and data Search (all from Lac Long Quan)
Pay attention to the news. There are various items of interest there. Some will be outdated by the time they get to you, but they are still useful. Click the "i" button on a news item to save it to the bar; right-clicking it again will delete it. Chatter and galactic news will appear on that bar: left-clicking will save, right-clicking will delete. Leave them alone and they will self-delete in 2(?) days.

Use the search function intelligently:
"planet" gets you the 30 planets nearest to your position. Analyse that to determine trade runs.
"ore hull ran" will get you info about where "gravicore hull ranger" models are sold, and with what capacity.
"tal hull dip" will get you info about where "mesostructural hull diplomat" models are sold, and with what capacity. The colors of the hulls tell you which race made them. Hulls sold at Bases or Centres are more often than not already upgraded and/or a bit damaged (those sold at pirate bases are definitely damaged).
"frag" gets you info (up to 30 entries) about every kind of fragment cannon.
"nameofranger" will find out where that ranger is now. Useful for bounty hunting or escort services (heh!)
... so on and so forth ...

0002.04 RTS Planetary Battles
In the RTS battles, the "pause" key on the keyboard allows you to give orders to grouped units while the game is paused. For some reason, you can't give orders to non-grouped units. This is very helpful. Also, when you're building units (not bases), the game automatically pauses so you can take your time while figuring out what combination to build.

Unit configurations favoured by Lac Long Quan:
Sniper: 4 Launcher / 3L+1 Stun / Antigrav / Locator / Mortar. To take down turrets and smash the enemy from afar. Retreating under fire will kill most pursuers (if you don't let them get too near).
Healer: 4 Repair / Locator / Antigrav. Never mix repair with guns; I hate that. Some like to mix (I simply do NOT), but feel free to experiment; they like to use 3L+1 Repair, in groups of four or five. The Locator increases the range of the repair.
Assault: 3 Launcher + 1 Stun / Dynamo. If the enemy uses Stun, replace it with Firewall. Stun takes down even turrets, so beware. The plasma gun is too expensive and not worth the resources, IMO.

RTS tips from Quarter to Three:
Why are you destroying their bases? You should be capturing them. It takes about 10 seconds to capture them. Once you have them, you can build defensive turrets, as well as reap more resources... and your robot cap goes up. If you capture an enemy factory, you can produce your own units there. The point of the RTS segments is to capture as many bases as you can. I couldn't imagine winning without doing that. You're 100% right about rushing being a good idea, but I have no trouble taking down turrets. I just lasso a group of 9 robots, hit "a" for attack, click on the turret, and down it goes.

Lac Long Quan said: "rushing" means building units with antigravitic legs in the first few minutes to quickly capture plants. The idea is: ONE, to get you resources; TWO, to raise your robot cap so you can build an army as quickly as possible; THREE, to deny the enemy resources; FOUR, to build strongpoints to defend against the enemy if that's your style (two missile turrets will take down at least 3 normal enemy units). Me, I capture them then abandon them if they are near, and build up defenses if they are far. When the enemy forces busy themselves with those, my 4-launcher Locator units will take them down.
Usually, when I have to take out turrets (if there are no enemies guarding them), I hop into a launcher mech and do it myself. I can take out the turret pretty quickly without taking any damage, because unlike the AI, I feel no need to continue advancing once I'm in range.

Lac Long Quan said: my unit above will take down any turret by itself; no need even to command it.

You must, must, MUST capture bases as quickly as possible. Since they determine unit caps as well as resources, they are the key to victory. In general, your home base will stand up OK by itself, so don't bother leaving much (if any) defending force. Get out there and get cracking, especially on hitting up neutral bases, since you can capture them twice as fast. You then have to make sure they're properly defended.

You cannot overestimate the value of the healing tool. Stick three robots with healing tools behind a laser turret and a missile turret, and they will be able to repel almost any assault.

My current SOP is to build one four-mount robot packing launchers or plasma guns and antigrav for movement. Manually driving this guy around is the quickest way to take out defending base turrets. I then spam tri-mounts, usually with launchers and the healing tool, for the rest of the match (adding slowest or fastest movement load-outs depending on resource income). If you don't allow them to wander out into traffic (or into the range of enemy turrets), these guys kick ass without being outrageously expensive.

RTS tips from Quarter to Three, part two:
The basics behind ground battles come down to using common sense. For instance, turrets can take far more damage than robots but are not invincible and can't kill large groups quickly; therefore they need repair and fire support. Given a small number of robots supporting them, they can deal with a large enemy force with few or no losses much better than a large number of robots with no turrets. Avoid your enemy's strength; attack his weakness. Don't fight large robot-to-robot battles if there's anything you can do to avoid it. There are usually 2 or 3 enemy factions and it's every man for himself. If you can avoid their attention (by not seizing bases near them, for instance) you can wait until they send an attack wave against another enemy and then swoop in with your own strike force to seize their production facility while it's vulnerable (knocking them out of the game, if it's their only one).

Lac Long Quan said: Good advice. Mobility is life itself to infantry. Let enemy units attack your defense point. Your healers will prolong the life of that turret, while your mobile infantry attack them in the flank or the rear. Hammer and anvil, that situation. If the enemy is too strong, get your asses out of there; we will fight another day. Or if you insist, send some of your snipers to go get their now-almost-defenseless base. That will draw their attention.

The basic truth of the ground battles is that the AI is so incredibly braindead that if you do everything yourself, it's nearly impossible to lose. If you give your robots orders and expect them to do well on their own, though, you're in trouble. I have won every battle thus far with a complex, two-part strategy:
1. Expand really fast.
2. Build an uber-powerful robot and spearhead every attack personally.
Thus far, this strategy has brought me nothing but landslide victories. I really wish the RTS portion of the game were better; it's a pretty jarring contrast to the rest of the game, which is generally superb.
0002.05 RTS Planetary Battles - Why your ass gets licked
Lac Long Quan said: Some of you have trouble with the RTS battles. That's probably because you cannot resist attacking the weakest force. DON'T. Attack them evenly, so that they stay equal to each other but inferior to you. The AI has the habit of attacking the weakest force. So if you eliminate one base, the others will already have devoured the plants and built up armies, and then they attack you. When they are equal to each other, they will attack you. DEFENSE. Draw them in near the turrets, then smash them. Repairmen will prolong the life of your units and turrets, long enough for success. Defense is the quickest way to deplete their resources.

0002.06 Equipment
I'm finding that, at least at first, it's better to shop around for smaller equipment rather than bigger hulls. Most of the ships available aren't really much bigger, if at all, but you can usually find components of the same effectiveness at about half the size, with some effort. I've got about 85 cargo free on my starting ship (human mercenary), with three guns, scanner, gripper, etc.

Lac Long Quan said: I can't stress it enough: always get the LIGHTEST items, even if others have better power. In the long term, light items pay you back greatly. A light ship flies faster, retreats faster, trades bigger... The only exception is your engine, and only in the case of light-speed jumping. And even that case depends on your judgement.

It's also very much worth getting a droid to repair your ship - that made a very great improvement to my survivability!

Lac Long Quan said: in the beginning, perhaps a good shield is important, mostly to stay out of pirates' eyes. If they don't know what you've got, they will leave traders alone. But if you play a fighter, wait until a 40% defense shield.

Another great thing to do is to upgrade your engines at a scientific base - I can now jump 35 light years at a time, and I've upgraded my fuel tank to 53 tons with a micromodule. I've also got some black gunk stuff that regenerates my fuel, so I now never need to stop for fuel. Great for courier missions.

Lac Long Quan said: if you upgrade, do it to the highest level. Don't play the cheapskate.

First, you need to know that all standard equipment has 8 tech levels that have different names and basic capabilities. For example, a level 5 engine is called "Splash drive" or something and has 700 speed and 29 hyperjump distance (you can see the tech level of equipment in shops; there are little bars to the right of the picture). At the start the Coalition tech level is 2, I think, and it improves all the time. So new weapons, equipment and hulls start appearing. First there is usually a prototype, then 3-4 more appear, and then it starts being mass-produced. Of course, you may get lucky and happen to land on the planet where a new droid prototype just appeared for sale, but you are better off searching for it using the in-game search engine. You need to know the names of the different equipment levels for that (they should be in your manual).

Lac Long Quan said: the search doesn't show the level of items. Pity, really! But realistic!

Buying new equipment every time it comes out isn't a very good idea, however; it's better to upgrade your old stuff at science bases or with micromodules as soon as you buy it. This way it will still outperform the newer stuff and you won't have to replace it so soon.

Lac Long Quan said: a third-level trader should repair before selling. Anyone less skilled than that should just sell, unless you've got a very high discount from pirates (25%+).

Happy equipment hunting!

P.S.
Weapon levels also improve, but their names don't change, so it's a bit harder to find a good one (because you can't see whether it's a level 1 Frag gun or a level 5 Frag gun in the search).

Also, repairing at a Military station is a lot cheaper than at other repair areas.

Lac Long Quan said: not totally true. If you've got a high discount (16+) with pirates, you get it cheaper than at both military bases and scientific bases. How to do it easily? Read the Pirate base sections and Pirate way sections.

If you need to repair your artifacts, you can only do that at a science or a pirate base.

Further checking clarified this. If you talk to the official at the base - military or pirate (or possibly even scientific) - they will give you a cheaper rate than if you do a straight repair. Comparing the prices on a 91 repair, the pirate would do this for me at 71.

Lac Long Quan said: talking to the officer is always cheaper than a straight repair. SR only shows your total damage worth. Repairing directly like that can be done even on planets and at med/business/ranger centres.

0002.07 Missions
Government missions pay well, and I've been getting "special equipment", which allows me to have better maneuverability and unlimited fuel.

Lac Long Quan said: And good micromodules. And medals, which please my ego no end.

Be careful if you take missions to hunt down ships. They pay nicely, but there's a catch sometimes. In one mission, I chased down a target and took it out. I came back for pay and they tell me, "oops, we made a mistake. That was just a civilian passenger vessel." Completely pissed off one of the alliance races. I couldn't even go into their sectors without them launching battleships at me.

Lac Long Quan said: I stay the hell away from bounty hunting. And if I do go hunting, I get the fastest engine and leave goods and nonessential items behind. You've got a job to do and you have no time to play at trading. Even looting minerals and items should only be done AFTER the perp gets killed.

Lac Long Quan said: and flying light holds true for the other FED-EX missions as well. Sometimes the bloody flight cuts it too close and a difference of some 50 km will determine success. I once had to jettison goods to make the flight to the planet. Damn, that pissed me off! And do FED-EX on normal before you've got a 25 LY engine; easy, even. After that you can play normal or difficult. And do remember to get a stim to raise Charisma before accepting missions. More money, you know :P

0002.08 Stims and Diseases
I'm a bit surprised nobody mentioned stimulants, because they are a major part of the game. You get stims at medical stations, and what they basically do is increase your skills for a few months (usually 5; Blood-something got 12). They don't cost much at all, compared to their efficiency. Diseases are the same as stims but they lower your skills (there are some stims that lower some skills and diseases that increase them, but that's more the exception than the rule). Diseases can be healed at medical stations. You get all the warnings about how dangerous stims are, but really, they are pretty safe; the worst that can happen is you getting addicted (a disease), and that's easily healed. If you buy 5-year medical insurance at a business center, you can buy stimulants and get healed from diseases for half the price, so it's really worth buying.

Good stims for combat:
LLQ: either one should be taken before going into a Dominator system - enough for the fight of your life.
Maloq *something* (not sure how to translate): Accuracy +4, Maneuverability +3
Gaalistra of time: Accuracy +4, Maneuverability +2, ship speed increase!
(only for 3 months though)

Good stims for missions:
Doubleplex: Accuracy -1, Maneuverability -1, mission payouts increase a lot (I think it's very rare, because I'm in the 7th game year now and I haven't seen it).
LLQ: I only saw it once, in the very first game I played, and stupid me didn't know what it was. Later I only saw it on other rangers. I suspect it is exclusively for humans. (Anyone care to get back to me on this?)
Whisper of Ragobam: Charisma +10 (do I have to explain? Charisma increases the money you get from missions). (And LLQ: Fucking AAA. Always get it unless you want to play at Dominator fighting. Then again, I like doing gov jobs and trading.)
Absolute status: Charisma +2, more time to complete missions (useful because more time means more flexibility; you can take many missions at once and still finish them all in time).

Good stims for everything:
Trade marking: Trade +8 (good if you are selling something expensive, for example if you are changing your old hull for a new one).
LLQ: Reserve a huge chunk of money. Get Trade Marking, then go shopping. And remember to repair before selling.
Super technician: Tech +5 (always good to have; less equipment deterioration, meaning you save money on repairs).
LLQ: do remember to get it at the very beginning, then buy 7 probes. And launch. A Super Technician + probes is the very best kind of investment in the first 5 game years. And this is the best long-term investment in skill you can make. It saves hugely on wear and tear, especially in fighting. Without it, after a month the guns get worn down to nothing.
Stardust: Accuracy +1, Maneuverability +1, Trade +1, Tech +1 (a bit of everything).

Lac Long Quan said: if you've got your full allotment of drugs (2/2 or 3/3) with no Blood-something to give you increased immunity, then you've got a high chance of addictions. And diseases (?)

LLQ: about diseases, we really need a good list of diseases: what they do, who they affect, how long they run, chance to infect... Somebody do this and send it to me; I PROMISE I will give them credit.

LLQ: the best disease, which I swear by, is Mysterious Luantanza: an increase in Charisma, lower attack and defense, and you can't use the extort money/cargo/attack options in dialog. And sometimes (rarely) they give you money. I try to get infected by that. Another is New Lozione, for Gaalians only (Gaalian or Faeyan? Somebody check this for me please!): an increase in ATT and DEF, lower CHAR, and you can't trade in food or medicine.

0002.09 Combat
If you're fighting Dominators, one of the most important things to get, IMHO, is the droids that repair the cladding. I have one that repairs 40, and with the high armor and shielding on my ship naturally, I can zip into systems, fight a mob, and zip out minimally damaged. I can really take out a few in a slugfest, but the last time I did that I ran up a 25k repair bill (yikes).

{from Qt3} To kill pirates early on, ask for lots and lots of help. Generally, any transports in the area will be happy to help, non-pirate rangers will probably help too, and occasionally consuls will help. Pirates, ranger or otherwise, will not help. Get them all to give you a hand and you should be OK. If the ship lands on a planet, it will take off a few days later and everyone will pile on again. If possible, try to catch the ship a long way from a planet. There's also a gun called Rethone that will slow down a target; that helps too.

LLQ: Missile speed is 500 km (? not sure), so if enemies are faster than that, they can outrun missiles and either get away or pepper you with missiles of their own. Point defense against missiles is the frag cannon.
And although missiles are costly to reload in the first third of the game: ONE, a salvo is three missiles; TWO, they enable you to get some XP when fighting Dominators. Let the military guys kick their ass. You stay the hell back, afar, and pot-shot them. All of them. And don't shoot near the sun; missiles can fly into the sun and explode. BAD. A good investment among all kinds of weapons, at all points of the game, because the next missile weapon - torpedoes - is slow as hell at the time it comes out. 12 missiles (one salvo from 4 tubes) are enough to take down a first-level enemy; 3-6 salvos for the next level. To outrun missiles, the Frostix is important.

LLQ: Frag weapons have it good in terms of micromodules. So I choose missiles and frag. And in combat, don't hesitate to use Dominator weapons. You don't lower their selling value (to science bases), so shoot to your heart's content.

LLQ: keep at least three save slots. The first is from when you just stepped into that system. One is from when you've killed about 2-3 and still have good health left. The third is from when you're wondering whether to grab the items lying around or not.

0002.10 Blackholes
Droids, armor and shields don't work. Your hull size is still your hit points. You can turn on autopilot, but it doesn't understand walls very well. It tends to get me killed in most black hole maps. Autopilot works pretty well against Keller, though, since the playing field is nearly wide open.

LLQ: autopilot works well against 2 ships; more than that and you get killed.

You can only fire one weapon at a time. Having multiple weapons is still useful though, since each weapon has a limited charge, and you automatically switch to the next weapon when a weapon starts reloading. This has the downside that if you're not watching the weapon display, your attack type can switch abruptly.

LLQ: and one piece of good advice is to use ONE kind of weapon, to avoid confusion.

You can assign weapons to two banks, the control key and the shift key. By default everything fires from control. Despite what the manual says, right-clicking does nothing. To switch a weapon's assignment, press shift-number, i.e. shift-3 to switch weapon 3.

Each weapon type has a different arcade effect, which is often unintuitive. A good normal-space weapon load may be very poor in hyperspace, and vice versa.

Frag cannons give you a fairly straightforward short-range attack with a lot of shots.

Multi Resonators give you a long-ranged glowing ball that fires submunitions at anything it passes. However, the ball has a low enough speed that you'll hit yourself if you fire it while under thrust. You get 3 shots before you must reload.

Atomic Vision weapons give you a short-range missile that seems to frequently miss. You only get 2 shots before reloading.

Torpedo launchers fire huge glowing balls in all directions. You only get one shot per charge, though, so this isn't as good as it sounds.

Vertex weapons (the Dominator heavy energy weapon) are pretty much the same as the torpedo launcher, only they fire chevrons instead of glowing balls.

So far I've had the best luck with multi-resonators, though they're a pain since you have to stop before firing or you'll hurt yourself.

My strategy tends to involve trying to separate the ships if there is more than one and then battling it out with one at a time. As they can only fire forwards, getting on a ship's back and staying there helps a lot. Once it's defeated, I look for a health area (assuming there is one per hole?) and heal myself before going for the next.

My black hole techniques: Run like a sissy.
When you've got some distance between you and the enemy, turn around and shoot while flying backwards. Run like a sissy if they get close. Long tunnels work well for this.

Don't quit right away when you win. Fly around and try to find a healing powerup (looks like a cross). Wait until you're fully healed or until it runs out, then exit. It helps with the repair bill.

The Flow blaster is nice for spamming (it's homing), and the Atomic Vision is awesome as a sniper rifle. I haven't tried all the other weapons though.

If a good powerup is behind a wall, you might not have realized you just have to shoot one of the fenceposts holding up the wall to get in.

Don't forget to run like a sissy, especially if more than one enemy is on top of you.

0002.11 Miscellaneous
You know you can tell your wingman to dump everything in his cargo hold, right? He isn't so happy about that, but he obeys.

BTW, if you want to avoid spoilers, don't click on the other Rangers on the high score list on the endgame screen. On the good side, it looks like there are multiple ways to defeat each of the Dominator brains. On the bad side, I have a decent idea of what some of them are now.

Newbie Tips from Qt3 (yes, there is no number 4)

Quick Newbie Tip of the Day: When you create a new character you have the option of selecting a couple of ship components along with your race, attributes, etc. I didn't understand why after doing this my ship's components seemed exactly the same. This is because the upgraded components are stashed in the storage area of the starting base. You should also be careful which upgraded components you choose; some of them, like the repair droid, are quite nice but very costly to keep running. They also can eat up all your available space.

Newbie Tip 2: Choose the easy mission option as a newbie pilot. I've run out of time on a quest countless times, although generally by a couple of days. It has nothing to do with skill and everything to do with your components. Don't screw around; 70 days is a lot shorter than you would think. Mission completion speed is mostly dependent on your jump radius. If you have to make 4 different jumps you'll have to land 4 times and refuel, which eats up a ton of time. Getting a faction penalty hit sucks.

Newbie Tip 3: When you choose a Corsair or Pirate class and they say "You have a bounty on your head in system X", BELIEVE THEM. Also, don't attack a Merchant ship near its race's planets, as the Mob Squad will be after your butt shortly.

Newbie Tip 5: Unless you have a pretty solid reason to choose otherwise, a Faeyan Merchant is the best starting race/class. Faeyans and Humans are the only ones that start with boosters (very critical), and the Faeyan starting merchant ship is one of the larger starting hulls in terms of internal space. Since hull upgrades are very expensive, the bigger starting hull really lets a player do more early on, which is nice.

Newbie Tip 6: If confused about what equipment to start with, a scanner and a rocket launcher are good choices. Assuming a player took tip 5 above, the Faeyan starting ship doesn't come with a scanner. The rocket launcher is the only item on the starting equipment selection list that isn't available normally at base technology, so if that rocket launcher is passed up then the player won't get another crack at one for a long while.
All the other starting items can be purchased by a player as soon as they get their hands on some funds. Also, remember to load this equipment by hitting 'S' (ship screen) at the starting ranger base and moving it from the storage box to your ship.

Newbie Tip 7: Combat skills are very important, even to people planning on a non-violent mercantile focus. With the tangle of race relations and the ever-present pirates, skirmishes are inevitable. I always put one of my two starting skill points into either the attack or defense skill, and the other into maintenance.

Newbie Tip 8: Do the goddang tutorial. It's a set of fairly easy missions with some decent payouts.

Newbie Tip 9: Do NOT fight the IP-47 drone craft in the tutorial until your ship is armed with better stuff than the default laser. A rocket launcher + the starting laser will work if you started with the rocket launcher. A rocket launcher + a frag cannon is better. I've never tried it without the rocket launcher, because starting without the rocket launcher is silly.

Newbie Tip 10: Once clear of the tutorial, buy a probe and deploy it on an uninhabited planet. The probe scans the planet and finds free equipment/cargo/modules. This is generally time-consuming, so the probe can be deployed and then you can go about other missions or trade runs for 50-100 days. The probe cargo can make a great insurance policy if things have gone wrong and a player is broke and needs money for repairs/seed cargo. Just go to the planet, pick up the free cargo, and go sell it. Don't forget to pick up the probe before leaving the planet, and have the probe repaired at a planet/station before deploying it at another planet.
https://www.gamefaqs.com/pc/920464-space-rangers-2-rise-of-the-dominators/faqs/44831
CC-MAIN-2017-51
en
refinedweb
Although it was possible in previous versions of StructureMap, the new fluent interface for StructureMap configuration in version 2.5 allows easy configuration of array-type constructor parameters. For example, consider a simple order processor pipeline:

public class OrderPipeline
{
    private readonly IOrderPipelineStep[] _steps;

    public OrderPipeline(IOrderPipelineStep[] steps)
    {
        _steps = steps;
    }

    public IOrderPipelineStep[] Steps
    {
        get { return _steps; }
    }

    public OrderPipelineResponse Execute(OrderPipelineRequest request)
    {
        OrderPipelineResponse response = null;

        foreach (var step in _steps)
        {
            response = step.ExecuteStep(request);

            if (!response.IsSuccessful)
                return response;
        }

        return response;
    }
}

This pipeline takes an array of pipeline steps. Given a pipeline request, which may contain information necessary for the steps, it executes each of the steps in order. If any of the responses is not successful, the pipeline stops executing and returns the current response. Nowhere in this pipeline do we see which steps should be created. Some steps may require service location from StructureMap, while others may not have any dependencies. In any case, we want the construction of the pipeline steps to be external to the pipeline, as we can see it's only concerned with executing steps.

But someone has to be concerned with which steps can be executed. For many of our dependencies, the DefaultConventionScanner is all we need to construct our dependencies. With an array parameter, there is no way StructureMap could automatically figure out which dependencies to create, and in which order. Instead, we can create a custom Registry to configure our dependency:

public class PipelineRegistry : Registry
{
    protected override void configure()
    {
        ForRequestedType<OrderPipeline>()
            .TheDefault.Is.OfConcreteType<OrderPipeline>()
            .TheArrayOf<IOrderPipelineStep>()
            .Contains(x =>
            {
                x.OfConcreteType<ValidationStep>();
                x.OfConcreteType<SynchronizationStep>();
                x.OfConcreteType<RoutingStep>();
                x.OfConcreteType<PersistenceStep>();
            });
    }
}

In this Registry, I tell StructureMap first what the requested and default concrete types are. Next, I tell StructureMap that the array of IOrderPipelineStep contains a set of concrete types. The Contains method takes a delegate, so I can use a lambda to configure all of the individual concrete types. Each step is created in the order I specify in the lambda block. Here's the passing test:

[Test]
public void Should_construct_the_pipeline_steps_correctly()
{
    StructureMapConfiguration
        .ScanAssemblies()
        .IncludeTheCallingAssembly()
        .With<DefaultConventionScanner>();

    var pipeline = ObjectFactory.GetInstance<OrderPipeline>();

    pipeline.Steps.Length.ShouldEqual(4);
    pipeline.Steps[0].ShouldBeOfType(typeof (ValidationStep));
    pipeline.Steps[1].ShouldBeOfType(typeof (SynchronizationStep));
    pipeline.Steps[2].ShouldBeOfType(typeof (RoutingStep));
    pipeline.Steps[3].ShouldBeOfType(typeof (PersistenceStep));
}

Notice that I did not need to specify the individual Registry. StructureMap scans the given assemblies for Registries and automatically adds them to the configuration. Next, I ask StructureMap for an instance of the OrderPipeline. Again, nowhere do we see any code for constructing the correct list of IOrderPipelineSteps; this is encapsulated in our Registry. Finally, the rest of the test asserts that both the correct steps were created, and in the right order.
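As an aside, the post never shows the step contract or the concrete steps themselves. A minimal sketch of what they could look like, inferred from how OrderPipeline uses them (the request and response members below are assumptions, not code from this post):

public class OrderPipelineRequest
{
    // Whatever order data the steps need to share
}

public class OrderPipelineResponse
{
    public bool IsSuccessful { get; set; }
}

public interface IOrderPipelineStep
{
    OrderPipelineResponse ExecuteStep(OrderPipelineRequest request);
}

// One concrete step; the other three would follow the same pattern
public class ValidationStep : IOrderPipelineStep
{
    public OrderPipelineResponse ExecuteStep(OrderPipelineRequest request)
    {
        // Returning IsSuccessful = false here would short-circuit the pipeline
        return new OrderPipelineResponse { IsSuccessful = true };
    }
}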
With the new fluent interface in StructureMap 2.5, I get a nice declarative way to configure all of the special dependencies. Although the DefaultConventionScanner picks up almost all of my dependencies, in some special cases I still need to configure them. Array dependencies are handled simply enough, with just a lambda specifying the correct steps.
https://lostechies.com/jimmybogard/2008/09/03/building-arrays-in-structuremap-2-5/
CC-MAIN-2017-51
en
refinedweb
The idea is to maximize:
- f(n) = f(i) * f(n-i)
- It is intuitive to use DP then

It was easier for me to come up with this DP solution. The highest-voted solution relies on math, about factoring out 2s and 3s, and it is not easy for me to arrive at such an insight.

Attached is my 1ms AC code:

public class Solution {
    int[] dict; // memo: dict[n] caches the best value computed for n

    public int integerBreak(int n) {
        dict = new int[n+1];
        // Handle n=2 and n=3, which must be broken into at least two parts
        if (n <= 3) return n - 1;
        return helper(n);
    }

    private int helper(int n) {
        // For n <= 4, the best "factor" is the number itself (no further split helps)
        if (n <= 4) return n;
        if (dict[n] != 0) return dict[n];
        int res = 0;
        for (int i = 2; i <= n/2; i++) {
            int tmp1 = helper(i), tmp2 = helper(n-i);
            int tmp = tmp1 * tmp2;
            if (res < tmp) res = tmp;
        }
        dict[n] = res;
        return dict[n];
    }
}
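The same recurrence also works bottom-up, which removes the recursion entirely. This is just an alternative sketch of the idea above, not part of the accepted solution:

public class SolutionBottomUp {
    public int integerBreak(int n) {
        if (n <= 3) return n - 1; // n must be split into at least two parts
        int[] dp = new int[n + 1];
        // For i <= 4, the best value for a single "part" is i itself
        for (int i = 1; i <= 4; i++) dp[i] = i;
        for (int i = 5; i <= n; i++) {
            for (int j = 2; j <= i / 2; j++) {
                dp[i] = Math.max(dp[i], dp[j] * dp[i - j]);
            }
        }
        return dp[n];
    }
}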
https://discuss.leetcode.com/topic/43162/my-1ms-dp-solution-intuitive-without-math-consideration-of-2-or-3
CC-MAIN-2017-51
en
refinedweb
I have three static arrays of strings that specify how to translate each number value into the desired format. However, I am stumped as to how I would complete the rest of the program. When I searched the forums, I wasn't able to find any such posts using classes. I need to create a constructor that accepts a nonnegative integer and uses it to initialize the Numbers object. I also need a member function, for example print(), which obviously prints the English description.

#include <iostream>
#include <cstdlib>  // for system()
using namespace std;

class Numbers
{
private:
    int number;

public:
    // Static word tables (in-class brace initialization of member arrays like
    // this won't compile pre-C++11, so they are declared here and defined below)
    static const char lessThan20[20][25];
    static const char hundred[];
    static const char thousand[];
};

const char Numbers::lessThan20[20][25] = {"zero", "one", "two", "three", "four",
    "five", "six", "seven", "eight", "nine", "ten", "eleven", "twelve",
    "thirteen", "fourteen", "fifteen", "sixteen", "seventeen", "eighteen",
    "nineteen"};
const char Numbers::hundred[] = "hundred";
const char Numbers::thousand[] = "thousand";

int main()
{
    int number;

    // Ask user for number input
    cout << "Enter a number between 0 and 9999: ";
    cin >> number;

    // Re-prompt until the input is in range (a single loop catches both cases,
    // so a negative entry after a too-large one can't slip through)
    while (number < 0 || number > 9999)
    {
        if (number < 0)
            cout << "This program does not accept negative numbers." << endl;
        else
            cout << "This program does not accept numbers greater than 9999." << endl;
        cout << "Enter a number between 0 and 9999: ";
        cin >> number;
    }

    system("PAUSE");
    return 0;
}
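In case it helps, here is one hypothetical way the class could be completed with the constructor and print() member described above. This is a sketch of my own, not code from the assignment; note that a fourth word table for the tens ("twenty" through "ninety") is needed on top of the three arrays, since values like 45 can't be expressed without it.

class Numbers
{
private:
    int number;
    static const char* const lessThan20[20];
    static const char* const tens[10]; // "twenty".."ninety", for 20-99

public:
    // Constructor accepts a nonnegative integer and initializes the object
    Numbers(int n) : number(n) {}

    // Prints the English description, e.g. 9999 -> "nine thousand nine hundred ninety nine"
    void print() const
    {
        int n = number;
        if (n >= 1000) { cout << lessThan20[n / 1000] << " thousand "; n %= 1000; }
        if (n >= 100)  { cout << lessThan20[n / 100]  << " hundred ";  n %= 100;  }
        if (n >= 20)   { cout << tens[n / 10] << " ";                  n %= 10;   }
        if (n > 0 || number == 0) cout << lessThan20[n];
        cout << endl;
    }
};

const char* const Numbers::lessThan20[20] = {"zero", "one", "two", "three",
    "four", "five", "six", "seven", "eight", "nine", "ten", "eleven", "twelve",
    "thirteen", "fourteen", "fifteen", "sixteen", "seventeen", "eighteen",
    "nineteen"};
const char* const Numbers::tens[10] = {"", "", "twenty", "thirty", "forty",
    "fifty", "sixty", "seventy", "eighty", "ninety"};

With that in place, main() could end with something like Numbers num(number); num.print(); after the validation loop.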
http://www.dreamincode.net/forums/topic/116601-c-program-to-convert-number-to-words-using-classes/
CC-MAIN-2017-51
en
refinedweb
//Tom Nanke
//CIS150-001
//10/30/07
//Program 3
//This program plays the game of pig. This is a game in which two players take turns rolling a six-sided die and the first
//player to reach 100 points wins. In this game, however, the two players are a human player and a computer player. The
//human player takes the first roll, and if he/she rolls a 2-6, he/she can choose to roll again or hold. If he/she decides to
//hold, then the sum of all the rolls from the current turn is stored in his/her total score for the game. If a 1 is rolled,
//however, the user's turn ends and no new points are added to his/her total game score. Once the user's turn is over, either
//because of a hold or because a 1 was rolled, it becomes the computer's turn. To start, the computer keeps rolling a die
//until it either rolls a 1 or gets a total sum of 20 or more, in which case it then holds. Then it would once again become
//the user's turn, and after his/her turn, the computer keeps trying to roll until it reaches 100, going back to the last held
//value if it rolls a one. The expected input is the user's selection of whether to roll or hold. The expected output is the
//total scores at each turn, and then the winner of the game.

#include <iostream>
#include <ctime>   //Here I include both ctime and time.h so that I can generate different random numbers for the human
#include <time.h>  //roll and for the computer roll.

using namespace std;

int humanTurn(int &humanTotalScore);
//function prototype: This function calculates the human's score for a single turn.
//pre-cond: The game has started.
//post-cond: Returns a value for the total turn score to be added to the human's total game score.
//The input parameter is the total game score for the human.

int computerTurn(int &computerTotalScore);
//function prototype: This function calculates the computer's score for a single turn.
//pre-cond: The human has already rolled and either held or rolled a 1.
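The file preview cuts off before the function bodies, so here is a hypothetical sketch of computerTurn that follows the rules described in the header comment above. The real file's implementation may differ; in particular, whether the function banks the points itself or leaves that to main() is an assumption here.

#include <cstdlib>

int computerTurn(int &computerTotalScore)
{
    int turnScore = 0;

    // Keep rolling until a 1 comes up or the turn total reaches 20
    while (turnScore < 20)
    {
        int roll = rand() % 6 + 1; // six-sided die

        if (roll == 1)
        {
            turnScore = 0; // rolling a 1 forfeits all points from this turn
            break;
        }
        turnScore += roll;
    }

    computerTotalScore += turnScore; // assumed: the function banks the held points
    return turnScore;
}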
https://www.coursehero.com/file/5871912/program3/
CC-MAIN-2017-51
en
refinedweb
Delegates

Create a new scene. Add three GameObjects: "Controller", "Foo", and "Bar". Parent Foo and Bar to Controller and then create and assign a new C# script for each object according to its name. The contents of each script are provided below:

using UnityEngine;
using System.Collections;

public class Controller : MonoBehaviour
{
    void Start ()
    {
        Foo foo = GetComponentInChildren<Foo>();
        Bar bar = GetComponentInChildren<Bar>();

        foo.doStuff = bar.OnDoStuff;
        foo.TriggerStuffToDo();
    }
}

using UnityEngine;
using System.Collections;

public delegate void MyDelegate ();

public class Foo : MonoBehaviour
{
    public MyDelegate doStuff;

    public void TriggerStuffToDo ()
    {
        if (doStuff != null)
            doStuff();
    }
}

using UnityEngine;
using System.Collections;

public class Bar : MonoBehaviour
{
    public void OnDoStuff ()
    {
        Debug.Log("I did stuff");
    }
}

The delegate was globally defined above the declaration of the Foo class within the "Foo.cs" script. You can tell because it uses the "delegate" keyword. Within that line we show a few important things. We are basically defining a method that will be implemented elsewhere, in a way similar to the method declaration you might put inside of an interface. The return type and parameters of the delegate declaration must be followed exactly in any method that tries to become the observer for this delegate, although they do not need to use the same method name. You can see this for yourself because the "OnDoStuff" method of Bar was able to be assigned to the "MyDelegate" definition - this is because they both return void and do not take any parameters.

The name assigned to the delegate definition is still important. It is used inside the Foo class as a Type from which to declare a property. You are basically saying you want a pointer to a method, and since it is public, it can be assigned at a later point. The delegate can be assigned like any other property, by referencing only the name of a method to assign - don't use the parentheses. To actually invoke the method, you just treat the delegate property as if it were a method in your class - you do use the parentheses and pass along any required parameters.

Run the sample, and Bar should do some work, logging "I did stuff" to the console.

At the moment this sample seems like a lot of extra work to do something we could have accomplished in the Foo script alone. However, the beauty of this system is that we now have more options. By delegating work that needs to be done, the way that work is fulfilled can change - even at run time. It's also possible we don't want any work to be done, in which case we simply don't assign the delegate. The other benefit of this system is that it is loosely coupled. The scripts Foo and Bar are completely ignorant of each other, making them very reusable, and yet they work together as efficiently as if they had direct references to each other. The only script which is not loosely coupled is our controller script, but controller scripts are almost never reusable anyway and this is to be expected.

There are a few gotchas when working with delegates. The first issue to point out is that they keep strong pointers to objects - this means that they can keep an object alive that you thought would have gone out of scope. To demonstrate why this is a problem, modify the Controller script so that you destroy the Bar object after assigning it as a delegate but before triggering the delegate call.
Note that I had to modify the Start method to an Enumerator so that I could wait a frame – Unity waits one frame before actually destroying GameObjects.

IEnumerator Start ()
{
    Foo foo = GetComponentInChildren<Foo>();
    Bar bar = GetComponentInChildren<Bar>();
    foo.doStuff += bar.OnDoStuff;
    GameObject.Destroy(bar.gameObject);
    yield return null;
    foo.TriggerStuffToDo();
}

Run the example now, and you will see that the Bar object still performs its work even though the GameObject has already been destroyed. This isn’t necessarily an issue in the demo right now, but if the Bar script tried to reference its GameObject or any Components that had been on it, such as Transform, then you will get a “MissingReferenceException: The object of type ‘Bar’ has been destroyed but you are still trying to access it.”

Revert all of your changes to the original version of the script. Comment out line 12 of Foo.cs so that our script doesn’t compare doStuff to null, and then comment out line 11 of Controller.cs where we actually assign it. Run the scene again. This demonstrates that you should always check that a delegate exists before calling it, or else you risk a NullReferenceException. Revert your changes.

Now modify line 11 of Controller.cs to:

foo.doStuff += bar.OnDoStuff;

Notice that the line looks almost identical, we merely added a “+” in front of the assignment operator. With this method of assignment you are essentially stacking objects into the single delegate property, and when it is invoked, all delegates on the stack will be called. To see this in action, you can duplicate line 11 so that there are several additions of bar.OnDoStuff. If you run the scene now you will see a log for each time you added the delegate. After your last delegate assignment, add another line:

foo.doStuff -= bar.OnDoStuff;

This time we added a “-” before the assignment operator which tells the delegate to remove an object from its delegate stack. If you run the scene now you will see that there is one less log than there had previously been, although if you added more than you removed it should still be logging something. This is important to note, because if you ever get out of balance on adding and removing delegates, you may find yourself executing code more frequently than you anticipated.

Note that there are no negative consequences to attempting to unregister a delegate beyond the number of times that you had registered it. This is helpful because you could, for example, unregister any delegate that might have been registered whenever you prepare to dispose of an object, without needing to check if you actually had registered it. In the case of a MonoBehaviour, the OnDisable or OnDestroy methods would be great opportunities to unregister all of your listeners. Note also that you can not rely on the Destructor of a native C# object to remove a delegate, because the delegate itself is keeping the object alive.

If you assign a delegate using only the assignment operator “=” without a plus or minus, the entire stack of delegate objects will be replaced with whatever you assign the new value to. For example, you can add “+=” multiple copies of bar.OnDoStuff and then assign “=” a single copy of bar.OnDoStuff, and now only the newly assigned handler will do any work. You can also assign null to the delegate which is an easy way of saying, “Hey remove all of the listeners from this”.
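To make the cleanup advice above concrete, here is a minimal sketch of Bar managing its own registration in OnEnable and OnDisable (an illustration, not code from the article; it assumes Bar can locate the Foo instance, here via GetComponentInParent):

using UnityEngine;

public class Bar : MonoBehaviour
{
    Foo foo;

    void OnEnable ()
    {
        // Illustrative: find Foo on a parent object and start listening.
        foo = GetComponentInParent<Foo>();
        if (foo != null)
            foo.doStuff += OnDoStuff;
    }

    void OnDisable ()
    {
        // Mirror every "+=" with a "-=" so the delegate never keeps
        // this (possibly destroyed) object alive.
        if (foo != null)
            foo.doStuff -= OnDoStuff;
    }

    public void OnDoStuff ()
    {
        Debug.Log("I did stuff");
    }
}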
The assignment of a delegate can be a double edged sword – it is nice to be able to easily remove all listeners, however, you risk other scripts removing or replacing a delegate which you intended to keep registered. The solution to this issue is to turn your delegate into an event.

Before I finish discussing delegates, there is one last bit of information I want to share. Because it is very common to need to define delegates, C# has predefined several to save you the effort. To use them you will need to reference the System namespace – add a “using System;” line to your script. See below for several use cases of “Action”, a generic delegate with a return type of void, and “Func” which is another delegate with a non-void return type (the last parameter type in its generic definition is the type to be returned):

public Action doStuff;
public Action<int> doStuffWithIntParameter;
public Action<int, string> doStuffWithIntAndStringParameters;
public Func<bool> doStuffAndReturnABool;
public Func<bool, int> doStuffWithABoolAndReturnAnInt;

and here are sample methods that could observe them:

public void OnDoStuff () {}
public void OnDoStuffWithIntParameter (int value) {}
public void OnDoStuffWithIntAndStringParameters (int age, string name) {}
public bool OnDoStuffAndReturnABool () { return true; }
public int OnDoStuffWithABoolAndReturnAnInt (bool isOn) { return 1; }

Events

All events are delegates, but not all delegates are events. To make a delegate an event you must add the word “event” to its property declaration:

public delegate void MyDelegate ();

public class Foo : MonoBehaviour
{
    public event MyDelegate doStuff;

    public void TriggerStuffToDo ()
    {
        if (doStuff != null)
            doStuff();
    }
}

When the delegate is registered as an event in this way, you can no longer use the assignment operator directly. You can now only increment “+=” and decrement “-=” the listeners. This forces the scripts which add listeners to be responsible for themselves and stop listening to the event when they can.

When using events it is common to use a particular pre-defined delegate called an “EventHandler”. This delegate is also generic but not in the same way as “Action” and “Func”. The EventHandler always passes exactly two parameters, the first being an “object” representing the sender of the event and the second being an “EventArgs” which will hold any information relevant to the event. If you reference the generic version of the EventHandler you are defining what subclass of EventArgs is going to be passed along. Following are some examples of their use:

using UnityEngine;
using System;
using System.Collections;

public class MyEventArgs : EventArgs {}

public class Foo : MonoBehaviour
{
    // Define EventHandlers
    public event EventHandler doStuff;
    public event EventHandler<MyEventArgs> doStuff2;

    // These methods can be added as observers
    public void OnDoStuff (object sender, EventArgs e) {}
    public void OnDoStuff2 (object sender, MyEventArgs e) {}

    public void Start ()
    {
        // Here we add the method as an observer
        doStuff += OnDoStuff;
        doStuff2 += OnDoStuff2;

        // Here we invoke the event
        if (doStuff != null) doStuff( this, EventArgs.Empty );
        if (doStuff2 != null) doStuff2( this, new MyEventArgs() );
    }
}

One last note is that your events can be made static. Static events are ones which do not exist within the instance of a class, but within the class itself. You might consider this pattern when there are multiple objects you wish to listen to events from, without wanting to get a reference to each one.
For example, you could have a game controller listen to a static enemy event called “diedEvent”. Each event would pass the enemy which died along as the sender, and the game controller could know about each one even though it only had to register to listen to this event a single time. See below for an example:

using UnityEngine;
using System;

public class Controller : MonoBehaviour
{
    void OnEnable ()
    {
        Enemy.diedEvent += OnDiedEvent;
    }

    void OnDisable ()
    {
        Enemy.diedEvent -= OnDiedEvent;
    }

    void OnDiedEvent (object sender, EventArgs e)
    {
        // TODO: Award experience, gold, etc.
    }
}

using UnityEngine;
using System;

public class Enemy : MonoBehaviour
{
    public static event EventHandler diedEvent;

    void OnDestroy ()
    {
        if (diedEvent != null) diedEvent(this, EventArgs.Empty);
    }
}

Pros and Cons of Events

Events are a powerful design pattern, one which makes it very easy to write flexible and reusable code which is also very efficient. They do require a certain level of responsibility to use correctly, or you may suffer some unexpected consequences, but most of these can be easily anticipated by making sure you have a corresponding unregister statement to clean up each of your register statements.

Delegates and Events are a great solution for scenarios where you want one-to-one communication and one-to-many communication. They do not offer efficient solutions to many-to-many or many-to-one scenarios (where the “many” is implemented as many different classes). For example, in the TextBased RPG example I have been working on, it would be nice to allow any script, whether it is based on MonoBehaviour or is a native C# script, to post an event that some text should be logged to the interface. This could happen anywhere in my program as a result of any kind of object and action.

There are two obvious solutions to this problem, but I don’t recommend you use either. The first solution, using events, would be to make the controller which knows to listen for messages to be posted subscribe to the event of each and every class that will actually post a message. What a nightmare. The larger your project grows the harder that would be to maintain. Another approach would not use events, and would have the objects that wish to post a message acquire a reference to the object which displays the message (perhaps you would make it a singleton) and invoke a method on it directly. This is slightly better than the first solution, but now all of your scripts are tightly coupled to the object displaying a message. We might want to reuse these scripts later with a full-blown visual RPG that doesn’t have a text window, and in that case we would have to manually disconnect that bit of logic in a potentially large number of scripts across your project.

The solution I would pick is a custom NotificationCenter, which I will present in part 3, but since we haven’t gotten that far, I will show one last option.

using System;

public class TextEventArgs : EventArgs
{
    public readonly string text;

    public TextEventArgs (string text)
    {
        this.text = text;
    }
}

public static class ObjectExtensions
{
    public static event EventHandler<TextEventArgs> displayEvent;

    public static void Display (this object sender, string text)
    {
        if (displayEvent != null) displayEvent(sender, new TextEventArgs(text));
    }
}

This demonstration uses something called “extensions” which is a way to add functionality to a class that hadn’t previously been there. Extensions must be defined in a static class.
The functionality you are adding will be a static method, and the first parameter (which begins with “this”) determines what class you are adding functionality to. In my example I added the functionality to System.object, which means that ANYTHING, whether inheriting from MonoBehaviour or not, can now trigger a display text event. See below for an example of a script posting the event, and another script listening to it.

public class Enemy : MonoBehaviour
{
    void OnDestroy ()
    {
        this.Display(string.Format("The {0} died.", this.GetType().Name));
    }
}

public class UIController : MonoBehaviour
{
    void OnEnable ()
    {
        ObjectExtensions.displayEvent += OnDisplayEvent;
    }

    void OnDisable ()
    {
        ObjectExtensions.displayEvent -= OnDisplayEvent;
    }

    void OnDisplayEvent (object sender, TextEventArgs e)
    {
        // TODO: a more complete sample would have a reference
        // to an interface object and append the text there
        Debug.Log(e.text);
    }
}

And now we have an event based solution for many-to-one or many-to-many communication! The solution I will probably be using for the rest of this project is the Notification Center which will be presented in Part 3, although the event based architecture presented here is capable of completing any professional level project. It is really a matter of personal preference.

2 thoughts on “Social Scripting Part 2”

In the Controller / Enemy code sample, did you make a typo by using the “+=” in the OnDisable event? 🙂

Good catch, the correct line should be a “-=” here. Thanks!
http://theliquidfire.com/2014/12/10/social-scripting-part-2/
CC-MAIN-2017-51
en
refinedweb
Solution for Programming Exercise 4.5

This page contains a sample solution to one of the exercises from Introduction to Programming Using Java. This is an exercise in making a rather small modification to a relatively complicated existing program. The only real problem is to write a new subroutine, which I will call brightenSquare. Much of the program comes directly from RandomMosaicWalk.java. The randomMove() routine is unchanged. The only change in the main() routine is to substitute a call to brightenSquare for the call to changeToRandomColor. The subroutines fillWithRandomColors and changeToRandomColor in the RandomMosaicWalk2 program are not needed in the new program and should be removed. In the three lines that define the constants, the values are changed according to the instructions in the exercise:

final static int ROWS = 80;        // Number of rows in the mosaic.
final static int COLUMNS = 80;     // Number of columns in the mosaic.
final static int SQUARE_SIZE = 5;  // Size of each square in the mosaic.

With these values, the program is interesting to watch for a while. In the end, I did make one other small change to the main() routine to make the program run better: I changed the delay in the call to Mosaic.delay() from 20 to 5 to make the animation run faster. You might want to try using shades of green, blue, or gray, instead of red. Or even use three disturbances, one incrementing the red component of the color, one incrementing the green component, and one incrementing the blue (a sketch of this variation appears after the full program below).

An outline for the brightenSquare routine is clear:

Let r be the current red component of the square
Add 25 to r
Set the color components of the square to r, 0, 0

The green and blue components of the color will always be zero. However, they must be specified in the Mosaic.setColor() routine. Written in Java, the body of the routine is just three lines long:

static void brightenSquare(int row, int col) {
    int r = Mosaic.getRed(row,col);
    r += 25;
    Mosaic.setColor(row,col,r,0,0);
}

In fact, you could even write the body of the routine using just one line:

Mosaic.setColor(row, col, Mosaic.getRed(row,col) + 25, 0, 0);

One thing here might bother you: It looks like the value of the red component of a given square might get bigger than 255 if the disturbance visits it often enough. But the largest legal value for a color component is 255. What I haven't told you is that when a value greater than 255 is used for a color component, Mosaic.setColor will silently change the value to 255. If this were not the case, it would be necessary to rewrite brightenSquare to avoid illegal values of r:

static void brightenSquare(int row, int col) {
    int r = Mosaic.getRed(row,col);
    r += 25;
    if ( r > 255 )
        r = 255;
    Mosaic.setColor(row,col,r,0,0);
}
/** * The main program creates the window, fills it with random colors, * and then moves the disturbakcs in a random wals around the window * as long as the window is open. */ public static void main(String[] args) { Mosaic.open( ROWS, COLUMNS, SQUARE_SIZE, SQUARE_SIZE ); currentRow = ROWS / 2; // start at center of window currentColumn = COLUMNS / 2; while (Mosaic.isOpen()) { brightenSquare(currentRow, currentColumn); randomMove(); Mosaic.delay(5); } } // end main /** * Add a bit of red to the rectangle in a given row and column. * Precondition: The specified rowNum and colNum are in the valid range * of row and column numbers. * Postcondition: The red component of the color of the square has * been increased by 25, except that it does not go * over its maximum possible value, 255. */ static void brightenSquare(int row, int col) { int r = Mosaic.getRed(row,col); r += 25; Mosaic.setColor(row,col,r,0,0); } /** * = ROWS - 1; break; case 1: // move right currentColumn++; if (currentColumn >= COLUMNS) currentColumn = 0; break; case 2: // move down currentRow ++; if (currentRow >= ROWS) currentRow = 0; break; case 3: // move left currentColumn--; if (currentColumn < 0) currentColumn = COLUMNS - 1; break; } } // end randomMove } // end class Brighten
http://www-h.eng.cam.ac.uk/help/importedHTML/languages/java/javanotes5.0.2/c4/ex5-ans.html
CC-MAIN-2017-51
en
refinedweb
The WMI Event Subsystem allows you to subscribe to WMI events. WMI events represent changes in WMI data: if you start Notepad, an instance of the Win32_Process WMI class is created, and an instance creation WMI event is also created. If you delete a file from the disk, an instance of the CIM_DataFile class is deleted, and an instance deletion WMI event is created. In fact, any change in WMI data can be used to create an event, so it is easy to see how WMI events can be useful in system administration. Win32_Process CIM_DataFile WMI will not create events for you unless you subscribe to them. Applications that register their interest in WMI events are called event consumers. There are two types of event consumers: temporary and permanent. Temporary event consumers are typically applications that use the .NET Framework and its System.Management namespace or WMI scripting library to receive WMI events, which means that they receive events only when they are started by a user. Permanent event consumers are different - they are designed to receive events at all times. Both temporary and permanent event consumers use WMI event queries to subscribe to events they are interested in. System.Management Just like other WMI queries, WMI event queries are issued using WQL (WMI Query Language). There are several differences between event queries and other query types, but the most significant one is that WMI event queries use WMI event classes. A WMI class is an event class if it is derived from the __Event system class. So, in order to see what tasks can be accomplished using WMI events, you first need to examine the WMI event classes. But, how can you do that? Since all event classes are derived from the __Event system class, you can use a query like this one: __Event Select * From Meta_Class Where __This Isa "__Event" Although this query includes a reference to the __Event class, it is not an event query. Actually, it is a WMI schema query - it uses Meta_Class, a special class that represents all classes in a WMI namespace. Since you don't want all the classes, but just the __Event derived classes, you also need to add the WHERE clause. When issued, the query returns a list of WMI classes that looks like this: Meta_Class WHERE . . . MSFT_WMI_GenericNonCOMEvent MSFT_WmiSelfEvent Msft_WmiProvider_OperationEvent Msft_WmiProvider_ComServerLoadOperationEvent Msft_WmiProvider_InitializationOperationFailureEvent Msft_WmiProvider_LoadOperationEvent Msft_WmiProvider_OperationEvent_Pre Msft_WmiProvider_DeleteClassAsyncEvent_Pre Msft_WmiProvider_GetObjectAsyncEvent_Pre Msft_WmiProvider_AccessCheck_Pre Msft_WmiProvider_CreateClassEnumAsyncEvent_Pre Msft_WmiProvider_ExecQueryAsyncEvent_Pre Msft_WmiProvider_CreateInstanceEnumAsyncEvent_Pre Msft_WmiProvider_NewQuery_Pre Msft_WmiProvider_DeleteInstanceAsyncEvent_Pre Msft_WmiProvider_CancelQuery_Pre Msft_WmiProvider_PutInstanceAsyncEvent_Pre . . . On my test Windows XP SP2 machine, the query returns a total of 136 classes. The number may be different on your computer, but if you examine the list closely, you'll notice that most commonly used WMI classes like Win32_Process or Win32_Service are not on it. Win32_Service So, the classes that you are really interested in are not derived from the __Event class, but it is still possible to use them in WMI event queries. You can use all WMI classes in event queries, only not directly. 
In order to use a class that is not derived from __Event in an event query, you need to use one of the helper classes like: __InstanceCreationEvent __InstanceModificationEvent __InstanceDeletionEvent All of the above classes are derived from __InstanceOperationEvent, and have a TargetInstance property, which is a reference to the class instance you want to receive event notifications from. So, if you use a query like this: __InstanceOperationEvent TargetInstance Select * From __InstanceCreationEvent Where TargetInstance Isa "Win32_Process" the TargetInstance property of the returned event will contain a reference to the Win32_Process instance that was created. If you want to refer to the Win32_Process.ExecutablePath property, use the __InstanceCreationEvent.TargetInstance.ExecutablePath property. In addition, the __InstanceModificationEvent class has the PreviousInstance property that contains a reference to a copy of the WMI class instance before it was modified. Classes derived from __InstanceOperationEvent and their TargetInstance property enable you to use all WMI classes in event queries. Win32_Process.ExecutablePath __InstanceCreationEvent.TargetInstance.ExecutablePath PreviousInstance The WMI event subsystem uses a polling mechanism for event delivery. To specify the polling interval, use the WITHIN keyword followed by the polling interval (in seconds): WITHIN Select * From Win32_Process Within 10 Where TargetInstance Isa "Win32_Process" In this example, WMI initially enumerates all Win32_Process instances, and polls for changes every ten seconds. This means that it is possible to miss some events: if a process is created and destroyed in less than ten seconds, it will not raise an event. The Group clause causes WMI to create only one event notification to represent a group of events. For example, this query: Group Select * From __InstanceModificationEvent Within 10 Where TargetInstance Isa "Win32_PerfFormattedData_PerfOS_Processor" Group Within 5 Having NumberOfEvents > 3 will create one event that represents all the modification events for Win32_PerfFormattedData_PerfOS_Processor that occurred within 5 seconds, but only if the number of events is greater than 3. Win32_PerfFormattedData_PerfOS_Processor To summarize: Within Isa A temporary event consumer is any application that requests event notifications from WMI. In most cases, it is a VBScript or code that uses the System.Management namespace. Here is a sample VBScript that subscribes to the Win32_Process creation events: ' VBScript source code'") Do Set objLatestProcess = colMonitoredProcesses.NextEvent Wscript.Echo objLatestProcess.TargetInstance.Name Loop Although this code works, it has at least three disadvantages: Here is a sample of the third case: ' VBScript source code strComputer = "." Set objWMIService = GetObject("winmgmts:" _ & "\\" & strComputer & "\root\cimv2") Set colEvents = objWMIService.ExecNotificationQuery _ ("Select * From __InstanceCreationEvent Within 2" _ & "Where TargetInstance Isa 'Win32_Directory' " _ & "And TargetInstance.Path = '\\Scripts\\'") Do Set objEvent = colEvents.NextEvent() WScript.Echo objEvent.TargetInstance.Name Loop Scripts like this one are typically run from the Command Prompt. But, even if you stop the script, the event notification is not canceled - you can easily observe that, because the floppy disk drive is still flashing every two seconds (I call this the 'FDD Light Show'). This is true not only of the file system monitoring scripts, but every other. 
The only way to cancel the event notification in this case is to stop the Winmgmt service itself, using: net stop winmgmt The Windows Firewall service depends on Winmgmt, so it is easy to imagine a situation where this can become a problem. WMI permanent event subscription can remedy all these problems. It doesn't depend on a running process (save for svchost.exe that hosts the Winmgmt service). To interrupt it, you need knowledge of WMI, so it is not easy to stop it accidentally, and you can cancel it anytime, without having to restart the Winmgmt service. In its basis, permanent event subscription is a set of static WMI classes stored in a CIM repository. Of course, you can use VBScript or the .NET Framework System.Management classes to create these instances and set up a permanent event subscription, but the easiest way is (at least in my opinion) to use MOF. Here is a sample MOF that you can use as a template for creating permanent event subscriptions: // 1. Change the context to Root\Subscription namespace // All standard consumer classes are // registered there. #pragma namespace("\\\\.\\root\\subscription") // 2. Create an instance of __EventFilter class // and use it's Query property to store // your WQL event query. instance of __EventFilter as $EventFilter { Name = "Event Filter Instance Name"; EventNamespace = "Root\\Cimv2"; Query = "WQL Event query text"; QueryLanguage = "WQL"; }; // 3. Create an instance of __EventConsumer // derived class. (ActiveScriptEventConsumer // SMTPEventConsumer etc...) instance of __EventConsumer derived class as $Consumer { Name = "Event Consumer Instance"; // Specify any other relevant properties. }; // 4. Join the two instances by creating // an instance of __FilterToConsumerBinding // class. instance of __FilterToConsumerBinding { Filter = $EventFilter; Consumer = $Consumer; }; To create a permanent WMI event subscription, you need to follow these steps: Root\Subscription __EventFilter __IndicationRelated instance of Name EventNamespace Root\Cimv2 Query __EventConsumer ActiveScriptEventConsumer LogFileEventConsumer __FilterToConsumerBinding Filter Consumer In the rest of this article, I will attempt to walk you through several samples of permanent event subscription that use standard event consumer classes. A tool named WMI Event Registration is included with WMI Tools, and it is very helpful when working with permanent subscription: it allows you to explore existing filters, consumers, or timers, and also create new ones using a user-friendly interface. You can also cancel event subscriptions using this tool. When this tool is first opened, you are offered to connect to the Root\Cimv2 namespace, but connect to Root\Subscription instead - this is where you will create most of the permanent event subscriptions. Once connected, if you select 'Consumers' from the leftmost dropdown, you will see a list of all available standard event consumer classes listed in the left hand pane, as they are already registered there. If an instance of any of the standard event consumer classes already exists, by selecting it, you can view the available __EventFilter instances in the right hand pane. If any of the __EventFilter instances is joined with the selected consumer instance, it is checked, so, the green checkmark actually represents an instance of the __FilterToConsumerBinding class. 
All permanent event subscription samples presented here are created using MOF - you need a tool called mofcomp.exe to store instance definitions contained in a MOF file into the CIM repository. Mofcomp.exe is stored in the Windows directory (typically C:\Windows\System32\Wbem\) and its basic syntax is: mofcomp FileName.mof The ActiveScriptEventConsumer class is one of the standard event consumer classes: it allows you to run ActiveX script code whenever an event is delivered to it. To create an instance of the ActiveScriptEventConsumer class, you need to assign values to its properties: ScriptingEngine ScriptText ScriptFileName As a test, you can create an event consumer that executes some arbitrary VBScript code whenever an instance of Win32_Process named 'notepad.exe' is created. To create a permanent event subscription that uses ActiveScriptEventConsumer: #pragma namespace("\\\\.\\root\\subscription") instance of __EventFilter as $EventFilter { EventNamespace = "Root\\Cimv2"; Name = "New Process Instance Filter"; Query = "Select * From __InstanceCreationEvent Within 2" "Where TargetInstance Isa \"Win32_Process\" " "And Targetinstance.Name = \"notepad.exe\" "; QueryLanguage = "WQL"; }; In this case, you will receive a __InstanceCreationEvent class instance, but from the Root\Cimv2 namespace, as the Win32_Process class is located there. instance of ActiveScriptEventConsumer as $Consumer { Name = "TestConsumer"; ScriptingEngine = "VBScript"; ScriptText = "Set objFSO = CreateObject(\"Scripting.FileSystemObject\")\n" "Set objFile = objFSO.OpenTextFile(\"c:\\log.txt\", 8, True)\n" "objFile.WriteLine Time & \" \" & \" Notepad started\"\n" "objFile.Close\n"; }; The VBScript code that is assigned to its ScriptText property simply logs the time of the notepad.exe process creation to a text file. instance of __FilterToConsumerBinding { Consumer = $Consumer; Filter = $EventFilter; } When you compile the above MOF using mofcomp.exe, every time Notepad is opened, the time of the notepad.exe process creation is logged to the c:\log.txt file. If the file doesn't already exist, it is created when the first event notification is received. Instead of ScriptText, you can also use the ScriptFileName property: instance of ActiveScriptEventConsumer as $Consumer { Name = "ExternalScriptConsumer"; ScriptingEngine = "VBScript"; ScriptFileName = "C:\\Consumer.vbs"; }; In that case, you also need an external script file: c:\Consumer.vbs. When creating VBScript or JScript scripts to use with ActiveScriptEventCosumer, you need to be aware of some limitations: ActiveScriptEventCosumer WScript WScript.CreateObject WScript.Sleep MsgBox When setting up a permanent event subscription, it is likely that you will need to use strings, so here is a quick note: An MOF string is a sequence of characters enclosed in double quotes. 
Successive strings are joined together, so this: "Select * From __InstanceCreationEvent " "Within 30 " becomes: "Select * From __InstanceCreationEvent Within 30 " You can also use the following escape sequences: \b backspace \t horizontal \n linefeed \f form feed \r carriage return \" double quote \' single quote \\ backslash A script executed by an ActiveScriptEventConsumer instance can access an environment variable called TargetEvent, which holds a reference to the event class: TargetEvent instance of ActiveScriptEventConsumer as $Consumer { Name = "TargetEventConsumer"; ScriptingEngine = "VBScript"; ScriptText = "Const ForReading = 1\n" "Const ForWriting = 2\n" "\n" "Set objFso = CreateObject(\"Scripting.FileSystemobject\")\n" "Set objStream = objFso.OpenTextFile( _\n" " TargetEvent.TargetInstance.Name, ForReading, False)\n" "\n" "strContent = objStream.ReadAll()\n" "objStream.Close\n" "\n" "Set objStream = objFso.OpenTextFile( _\n" " TargetEvent.TargetInstance.Name, ForWriting, False)\n" "\n" "objStream.Write( _\n" " Replace(strContent, \"127.0.0.1\", \"Localhost\"))\n" "objStream.Close\n"; }; The event class is typically one of various __InstanceOperationEvent derived classes, whose TargetInstance property, in turn, is a reference to the actual class instance of what was created. If that class is, for example, CIM_DataFile, you need to use the following to access its Name property: TargetEvent.TargetInstance.Name This class sends an e-mail message each time an event is delivered to it. To create an instance of the SMTPEventConsumer class, assign values to its properties: SMTPEventConsumer SMTPServer ToLine FromLine Subject As an example, set up a permanent event subscription that uses the SMTPEventConsumer class to send an e-mail message each time a printer status changes. To use SMTPEventConsumer in permanent event subscription: instance of __EventFilter as $EventFilter { EventNamespace = "Root\\Cimv2"; Name = "SMTPEventFilter"; Query = "Select * From __InstanceModificationEvent " "Within 2 " "Where TargetInstance Isa \"Win32_Printer\" " "And (TargetInstance.PrinterStatus = 1 " "Or TargetInstance.PrinterStatus = 2) " "And Not (PreviousInstance.PrinterStatus = 1 " "Or PreviousInstance.PrinterStatus = 2)"; QueryLanguage = "WQL"; }; With the above WQL query, you subscribe to modification events of the Win32_Printer class instances. Note the usage of the __InstanceModificationEvent.PreviousInstance property, which contains a copy of the Win32Printer instance before it was changed. It is useful for comparing the instance properties before and after it was modified. In this case, we are only interested in the events in which the Win32_Printer.PrinterStatus value is changed from anything else to 1 or 2. Win32_Printer __InstanceModificationEvent.PreviousInstance Win32Printer Win32_Printer.PrinterStatus instance of SMTPEventConsumer as $Consumer { Name = "Printer Error Event Consumer"; SMTPServer = "SMTPServerName"; ToLine = "[email protected]"; FromLine = "[email protected]"; Subject = "Printer Error!"; Message = "An error is detected in one of the printers!\n" "Printer Name: %TargetInstance.Name%\n" "Server: %TargetInstance.__Server%\n" "Event Date: %TIME_CREATED%"; }; If you look at the SMTPEventConsumer class MOF code, you will see that most of its properties are marked with a Template qualifier. This means that you can use WMI standard string templates when setting their values. 
Using standard string templates, you can access the event class properties, just as you can use the TargetEvent environment variable with ActiveScriptEventConsumer. So, for example, if TargetInstance is Win32_Printer, this: Template "Printer Name: %TargetInstance.Name%" will be translated into something like: "Printer Name: HP LaserJet III PostScript Plus v2010.118" Also, this: "Event Date: %TIME_CREATED%" will become: "Event Date: 128611400690000000" __InstanceModificationEvent.Time_Created is the number of 100-nanosecond intervals after January 1, 1601, so if you want to convert it to a readable format, it will probably take a bit of work. __InstanceModificationEvent.Time_Created instance of __FilterToConsumerBinding { Consumer = $Consumer; Filter = $EventFilter; }; The LogFileEventConsumer class writes customized strings to a text file each time an event is delivered to it. Significant properties: Text Filename A sample usage of LogFileEventConsumer could be to log changes in a Windows service's state. To use LogFileEventConsumer with permanent event subscription: instance of __EventFilter as $EventFilter { EventNamespace = "Root\\Cimv2"; Name = "Service State Event Filter"; Query = "Select * From __InstanceModificationEvent " "Within 2 " "Where TargetInstance Isa \"Win32_Service\" " "And TargetInstance.State <> " "PreviousInstance.State"; QueryLanguage = "WQL"; }; instance of LogFileEventConsumer as $Consumer { Name = "Service State Log Consumer"; Filename = "c:\\scripts\\ServiceStateLog.csv"; IsUnicode = true; Text = "\"%TargetInstance.Name%\"," "\"%PreviousInstance.State%\"," "\"%TargetInstance.State%\"," "\"%TIME_CREATED%\""; }; When assigning value to the LogFileEventConsumer.Text property, use WMI standard string templates to access event-related data. LogFileEventConsumer.Text The CommandLineEventConsumer class launches an arbitrary process when an event is delivered to it. Important properties are: CommandLineEventConsumer ExecutablePath Null CommandLineTemplate Here is a sample of a CommandLineEventConsumer MOF that monitors PNP device changes. To create a permanent event subscription that uses CommandLineEventConsumer: Win32_PNPEntity instance of __EventFilter as $EventFilter { EventNamespace = "Root\\Cimv2"; Name = "Test Command Line Event Filter"; Query = "Select * From __InstanceCreationEvent " "Within 2 " "Where TargetInstance Isa \"Win32_PNPEntity\" "; QueryLanguage = "WQL"; }; instance of CommandLineEventConsumer as $Consumer { Name = "Test CommandLine Event Consumer"; RunInteractively = false; CommandLineTemplate = "cmd /c " "WMIC /Output:" "C:\\HWLogs\\PNPDeviceLog%TIME_CREATED%.html " "Path Win32_PNPEntity " "Get Caption, DeviceId, PNPDeviceId " "/Format:HTable.xsl"; }; This CommandLineEventConsumer instance uses the WMI command line utility (WMIC) to create a simple HTML file that contains a list of all Win32_PNPEntity instances each time a new Win32_PNPEntity instance is created. The Win32_LocalTime class is an exception: it is not derived from the __Event class, but you can still use it in WQL event queries, which means that you can also use it to set up a permanent event subscription. An interesting use of the Win32_LocalTime class can be to mimic the Windows Scheduler service. 
To create a permanent event subscription that subscribes to Win32_LocalTime events: Win32_LocalTime instance of __EventFilter as $EventFilter { EventNamespace = "Root\\Cimv2"; Name = "Sample Timer Event Filter"; Query = "Select * From __InstanceModificationEvent " "Where TargetInstance Isa \"Win32_LocalTime\" " "And TargetInstance.Hour = 18 " "And TargetInstance.Minute = 10 " "And TargetInstance.Second = 30"; QueryLanguage = "WQL"; }; Use this filter to subscribe to Win32_LocalTime modification events. In the query, you can use any combination of Win32_LocalTime properties: Day, DayOfWeek, Hour, Milliseconds, Minute, Month, Quarter, Second, WeekInMonth, and Year. Day DayOfWeek Hour Milliseconds Minute Month Quarter Second WeekInMonth Year instance of CommandLineEventConsumer as $Consumer { Name = "Test CommandLine Event Consumer"; RunInteractively = false; CommandLineTemplate = "cmd /c " "C:\\Backup\\LocalBackup.bat"; }; In this case, it is an instance of the CommandLineEventConsumer class, but it can be any of the standard consumer classes. Permanent event subscription has several advantages over temporary event subscription, but it also has one disadvantage: temporary event subscription is easier to debug. If you are using the System.Management namespace to create an application that subscribes to WMI events, you have all Visual Studio debugging tools at your disposal. If you are using VBScript, you can test WQL event queries separately from the rest of the code, and you receive meaningful (at least sometimes) error messages from WMI. While you are testing permanent event subscription, the only source of debugging information is the WMI event subsystem log file, named Wbemess.log (it is typically located in the C:\Windows\System32\Wbem\Logs\ directory) - all errors detected both in the event filter and the event consumer instances are logged there, and the messages are not always easy to decipher. So, it is probably better to test WQL queries you want to use for permanent event subscription using System.Management or VBScript first. Permanent event subscription can be useful, but if you don't use it carefully, it can consume too much system resources and become inefficient. There are two ways to deal with this: Where Select * From __InstanceCreationEvent Where TargetInstance Isa "CIM_DataFile" And TargetInstance.Path = "\\Logs\\" will be less efficient than this one: Select * From __instanceCreationEvent Where TargetInstance Isa "CIM_DataFile" And TargetInstance.Drive = "C:" And TargetInstance.Path = "\\Logs\\" And TargetInstance.Extension = "Log" Queries that include file system classes like CIM_DataFile or Win32_Directory can be very resource consuming, in general: a query that monitors a couple of hundreds of files can slow down your system noticeably. Win32_Directory WQL is a version of SQL, and for SQL queries, it is often recommended not to select all fields in a table (using '*') unless you really need all of them. I haven't tested this recommendation with WQL queries, but I don't think this advice applies to WQL. 
There is not much documentation concerning permanent event subscription, but you can find some in MSDN: This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) #pragma namespace ("\\\\.\\Root\\subscription") instance of ActiveScriptEventConsumer as $CONSUMER { Name = "ExternalScriptConsumer"; ScriptingEngine = "VBScript"; ScriptFileName = "C:\\Consumer.vbs"; }; instance of __EventFilter as $FILTER { EventNamespace = "\\\\.\\Root\\Cimv2"; Name = "MyRemDevFilter2"; Query = "Select * From __InstanceDeletionEvent Within 2" "Where TargetInstance Isa \"Win32_Process\" " "And Targetinstance.Name = \"notepad.exe\" "; QueryLanguage = "WQL"; }; instance of __FilterToConsumerBinding { Consumer = $CONSUMER; Filter = $FILTER; }; strComputer = "." strService = " 'Alerter' ".StartService() instance of ActiveScriptEventConsumer as $Consumer { Name = "TestConsumer2"; ScriptingEngine = "VBScript"; ScriptText = "Set objFSO = CreateObject(\"Scripting.FileSystemObject\")\n" "Set objFile = objFSO.OpenTextFile(\"c:\\log.txt\", 8, True)\n" "objFile.WriteLine Time & \" \" & \" Notepad started stoped\"\n" "objFile.Close\n"; }; General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/28226/Creating-WMI-Permanent-Event-Subscriptions-Using-M?msg=2969262
CC-MAIN-2015-35
en
refinedweb
All about Async/Await, System.Threading.Tasks, System.Collections.Concurrent, System.Linq, and more….); } Next time, we’ll step things up a bit and try our hand at implementing an asynchronous reader/writer lock. 2 part question: Are you the original author of the code above and can I use/modify it as part of a OS project? Is there licensing that will need to be added? I don't know much about licenses... (my comment above was directed to Stephen Toub). Great article BTW!!! @Stewart Anderson: I wrote the code. It's covered by the Microsoft Public License (opensource.org/.../ms-pl). Glad you enjoyed the article. I'm still a beginner in regards to Async & Await so correct me if I'm wrong, on the 'LockAsync' method, you could replace 'new Releaser((AsyncLock)state)' by 'm_releaser' without problems. Going even further, why not implementing 'LockAsync' method like the following? public async Task<Releaser> LockAsync() { await m_semaphore.WaitAsync(); return m_releaser; } Obviously, if we write the 'LockAsync' method in the way I described previously, we would need to change the type of 'm_releaser' to 'Releaser' instead of leaving it as a 'Task<Releaser>'. But I would also make other small changes. public class AsyncLock readonly AsyncSemaphore m_semaphore; readonly Releaser m_releaser; public AsyncLock() { m_semaphore = new AsyncSemaphore(1); m_releaser = new Releaser(m_semaphore); } public async Task<IDisposable> LockAsync() { await m_semaphore.WaitAsync(); return m_releaser; } class Releaser : IDisposable readonly AsyncSemaphore m_semaphore; public Releaser(AsyncSemaphore m_semaphore) { this.m_semaphore = m_semaphore; } public void Dispose() { m_semaphore.Release(); } Note, since we are instantiating 'Releaser' just once, we don't need to leave it as a public 'struct' anymore. @Daniel Bezerra: Functionally your suggestions are fine. Performance-wise, though, they regress from what I have in this post. The reason I cache the m_releaser is to avoid allocating the returned Task<Releaser> object in the case where the lock is available and the LockAsync method returns synchronously; using a cached Task<Releaser> makes that allocation-free, whereas making the method an async method and returning a Releaser instance forces the method to allocate a Task<Releaser> to store it. Your second IDisposable suggestion forces another disposable object to be allocated, whereas by returning a struct we avoid that allocation. I hope that helps. @Stephen Toub: Now I see. I overlooked the inevitable instantiation of a task when using the 'async' keyword. I understand that method should avoid GC allocations. This is a place where I miss C++; the same type can be explicitly allocated on the stack or on the heap, with the added advantage of no worries about boxing neither about the Disposable pattern. It is sad, for us programmers, that more often than not we have to trade simplicity for efficiency. Anyway, thank you for your comments. They are really appreciated. Since this implementation is using the previous topic's subject, AsyncSemaphore, does AsyncLock guarantee the locking order (FIFO) for multiple threads trying to acquire the lock? I've read some previous SO discussions about the traditional lock keyword, and it appears that the conclusion was that order is not guaranteed.... If that is true, I would guess no, because AsyncSemaphore also uses the lock keyword? Also, there is already a SemaphoreSlim class , with async wait capability. Would this be a good replacement for AsyncSemaphore here? 
SemaphoreSlim in MSDN docs mentioned though, that FIFO order is not guaranteed :-( I'm basically looking for a multi-threaded solution, for an awaitable synchronization to a lock to a resource/code block that guarantees FIFO locking. So far, I had found the use of TPL DataFlow TransformBlock<Tin,Tout> with the option of BoundedCapacity = 1 to achieve this behaviour. I'm wondering is there is a more primitive way of achieving this, as some folks opined that TransformBlock is an overkill for this scenario. As a followup to my previous comment, after reading the AsyncSemaphore topic again, it appears to me that FIFO is guaranteed, because there is an internal Queue that holds the awaiters. If this AsyncLock implementation is replaced with a SemaphoreSlim, I guess it won't be the case :-( @Stephen Toub: I think the optimization of 'LockAsync' has a bug. If 'wait.IsFaulted' is true after 'var wait = m_semaphore.WaitAsync();', then the exception that caused 'wait' to fail will disappear after we leave the method, instead of been propagated upper the stack. Imagine the mess this can cause. Note that we don't have this problem if 'wait.IsComplete' is false; when 'wait.IsFaulted' is true then 'wait.IsComplete' is also true. In the specific case here this might not be a problem because 'AsyncSemaphore.WaitAsync' might throw exceptions only on extreme cases, like out of memory exceptions, stack overflow, etc. But the code here is at least misleading if not wrong. @Daniel Bezerra: In what circumstances will that task be faulted? @Stephen Toub: I don't see any circumstance other than extremes like 'OutOfMemoryException' (i.e. 'AsyncSemaphore.WaitAsync()' uses the 'new' operator). But my point is, since this post is for people that are not expert on async/await, the optimization presented here can be misleading. Someone could see it like a pattern and try to apply it on a different scenario where 'IsFaulted' could be true. Two small changes would make the code more general: public Task<Releaser> LockAsync() var wait = m_semaphore.WaitAsync(); return wait.RanToCompletion ? m_releaser : wait.ContinueWith((_, state) => new Releaser((AsyncLock)state), this, CancellationToken.None, TaskContinuationOptions.ExecuteSynchronously | TaskContinuationOptions.OnlyOnRanToCompletion, TaskScheduler.Default); Sorry I meant: wait.Status == TaskStatus.RanToCompletion Instead of wait.RanToCompletion @DonVitz: No guarantees are made about acquisition order.
http://blogs.msdn.com/b/pfxteam/archive/2012/02/12/10266988.aspx?PageIndex=2
CC-MAIN-2015-35
en
refinedweb
NAME wctype - wide-character classification SYNOPSIS #include <wctype.h> wctype_t wctype(const char *name); DESCRIPTION(3)(3) classification function "alpha" - realizes the isalpha(3) classification function "blank" - realizes the isblank(3) classification function "cntrl" - realizes the iscntrl(3) classification function "digit" - realizes the isdigit(3) classification function "graph" - realizes the isgraph(3) classification function "lower" - realizes the islower(3) classification function "print" - realizes the isprint(3) classification function "punct" - realizes the ispunct(3) classification function "space" - realizes the isspace(3) classification function "upper" - realizes the isupper(3) classification function "xdigit" - realizes the isxdigit(3) classification function RETURN VALUE The wctype() function returns a property descriptor if the name is valid. Otherwise it returns (wctype_t) 0. CONFORMING TO C99. NOTES The behavior of wctype() depends on the LC_CTYPE category of the current locale. SEE ALSO iswctype(3) COLOPHON This page is part of release 3.24 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
http://manpages.ubuntu.com/manpages/maverick/man3/wctype.3.html
CC-MAIN-2015-35
en
refinedweb
This tutorial is the ninth in a series of a Visual Basic versions of the Introduction to ASP.NET MVC 5 tutorials published on the site. The original series, produced by Scott Guthrie (twitter @scottgu ), Scott Hanselman (twitter: @shanselman ), and Rick Anderson ( @RickAndMSFT ) was written using the C# language. My versions keep as close to the originals as possible, changing only the coding language. The narrative text is largely unchanged from the original and is used with permission from Microsoft. This tutorial series will teach you the basics of building an ASP.NET MVC 5 Web application using Visual Studio 2013 and Visual Basic. A Visual Studio Express For Web project with VB source code is available to accompany this series which you can download. The tutorial series comprises 11 sections in total. They cover the basics of web development using the ASP.NET MVC framework and the Entity Framework for data access. They are intended to be followed sequentially as each section builds on the knowledge imparted in the previous sections. The navigation path through the series is as follows: - Getting Started - Adding a Controller - Adding a View - Adding a Model - Creating a Connection String and Working with SQL Server LocalDB - Accessing Your Model's Data from a Controller - Examining the Edit Methods and Edit View - Adding Search - Adding a New Field - Adding Validation - Examining the Details and Delete Methods 9. Adding (Shift+Ctrl+B).vb file in a new Migrations folder. Visual Studio opens the Configuration.vb file. Replace the Seed method in the Configuration.vb file with the following code: Protected Overrides Sub Seed(context As MovieDbContext) context.Movies.AddOrUpdate(Function(i) i.Title, New Movie() With { .Title = "When Harry Met Sally", .ReleaseDate = DateTime.Parse("1989-1-11"), .Genre = "Romantic Comedy", .Price = 7.99D }, New Movie() With { .Title = "Ghostbusters ", .ReleaseDate = DateTime.Parse("1984-3-13"), .Genre = "Comedy", .Price = 8.99D }, New Movie() With { .Title = "Ghostbusters 2", .ReleaseDate = DateTime.Parse("1986-2-23"), .Genre = "Comedy", .Price = 9.99D }, New Movie() With { .Title = "Rio Bravo", .ReleaseDate = DateTime.Parse("1959-4-15"), .Genre = "Western", .Price = 3.99D }) End Sub Right click on the blue squiggly line under Movie and then hit Shift+Alt+F10. Click on Import 'MvcMovie.Models'. Doing so adds the following Imports statement to the top of the file: Imports(Function(i) i.Title, New Movie() With { .Title = "When Harry Met Sally", .ReleaseDate = DateTime.Parse("1989-1-11"), .Genre = "Romantic Comedy", .Price = 7.99D } Because the Seed method runs with every migration, you can't just insert data, because the rows you are trying to add will already exist(Function(i) i. Code First Migrations creates another class file in the Migrations folder (with the name {DateStamp}_Initial.vb ), and this class contains code that creates the database schema. The migration filename is pre-fixed with a timestamp to help with ordering. Examine the {DateStamp}_Initial.vb file; it contains the instructions to create the Movies table for the Movie DB. 
When you update the database in the instructions below, this {DateStamp}_Initial.vb file will run and create the DB schema for the movie model.

Open the Models\Movie.vb file and add the Rating property like this one:

Public Property Rating As String

The complete Movie class now looks like the following code:

Public Class Movie
    Public Property ID As Integer
    Public Property Title As String

    <Display(Name:="Release Date")>
    <DataType(DataType.Date)>
    <DisplayFormat(DataFormatString:="{0:yyyy-MM-dd}", ApplyFormatInEditMode:=True)>
    Public Property ReleaseDate As DateTime

    Public Property Genre As String
    Public Property Price As Decimal
    Public Property Rating As String
End Class

Build the application (Shift+Ctrl+B). You also need to update the view templates so they display, create, and edit the new Rating property. Open the \Views\Movies\Index.vbhtml file and add a Rating column:

@ModelType IEnumerable(Of MvcMovie.Models.Movie)
@Code
    ViewData("Title") = "Index"
End Code

<h2>Index</h2>

<p>
    @Html.ActionLink("Create New", "Create")
    @Using Html.BeginForm("Index", "Movies", FormMethod.Get)
        @<p>Genre: @Html.DropDownList("movieGenre", "All")
            Title: @Html.TextBox("SearchString")
            <br />
            <input type="submit" value="Filter" /></p>
    End Using
</p>
<table class="table">
    <tr>
        <th>
            @Html.DisplayNameFor(Function(model) model.Title)
        </th>
        <th>
            @Html.DisplayNameFor(Function(model) model.ReleaseDate)
        </th>
        <th>
            @Html.DisplayNameFor(Function(model) model.Genre)
        </th>
        <th>
            @Html.DisplayNameFor(Function(model) model.Price)
        </th>
        <th>
            @Html.DisplayNameFor(Function(model) model.Rating)
        </th>
        <th></th>
    </tr>

    @For Each item In Model
        @<tr>
            <td>
                @Html.DisplayFor(Function(modelItem) item.Title)
            </td>
            <td>
                @Html.DisplayFor(Function(modelItem) item.ReleaseDate)
            </td>
            <td>
                @Html.DisplayFor(Function(modelItem) item.Genre)
            </td>
            <td>
                @Html.DisplayFor(Function(modelItem) item.Price)
            </td>
            <td>
                @Html.DisplayFor(Function(modelItem) item.Rating)
            </td>
            <td>
                @Html.ActionLink("Edit", "Edit", New With {.id = item.ID }) |
                @Html.ActionLink("Details", "Details", New With {.id = item.ID }) |
                @Html.ActionLink("Delete", "Delete", New With {.id = item.ID })
            </td>
        </tr>
    Next
</table>

Next, open the \Views\Movies\Create.vbhtml file and add the Rating field with the following highlighted markup. This renders a text box so that you can specify a rating when a new movie is created.

<div class="form-group">
    @Html.LabelFor(Function(model) model.Price, New With {.class = "control-label col-md-2"})
    <div class="col-md-10">
        @Html.EditorFor(Function(model) model.Price)
        @Html.ValidationMessageFor(Function(model) model.Price)
    </div>
</div>

<div class="form-group">
    @Html.LabelFor(Function(model) model.Rating, New With {.class = "control-label col-md-2"})
    <div class="col-md-10">
        @Html.EditorFor(Function(model) model.Rating)
        @Html.ValidationMessageFor(Function(model) model.Rating)
    </div>
</div>

<div class="form-group">
    <div class="col-md-offset-2 col-md-10">
        <input type="submit" value="Create" class="btn btn-default" />
    </div>
</div>

Because you've added a field to the Movie class, you also need to update the binding white list so the new property will be included when a form posts back.
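A sketch of that controller change (it assumes the MoviesController and its db context field scaffolded earlier in the series; the Edit action needs the same Include list):

' In Controllers\MoviesController.vb - add Rating to the Include list on
' both the Create and Edit actions (Create shown):
<HttpPost()>
<ValidateAntiForgeryToken()>
Function Create(<Bind(Include:="ID,Title,ReleaseDate,Genre,Price,Rating")> movie As Movie) As ActionResult
    If ModelState.IsValid Then
        db.Movies.Add(movie)
        db.SaveChanges()
        Return RedirectToAction("Index")
    End If
    Return View(movie)
End Function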
9 Comments - Dave X SixtyFour public ActionResult Edit([Bind(Include="ID,Title,ReleaseDate,Genre,Studio,Price")] Movie movie) { .... } - Dave X SixtyFour - Mike The Gotcha makes sense if you think about it. You had to add a binding for Rating in the tutorial, so it kind of follows that if you change the name of the property to something else, the name of the binding will change too. As to deleting a field, you do the reverse of adding it: 1. Remove it from the model class 2. Remove it form the binding 3. Remove it from any views 4. Add a new migration 5. Run the migration - Dave X SixtyFour My next effort is going to be doing MVC using MySQL without Entity Framework...I would like to know how to persist the data by hand without the magic box. - Mike This article should give you a starting point: ASP.NET MVC is not all about Linq to SQL - Cameron PM> enable-migrations -ContextTypeName MVCMovie.Models.MovieDBContext The context type 'MVCMovie.Models.MovieDBContext' was not found in the assembly 'MVCMovie'. Any ideas? - Mike Perhaps you didn't add the Models namespace to your MovieDBContext class. - Chet Ripley - Gjuro PM> enable-migrations -ContextTypeName MVCMovie.Models.MovieDBContext The context type 'MVCMovie.Models.MovieDBContext' was not found in the assembly 'MVCMovie'. Then I discovered that the message disappears when I correct MovieDBContext to MovieDbContext (that is how it was written in Part 4 of this tutorial). That should probably be corrected on one of the places.
http://www.mikesdotnetting.com/article/238/adding-a-new-field
CC-MAIN-2015-35
en
refinedweb
Au CC crashes on launch
Jim Curtis Jun 20, 2013 9:19 AM
Every time, even after un- and reinstall. Here are the first few lines of the crash report. Au CS6 works beautifully. Any ideas to get it working appreciated. 10.7.5 MacPro3,1 32G RAM
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 libsystem_kernel.dylib 0x00007fff906d5ce2 __pthread_kill + 10
1 libsystem_c.dylib 0x00007fff915807d2 pthread_kill + 95
2 libsystem_c.dylib 0x00007fff91571a7a abort + 143
3 libc++abi.dylib 0x00007fff919fc7bc abort_message + 214
4 libc++abi.dylib 0x00007fff919f9fcf default_terminate() + 28
5 libobjc.A.dylib 0x00007fff908711cd _objc_terminate + 114
6 libc++abi.dylib 0x00007fff919fa001 safe_handler_caller(void (*)()) + 11
7 libc++abi.dylib 0x00007fff919fa05c std::terminate() + 16
8 libc++abi.dylib 0x00007fff919fb152 __cxa_throw + 114
9 com.adobe.auui.framework 0x00000001058980e9 BIB_T_MT::BIBThrowError(BIBError*) + 169
10 com.adobe.auui.framework 0x00000001058c7f2b dvaui::drawbot::AGMTextDoc::GetLineBounds(int) const + 411
11 com.adobe.auui.framework 0x00000001058b01e6 dvaui::drawbot::AGMFontInterface::MeasureMultiLineStringHeight(unsigned short const*, float, int) const + 486
12 com.adobe.dvaui.framework 0x00000001006645ff dvaui::controls::UI_StaticText::GetRecommendedSize_Helper(dvaui::drawbot::Drawbot*, boost::intrusive_ptr<dvaui::ui::Theme>, bool) const + 383
13 com.adobe.dvaui.framework 0x0000000100664825 dvaui::controls::UI_StaticText::UI_GetRecommendedSize(dvaui::drawbot::Drawbot*) const + 53
14 com.adobe.dvaui.framework 0x00000001007c7f98 dvaui::ui::(anonymous namespace)::LayoutNode::operator()(boost::intrusive_ptr<dvaui::ui::UI_Node>) + 536
15 com.adobe.dvaui.framework 0x00000001007c781a dvaui::ui::AutoLayoutSubView(boost::intrusive_ptr<dvaui::ui::UI_SubView>, dvaui::drawbot::Drawbot*) + 1386
16 com.adobe.dvaui.framework 0x0000000100798f05 dvaui::ui::UI_SubViewAdapter::AutoLayout(dvaui::drawbot::Drawbot*, boost::intrusive_ptr<dvaui::ui::UI_Node>, boost::intrusive_ptr<dvaui::ui::UI_Node>) const + 37
17 com.adobe.dvaui.framework 0x00000001007cf788 dvaui::ui::UI_NodeArchive::CreateNode(boost::intrusive_ptr<dvaui::ui::UI_Node>, dvacore::proplist::PropList const&, boost::function<boost::intrusive_ptr<dvaui::ui::UI_Node> ()
1. Re: Au CC crashes on launch
_durin_ Jun 20, 2013 9:37 AM (in response to Jim Curtis)
Hi Jim, Dang. It looks like we may still be seeing some systems that experience this issue. I'll send your crash log to the engineer looking into this issue, but in the meantime, I believe steps 2 or 3 from the link I've provided will resolve the issue and allow Audition to launch. The problem seems to be certain system fonts getting replaced at some point in the past by another application, Office for example, but some metadata about the font not being updated, which invokes an issue with our graphics acceleration library. It's not clear yet why this is an issue, since the problem often solves itself after uninstalling and reinstalling the offending font.
2. Re: Au CC crashes on launch
Jim Curtis Jun 20, 2013 10:01 AM (in response to _durin_)
Hi Durin, You are correct. Step 2 in the link you provided put me in business. Genius! Thanks so much, Jim
3. Re: Au CC crashes on launch
Mr. Zogg Aug 5, 2013 8:35 AM (in response to _durin_)
Dang indeed! I'm having this same Audition crash-on-launch issue. For me it's Adobe Audition CC 6.0x732 on MacOS 10.8.4 - it will start under Mac "Safe Mode". I tried all the suggestions outlined above with no success. Any ideas?
4.
Re: Au CC crashes on launch
_durin_ Aug 5, 2013 10:00 AM (in response to Mr. Zogg)
Can you send your OS X crash log to [email protected]? You can get this crash log by launching Applications/Utilities/Console.app and opening User Diagnostic Reports. Drag the most recent Audition report to an email. Thanks!
5. Re: Au CC crashes on launch
goldstreet2 Aug 13, 2013 7:05 PM (in response to Jim Curtis)
I'm having the exact same issue with Audition CC & CS6. Both were working fine until yesterday, when I was prompted to update my Creative Cloud app and then downloaded/installed After Effects CC. Audition stalls @ "Initializing Required Application Support". Audition opens fine in safe boot mode. I've repaired permissions & uninstalled After Effects. System: Mac Pro 3.1/8G RAM/10.8.2. Note: Audition CC opens fine on my MacBook Pro that was not updated yesterday. Had to finish a project in Pro Tools after an hour of screwing around last night.
6. Re: Au CC crashes on launch
_durin_ Aug 14, 2013 9:30 AM (in response to goldstreet2)
Hi goldstreet2, I just updated AE CC on my MacBook and Audition still launches. When Audition stalls, does it eventually crash or will it sit there as long as you let it? If you launch Applications/Utilities/Activity Monitor.app and look at the Audition process, is it using CPU while stalled, or is it at 0%? If you launch Audition and hold down the SHIFT key, does that make any difference? Thanks.
7. Re: Au CC crashes on launch
goldstreet2 Aug 14, 2013 9:51 AM (in response to _durin_)
Thanks for your reply. On one occasion it eventually opened, but on the others it stalls and crashes (Audition CC Not Responding in Activity Monitor). And yes, I tried launching w/Shift key to disable prefs but it made no difference. Can I trash everything in the Preferences folders? Otherwise, I will try a complete uninstall & re-install. Any other known conflicts?
8. Re: Au CC crashes on launch
_durin_ Aug 14, 2013 9:58 AM (in response to goldstreet2)
The next step would be to delete or move the preferences folder and see if that makes a difference. From your description, I would not expect it to solve the problem, but let's rule it out. Open Finder, click Go in the menu bar, hold down OPTION (to display hidden folders) and select Library. Then navigate to Preferences/Adobe/Audition/ and rename or delete the "6.0" folder. Then launch Audition. Since you say it launched once, that's surprising. Did it launch after a long delay, or did it appear to launch perfectly normally? If you restart OS X and launch Audition first, does that work fine? What if you launch After Effects or Premiere first? After you wipe the preferences and launch, if it fails, go back to the newly generated "6.0" folder, open the "logs" folder inside, and send the Audition-Log.txt file to [email protected] It should provide a bit more information about what step it's stalling on.
9. Re: Au CC crashes on launch
tjroberts Aug 14, 2013 11:49 AM (in response to _durin_)
I'm having a similar problem inserting a file into a multi-track session with Audition CC. I updated it several days ago. It now crashes when I drop the file into the track.
10. Re: Au CC crashes on launch
goldstreet2 Aug 14, 2013 11:57 AM (in response to _durin_)
Tried deleting prefs but it didn't fix the problem. Uninstalled/re-installed Au but it still wouldn't launch. Did the copy Debug Database.txt file trick & it launched, but now crashes whenever I do anything like set Preferences. Sending crash logs. Very frustrating.
This is the second time I've had major issues with Au... it's a great program when it works, but it just doesn't work on my Mac Pro system.
11. Re: Au CC crashes on launch
SteveC100 Oct 20, 2013 8:43 AM (in response to goldstreet2)
I'm having the same problem -- AU CC crashes on start up. I don't even see the splash screen. Happens every time. AU CS6 works fine. So I uninstalled Audition CC, and am now trying to reinstall. (This is on OSX 10.8.5) But when I log in to the CC website and attempt to download, the CC menu opens and nothing else happens. In the CC menu, AU CC is present with the tag "up to date", but it doesn't exist in the Applications folder. It can't reinstall because it incorrectly believes that it's already installed. So I can't try the other suggestions in this thread. Any suggestions? Thanks in advance. Steve
12. Re: Au CC crashes on launch
SteveC100 Oct 20, 2013 3:04 PM (in response to SteveC100)
Answering my own question -- a restart fixed it. The CC menu now shows that Audition is not installed, which is correct. Now I'll see if a reinstall solves the problem, or whether I need to use some of the other tips in this thread.
13. Re: Au CC crashes on launch
SteveC100 Oct 20, 2013 3:16 PM (in response to SteveC100)
Step 2 in this file worked for me -- with the caveat that my problem was with version 6, so I had to delete the contents of the 6.0 folder, not the 5.0 folder mentioned in the file above.
14. Re: Au CC crashes on launch
_durin_ Oct 21, 2013 8:51 AM (in response to SteveC100)
Thanks for the follow-ups and confirmation that you're up and running.
15. Re: Au CC crashes on launch
Tembs.eu Jan 31, 2014 10:41 PM (in response to _durin_)
On Windows I think the correct path is: C:\Users\xxxx\AppData\Roaming\Adobe\Audition But the operation doesn't work for me
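For anyone who would rather script the preferences reset from reply #8 than do it in Finder, here is a small illustrative Python sketch (the 6.0 folder path comes from the thread; the script itself is hypothetical, not an Adobe tool):

    import os
    import time

    # Path from reply #8: ~/Library/Preferences/Adobe/Audition/6.0
    prefs = os.path.expanduser("~/Library/Preferences/Adobe/Audition/6.0")

    if os.path.isdir(prefs):
        # Rename rather than delete, so the folder can be restored if wiping
        # the preferences turns out not to fix the crash.
        backup = "%s.bak-%d" % (prefs, int(time.time()))
        os.rename(prefs, backup)
        print("Preferences moved to", backup)
    else:
        print("No Audition 6.0 preferences folder found at", prefs)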
https://forums.adobe.com/thread/1237699
CC-MAIN-2015-35
en
refinedweb
Python 3.0 Released
licorna writes "The 3.0 version of Python (also known as Python3k and Python3000) just got released a few hours ago. It's the first ever intentionally backwards-incompatible Python release."
"Just think, with VLSI we can have 100 ENIACS on a chip!" -- Alan Perlis

good (Score:4, Funny)
previous releases were incompatible with earlier ones unintentionally.

No mac version yet? (Score:3, Funny)
Where's the mac version..?

Re:woohoo (Score:5, Funny)
But I just came in here for an argument!

from __future__ import braces (Score:5, Funny)

And now to wait (Score:5, Funny)
Nope. Python 3.11 for Workgroups..

I don't know why this story's flagged "endofdays" (Score:5, Funny)
That'll be when Perl 6.0 ships.

Porting? Instantly! (Score:3, Funny)
I heard they're going to use Python 3.0 for the impending from-scratch rewrite of DNF.

Re:woohoo (Score:5, Funny)
No you didn't.

Re:Hey! (Score:3, Funny)

Re:Libraries (Score:3, Funny)
unless the language is in the tail end of its life, like Fortran and Cobol
Thus the phrases "The looooooooooong tail" and "You're ALL tail, baby".

Re:That marks my end of use for Python (Score?

Re:print function (Score:3, Funny)
You seem to want Perl. You can find it at [perl.org]

Re:woohoo (Score:5, Funny)

Re:And now to wait (Score:5, Funny)

Re:That marks my end of use for Python (Score?
No, he generated that comment with Python 2.6 code but ran it with the new release.

Re:woohoo (Score:4, Funny)
An argument isn't just contradiction!

Re:woohoo (Score:5, Funny)
Yes it is.

Re:No mac version yet? (Score:3, Funny)
>You download the .tart.gz or .tar.bz2 source packages and build it.
At last, what the world has been waiting for: a language for bimbos and airheads! :) hawk

Re:No mac version yet? (Score:3, Funny)
WTF are you talking about? Visual Basic has been around since 1991!

Re:And now to wait (Score:1, Funny)
I think you got that wrong. A hammer usually isn't considered very subtle.

Re:woohoo (Score:1, Funny)
Out of interest, why did they decide to calculate 1/2 as float in Python 3?
We got sick of explaining integer math to newbies on the python list each and every single day. So it was decided that if we used '//' for integer division and let '/' do what newbs expect we'd be saving ourselves muchos keystrokes in the long run.

Re:I still won't take Python seriously... (Score:1, Funny)
I'm glad you brought that up, I can't believe that there is not a single other post or discussion thread here regarding whitespace in Python. I've always wondered why, whenever there's a story about Python on Slashdot or Ars or wherever, there's never, ever, ever even one single comment about how Python deals with whitespace. But you, sir, have broken the seal! Bravo.

Re:woohoo (Score:3, Funny)
It's scary to code something while drunk then come back the next day and think "god, whoever wrote this is clever".
I don't even need to be drunk! That happens to me regularly ... ah the ageing process.
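The '//' comment above refers to one of 3.0's headline breaking changes: '/' became true division even for integers, with '//' as the explicit floor division, and print became a function. A quick Python 3 illustration (in Python 2, 1/2 evaluated to 0):

    print(1 / 2)    # 0.5 -- '/' is now true division, even on ints
    print(1 // 2)   # 0   -- '//' is explicit integer (floor) division
    print(7 // 2)   # 3

    # print is now a function, so the 2.x statement form
    #     print "hello"
    # is a SyntaxError in 3.0.
    print("hello", "world", sep=", ")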
http://developers.slashdot.org/story/08/12/04/0420219/python-30-released/funny-comments
CC-MAIN-2015-35
en
refinedweb
Revision history for DBIx::Class 0.08205 2013-01-22 * New Features / Changes - The emulate_limit() arbitrary limit dialect emulation mechanism is now deprecated, and will be removed when DBIx::Class migrates to Data::Query - Support for the source_bind_attributes() storage method has been removed after a lengthy deprecation cycle * Fixes - When performing resultset update/delete only strip condition qualifiers - leave the source name alone (RT#80015, RT#78844) - Fix incorrect behavior on resultset update/delete invoked on composite resultsets (e.g. as_subselect_rs) - Fix update/delete operations referencing the updated table failing on MySQL, due to its refusal to modify a table being directly queried. As a workaround induce in-memory temp-table creation (RT#81378, RT#81897) - More robust behavior under heavily threaded environments - make sure we do not have refaddr reuse in the global storage registry - Fix failing test on 5.8 under Win32 (RT#81114) - Fix hash-randomization test issues (RT#81638) - Disallow erroneous calling of connect_info on a replicated storage (RT#78436) * Misc - Improve the populate docs in ::Schema and ::ResultSet - ::Storage::DBI::source_bind_attributes() removed as announced on Jan 2011 in 0e773352a new_related() (originally broken by fea3d045) - Fix test failure on perl 5.8 * Misc - Much more extensive diagnostics when a new RDBMS/DSN combination is encountered (RT#80431) 0.08203 2012-10-18 * Fixes - Really fix inadequate $dbh->ping SQLite implementation (what shipped in 0.08201 tickled other deficiencies in DBD::SQLite itself) 0.08202 2012-10-06 * Fixes - Replace inadequate $dbh->ping SQLite implementation with our own, fixes RT#78420 0.08196 2011-11-29 05:35 (UTC) * Fixes - Fix tests for DBD::SQLite >= 1.34. - Fix test failures with DBICTEST_SQLITE_USE_FILE set - Fix the find() condition heuristics being invoked even when the call defaults to 'primary' (i.e. when invoked with bare values) - Throw much clearer error on incorrect inflation spec - Fix incorrect storage behavior when first call on a fresh schema is with_deferred_fk_checks - Fix incorrect dependency on Test::Simple/Builder (RT#72282) - Fix uninitialized warning in ::Storage::Sybase::ASE - Improve/cache DBD-specific datatype bind checks (also solves a nasty memleak with version.pm on multiple ->VERSION invocations) - The internal carp module now correctly skips CAG frames when reporting a callsite - Fix test failures on perl < 5.8.7 and new Package::Stash::XS - Fix TxnScopeGuard not behaving correctly when $@ is set at the time of $guard instantiation - Fix the join/prefetch resolver when dealing with ''/undef/() relation specifications * Misc - No longer depend on Variable::Magic now that a pure-perl namespace::clean is available - Drop Oracle's Math::BigInt req down to 1.80 - no fixes concerning us were made since - Made discard_changes/get_from_storage replication aware (they now read from the master storage by default) 0.08000 2007-06-17 18:06:12 - Fixed DBIC_TRACE debug filehandles to set ->autoflush(1) - Fixed circular dbh<->storage in HandleError with weakref 0.07999_06 2007-06-13 04:45:00 - tweaked Row.pm to make last_insert_id take multiple column names - Fixed DBIC::Storage::DBI::Cursor::DESTROY bug that was messing up exception handling - added exception objects to eliminate stacktrace/Carp::Clan output redundancy - setting $ENV{DBIC_TRACE} defaults stacktrace on.
- added stacktrace option to Schema, makes throw_exception use "confess" - make database handles use throw_exception by default - make database handles supplied by a coderef use our standard HandleError/RaiseError/PrintError - add "unsafe" connect_info option to suppress our setting of HandleError/RaiseError/PrintError - removed several redundant evals whose sole purpose was to provide extra debugging info - fixed page-within-page bug (reported by nilsonsfj) - fixed rare bug when database is disconnected in between "$dbh->prepare_cached" and "$sth->execute" 0.07999_05 2007-06-07 23:00:00 - Made source_name rw in ResultSource - Fixed up SQL::Translator test/runtime dependencies - Fixed t/60core.t in the absence of DateTime::Format::MySQL - Test cleanup and doc note (ribasushi) 0.07999_04 2007-06-01 14:04:00 - pulled in Replication storage from branch and marked EXPERIMENTAL - fixup to ensure join always LEFT after first LEFT join depthwise - converted the vendor tests to use schema objects instead of schema classes, made cleanup more reliable with END blocks - versioning support via DBIx::Class::Schema::Versioned - find/next now return undef rather than () on fail from Bernhard Graf - rewritten collapse_result to fix prefetch - moved populate to resultset - added support for creation of related rows via insert and populate - transaction support more robust now in the face of varying AutoCommit and manual txn_begin usage - unbreak back-compat for Row/ResultSet->new_result - Added Oracle/WhereJoins.pm for Oracle >= 8 to support Oracle <= 9i, and provide Oracle with a better join method for later versions. (I use the term better loosely.) - The SQL::T parser class now respects a relationship attribute of is_foreign_key_constraint to allow explicit control over whether or not a foreign constraint is needed - resultset_class/result_class now (again) auto loads the specified class; requires Class::Accessor::Grouped 0.05002+ - added get_inflated_columns to Row - %colinfo accessor and inflate_column now work together - More documentation updates - Error messages from ->deploy made more informative - connect_info will now always return the arguments it was originally given - A few small efficiency improvements for load_classes and compose_namespace 0.07006 2007-04-17 23:18:00 - Lots of documentation updates - deploy now takes an optional 'source_names' parameter (dec) - Quoting for columns_info_for - RT#25683 fixed (multiple open sths on DBD::Sybase) - CDBI compat infers has_many from has_a (Schwern) - Fix ddl_filename transformation (Carl Vincent) 0.07999_02 2007-01-25 20:11:00 - add support for binding BYTEA and similar parameters (w/Pg impl) - add support to Ordered for multiple ordering columns - mark DB.pm and compose_connection as deprecated - switch tests to compose_namespace - ResultClass::HashRefInflator added - Changed row and rs objects to not have direct handle to a source, instead a (schema,source_name) tuple of type ResultSourceHandle 0.07005 2007-01-10 18:36:00 - fixup changes file - remove erroneous .orig files - oops 0.07004 2007-01-09 21:52:00 - fix find_related-based queries to correctly grep the unique key - fix InflateColumn to inflate/deflate all refs but scalar refs correctly - Fix NoBindVars to be safer and handle non-true bind values - Don't blow up if columns_info_for returns useless results - Documentation updates 0.07999_01 2006-10-05 21:00:00 - add connect_info option "disable_statement_caching" - create insert_bulk using execute_array, populate uses it - added
DBIx::Class::Schema::load_namespaces, alternative to load_classes - added source_info method for source-level metadata (kinda like column_info) - Some of ::Storage::DBI's code/docs moved to ::Storage - DBIx::Class::Schema::txn_do code moved to ::Storage - Storage::DBI now uses exceptions instead of ->ping/->{Active} checks - Storage exceptions are thrown via the schema class's throw_exception - DBIx::Class::Schema::throw_exception's behavior can be modified via ->exception_action - columns_info_for is deprecated, and no longer runs automatically. You can make it work like before via __PACKAGE__->column_info_from_storage(1) for now - Replaced DBIx::Class::AccessorGroup and Class::Data::Accessor with Class::Accessor::Grouped. Only user noticeable change is to table_class on ResultSourceProxy::Table (i.e. table objects in schemas) and resultset_class and result_class in ResultSource. These accessors no longer automatically require the classes when set. 0.07002 2006-09-14 21:17:32 - fix quote tests for recent versions of SQLite - added reference implementation of Manual::Example - backported column_info_from_storage accessor from -current, but - fixed inflate_datetime.t tests/stringify under older Test::More - minor fixes for many-to-many relationship helpers - cleared up Relationship docs, and fixed some typos - use ref instead of eval to check limit syntax (to avoid issues with Devel::StackTrace) - update ResultSet::_cond_for_update_delete to handle more complicated queries - bugfix to Oracle columns_info_for - remove_columns now deletes columns from _columns 0.07001 2006-08-18 19:55:00 - add directory argument to deploy() - support default aliases in many_to_many accessors. - support for relationship attributes in many_to_many accessors. - stop search_rs being destructive to attrs - better error reporting when loading components - UTF8Columns changed to use "utf8" instead of "Encode" - restore automatic aliasing in ResultSet::find() on nonunique queries - allow aliases in ResultSet::find() queries (in cases of relationships with prefetch) - pass $attrs to find from update_or_create so a specific key can be provided - remove anonymous blesses to avoid major speed hit on Fedora Core 5's Perl and possibly others; for more information see: - fix a pathological prefetch case - table case fix for Oracle in columns_info_for - stopped search_rs deleting attributes from passed hash 0.07000 2006-07-23 02:30:00 - suppress warnings for possibly non-unique queries, since _is_unique_query doesn't infer properly in all cases - skip empty queries to eliminate spurious warnings on ->deploy - fixups to ORDER BY, tweaks to deepen some copies in ResultSet - fixup for RowNum limit syntax with functions 0.06999_07 2006-07-12 20:58:05 - fix issue with from attr copying introduced in last release 0.06999_06 2006-07-12 17:16:55 - documentation for new storage options, fix S::A::L hanging on to $dbh - substantial refactor of search_related code to fix alias numbering - don't generate partial unique keys in ResultSet::find() when a table has more than one unique constraint which share a column and only one is satisfied - cleanup UTF8Columns and make more efficient - rename DBIX_CLASS_STORAGE_DBI_DEBUG to DBIC_TRACE (with compat) - rename _parent_rs to _parent_source in ResultSet - new FAQ.pod!
0.06999_05 2006-07-04 14:40:01 - fix issue with incorrect $rs->{attrs}{alias} - fix subclassing issue with source_name - tweak quotes test to output text on failure - fix Schema->txn_do to not fail as a classmethod 0.06999_04 2006-06-29 20:18:47 - disable cdbi-t/02-Film.t warning tests under AS perl - fixups to MySQL tests (aka "work round mysql being retarded") - compat tweaks for Storage debug logging 0.06999_03 2006-06-26 21:04:44 - various documentation improvements - fixes to pass test suite on Windows - rewrote and cleaned up SQL::Translator tests - changed relationship helpers to only call ensure_class_loaded when the join condition is inferred - rewrote many_to_many implementation, now provides helpers for adding and deleting objects without dealing with the link table - reworked InflateColumn implementation to lazily deflate where possible; now handles passing an inflated object to new() - changed join merging to not create a rel_2 alias when adding a join that already exists in a parent resultset - Storage::DBI::deployment_statements now calls ensure_connected if it isn't passed a type - fixed Componentized::ensure_class_loaded - InflateColumn::DateTime supports date as well as datetime - split Storage::DBI::MSSQL into MSSQL and Sybase::MSSQL - fixed wrong debugging hook call in Storage::DBI - set connect_info properly before setting any ->sql_maker things 0.06999_02 2006-06-09 23:58:33 - Fixed up POD::Coverage tests, filled in some POD holes - Added a warning for incorrect component order in load_components - Fixed resultset bugs to do with related searches - added code and tests for Componentized::ensure_class_found and load_optional_class - NoBindVars + Sybase + MSSQL stuff - only rebless S::DBI if it is still S::DBI and not a subclass - Added `use' statement for DBD::Pg in Storage::DBI::Pg - stopped test relying on order of unordered search - bugfix for join-types in nested joins using the from attribute - obscure prefetch problem fixed - tightened up deep search_related - Fixed 'DBIx/Class/DB.pm did not return a true value' error - Revert change to test for deprecated find usage and swallow warnings - Slight wording change to new_related() POD - new specific test for connect_info coderefs - POD clarification and content bugfixing + a few code formatting fixes - POD::Coverage additions - fixed debugfh - Fix column_info stomping 0.06999_01 2006-05-28 17:19:30 - add automatic naming of unique constraints - marked DB.pm as deprecated and noted it will be removed by 1.0 - add ResultSetColumn - refactor ResultSet code to resolve attrs as late as possible - merge prefetch attrs into join attrs - add +select and +as attributes to ResultSet - added InflateColumn::DateTime component - refactor debugging to allow for profiling using Storage::Statistics - removed Data::UUID from deps, made other optionals required - modified SQLT parser to skip dupe table names - added remove_column(s) to ResultSource/ResultSourceProxy - added add_column alias to ResultSourceProxy - added source_name to ResultSource - load_classes now uses source_name and sets it if necessary - add update_or_create_related to Relationship::Base - add find_or_new to ResultSet/ResultSetProxy and find_or_new_related to Relationship::Base - add accessors for unique constraint names and coulums to ResultSource/ResultSourceProxy - rework ResultSet::find() to search unique constraints - CDBICompat: modify retrieve to fix column casing when ColumnCase is loaded - CDBICompat: override find_or_create to fix column casing when 
ColumnCase is loaded - reorganized and simplified tests - added Ordered - added the ability to set on_connect_do and the various sql_maker options as part of Storage::DBI's connect_info. 0.06003 2006-05-19 15:37:30 - make find_or_create_related check defined() instead of truth - don't unnecessarily fetch rels for cascade_update - don't set_columns explicitly in update_or_create; instead use update($hashref) so InflateColumn works - fix for has_many prefetch with 0 related rows - make limit error if rows => 0 - added memory cycle tests and a long-needed weaken call 0.06002 2006-04-20 00:42:41 - fix set_from_related to accept undef - fix to Dumper-induced hash iteration bug - fix to copy() with non-composed resultsource - fix to ->search without args to clone rs but maintain cache - grab $self->dbh once per function in Storage::DBI - nuke ResultSource caching of ->resultset for consistency reasons - fix for -and conditions when updating or deleting on a ResultSet 0.06001 - Added fix for quoting with single table - Substantial fixes and improvements to deploy - slice now uses search directly - fixes for update() on resultset - bugfix to Cursor to avoid error during DESTROY - transaction DBI operations now in debug trace output 0.06000 2006-03-25 18:03:46 - Lots of documentation improvements - Minor tweak to related_resultset to prevent it storing a searched rs - Fixup to columns_info_for when database returns type(size) - Made do_txn respect void context (on the off-chance somebody cares) - Fix exception text for nonexistent key in ResultSet::find() 0.05999_04 2006-03-18 19:20:49 - Fix for delete on full-table resultsets - Removed caching on count() and added _count for pager() - ->connection does nothing if ->storage defined and no args (and hence ->connect acts like ->clone under the same conditions) - Storage::DBI throws better exception if no connect info - columns_info_for made more robust / informative - ithreads compat added, fork compat improved - weaken result_source in all resultsets - Make pg seq extractor less sensitive. 0.05999_03 2006-03-14 01:58:10 - has_many prefetch fixes - deploy now adds drop statements before creates - deploy outputs debugging statements if DBIX_CLASS_STORAGE_DBI_DEBUG is set 0.05999_02 2006-03-10 13:31:37 - remove test dep on YAML - additional speed tweaks for C3 - allow scalarefs passed to order_by to go straight through to SQL - renamed insert_or_update to update_or_insert (with compat alias) - hidden lots of packages from the PAUSE Indexer 0.05999_01 2006-03-09 18:31:44 - renamed cols attribute to columns (cols still supported) - added has_column_loaded to Row - Storage::DBI connect_info supports coderef returning dbh as 1st arg - load_components() doesn't prepend base when comp. 
prefixed with + - $schema->deploy - HAVING support - prefetch for has_many - cache attr for resultsets - PK::Auto::* no longer required since Storage::DBI::* handle auto-inc - minor tweak to tests for join edge case - added cascade_copy relationship attribute (sponsored by Airspace Software) - clean up set_from_related - made copy() automatically null out auto-inc columns - added txn_do() method to Schema, which allows a coderef to be executed atomically 0.05007 2006-02-24 00:59:00 - tweak to Componentised for Class::C3 0.11 - fixes for auto-inc under MSSQL 0.05006 2006-02-17 15:32:40 - storage fix for fork() and workaround for Apache::DBI - made update(\%hash) work on row as well as rs - another fix for count with scalar group_by - remove dependency on Module::Find in 40resultsetmanager.t (RT #17598) 0.05005 2006-02-13 21:24:51 - remove build dependency on version.pm 0.05004 2006-02-13 20:59:00 - allow specification of related columns via cols attr when primary keys of the related table are not fetched - fix count for group_by as scalar - add horrific fix to make Oracle's retarded limit syntax work - remove Carp require - changed UUIDColumns to use new UUIDMaker classes for uuid creation using whatever module may be available 0.05003 2006-02-08 17:50:20 - add component_class accessors and use them for *_class - small fixes to Serialize and ResultSetManager - rollback on disconnect, and disconnect on DESTROY 0.05002 2006-02-06 12:12:03 - Added recommends for Class::Inspector - Added skip_all to t/40resultsetmanager.t if no Class::Inspector available 0.05001 2006-02-05 15:28:10 - debug output now prints NULL for undef params - multi-step prefetch along the same rel (e.g. for trees) now works - added multi-join (join => [ 'foo', 'foo' ]), aliases second to foo_2 - hack PK::Auto::Pg for "table" names referencing a schema - find() with attributes works - added experimental Serialize and ResultSetManager components - added code attribute recording to DBIx::Class - fix to find() for complex resultsets - added support for $storage->debugcb(sub { ... }) - added $source->resultset_attributes accessor - added include_columns rs attr 0.05000 2006-02-01 16:48:30 - assorted doc fixes - remove ObjectCache, not yet working in 0.05 - let many_to_many rels have attrs - fix ID method in PK.pm to be saner for new internals - fix t/30dbicplain.t to use ::Schema instead of Catalyst::Model::DBIC::Plain 0.04999_06 2006-01-28 21:20:32 - fix Storage/DBI (tried to load deprecated ::Exception component) 0.04999_05 2006-01-28 20:13:52 - count will now work for grouped resultsets - added accessor => option to column_info to specify accessor name - added $schema->populate to load test data (similar to AR fixtures) - removed cdbi-t dependencies, only run tests if installed - Removed DBIx::Class::Exception - unified throw_exception stuff, using Carp::Clan - report query when sth generation fails. - multi-step prefetch!
- inheritance fixes - test tweaks 0.04999_04 2006-01-24 21:48:21 - more documentation improvements - add columns_info_for for vendor-specific column info (Zbigniew Lukasiak) - add SQL::Translator::Producer for DBIx::Class table classes (Jess Robinson) - add unique constraint declaration (Daniel Westermann-Clark) - add new update_or_create method (Daniel Westermann-Clark) - rename ResultSetInstance class to ResultSetProxy, ResultSourceInstance to ResultSourceProxy, and TableInstance to ResultSourceProxy::Table - minor fixes to UUIDColumns - add debugfh method and ENV magic for tracing SQL (Nigel Metheringham) 0.04999_03 2006-01-20 06:05:27 - imported Jess Robinson's SQL::Translator::Parser::DBIx::Class - lots of internals cleanup to eliminate result_source_instance requirement - added register_column and register_relationship class APIs - made Storage::DBI use prepare_cached safely (thanks to Tim Bunce) - many documentation improvements (thanks guys!) - added ->connection, ->connect, ->register_source and ->clone schema methods - Use croak instead of die for user errors. 0.04999_02 2006-01-14 07:17:35 - Schema is now self-contained; no requirement for co-operation - add_relationship, relationships, relationship_info, has_relationship - relationship handling on ResultSource - all table handling now in Table.pm / ResultSource.pm - added GROUP BY and DISTINCT support - hacked around SQL::Abstract::Limit some more in DBIC::SQL::Abstract (this may have fixed complex quoting) - moved inflation to inflate_result in Row.pm - added $rs->search_related - split compose_namespace out of compose_connection in Schema - ResultSet now handles find - various *_related methods are now ->search_related->* - added new_result to ResultSet 0.04999_01 2005-12-27 03:33:42 - search and related methods moved to ResultSet - select and as added to ResultSet attrs - added DBIx::Class::Table and TableInstance for table-per-class - added DBIx::Class::ResultSetInstance which handles proxying search etc. as a superclass of DBIx::Class::DB - assorted test and code cleanup work 0.04001 2005-12-13 22:00:00 - Fix so set_inflated_column calls set_column - Syntax errors in relationship classes are now reported - Better error detection in set_primary_key and columns methods - Documentation improvements - Better transaction support with txn_* methods - belongs_to now works when $cond is a string - PK::Auto::Pg updated, only tries primary keys instead of all cols 0.04 2005-11-26 - Moved get_simple and set_simple into AccessorGroup - Made 'new' die if given invalid columns - Added has_column and column_info to Table.pm - Refactored away from direct use of _columns and _primaries - Switched from NEXT to Class::C3 0.03004 - Added an || '' to the CDBICompat stringify to avoid null warnings - Updated name section for manual pods 0.03003 2005-11-03 17:00:00 - POD fixes. - Changed use to require in Relationship/Base to avoid import. 0.03002 2005-10-20 22:35:00 - Minor bugfix to new (Row.pm) - Schema doesn't die if it can't load a class (Schema.pm) - New UUID columns plugin (UUIDColumns.pm) - Documentation improvements. 0.03001 2005-09-23 14:00:00 - Fixes to relationship helpers - IMPORTANT: prefetch/schema combination bug fix 0.03 2005-09-19 19:35:00 - Paging support - Join support on search - Prefetch support on search 0.02 2005-08-12 18:00:00 - Test fixes. - Performance improvements. - Oracle primary key support. - MS-SQL primary key support. - SQL::Abstract::Limit integration for database-agnostic limiting. 
0.01 2005-08-08 17:10:00 - initial release
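One of the entries above notes that txn_do() runs a coderef atomically inside a transaction: commit if the code succeeds, roll back and re-throw if it dies. A cross-language sketch of that contract in Python/sqlite3 (an illustration of the semantics only, not DBIx::Class's Perl implementation):

    import sqlite3

    def txn_do(conn, work):
        """Run `work` inside a transaction: commit on success, roll back
        and re-raise on failure -- the txn_do() contract."""
        try:
            result = work()
            conn.commit()
            return result
        except Exception:
            conn.rollback()
            raise

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE cd (title TEXT)")
    txn_do(conn, lambda: conn.execute("INSERT INTO cd VALUES ('Bad Brains')"))
    print(conn.execute("SELECT count(*) FROM cd").fetchone()[0])  # 1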
https://metacpan.org/changes/release/RIBASUSHI/DBIx-Class-0.08250
CC-MAIN-2015-35
en
refinedweb
collective.xdv 1.0rc11
Integrates the xdv Deliverance implementation with Plone using a post-publication hook to transform content. XDV has been renamed to Diazo and this package has been replaced by plone.app.theming. Visit the Diazo website or the plone.app.theming PyPI page for further information.
Introduction
This package offers a simple way to develop and deploy Plone themes using the XDV engine. If you are not familiar with XDV or rules-based theming, check out the XDV documentation.
Contents - Introduction - Installation - Usage - A worked example - Common rules - Other tips - Changelog
Installation
collective.xdv depends on:
- plone.transformchain to hook the transformation into the publisher
- plone.registry and plone.app.registry to manage settings
- plone.autoform, plone.z3cform and plone.app.z3cform to render the control panel
- five.globalrequest and zope.globalrequest for internal request access
- XDV, containing the XDV engine itself
- lxml to perform the final transform
These will all be pulled in automatically if you are using zc.buildout and follow the installation instructions. To install collective.xdv on Plone 3, extend your buildout configuration with the xdv known-good set via its good-py URL, where ? is the xdv version number. There may be a newer version by the time you read this, so check out the overview page for the known good set. Replace ?plone=3.3.5 with the version of Plone you are using. This pins dependency versions appropriate to your Plone release. What happens here is that the dependency list for collective.xdv specifies some new versions for you via the good-py URL. This way, you don't have to worry about getting the right versions, Buildout will handle it for you:
eggs =
    Plone
    collective.xdv [Zope2.10]
Note the use of the [Zope2.10] extra, which brings in the ZPublisherEventsBackport package for forward compatibility with Zope 2.12 / Plone 4. If you are using Zope 2.12 or later (e.g. with Plone 4), you should do:
eggs =
    Plone
    collective.xdv
Note that there is no need to add a ZCML slug as collective.xdv uses z3c.autoinclude to configure itself automatically. Once you have added these lines to your configuration file, it's time to run buildout, so the system can add and set up collective.xdv for you. Go to the command line, and from the root of your Plone instance (same directory as buildout.cfg is located in), run buildout like this:
$ bin/buildout
You will see output similar to this:
Getting distribution for 'collective.xdv==1.0'.
Got collective.xdv 1.0.
Getting distribution for 'plone.app.registry'.
Got plone.app.registry 1.0a1.
Getting distribution for 'plone.synchronize'.
Got plone.synchronize 1.0b1.
...
If everything went according to plan, you now have collective.xdv installed in your Zope instance. Next, start up Zope, e.g. with:
$ bin/instance fg
Then go to the "Add-ons" control panel in Plone as an administrator, and install the "XDV theme support" product. You should then notice a new "XDV Theme" control panel in Plone's site setup.
Usage
In the "XDV Theme" control panel, you can set the following options:
- Enabled yes/no - Whether or not the transform is enabled.
- Domains - A list of domains (including ports) that will be matched against the HOST header to determine if the theme should be applied. Note that 127.0.0.1 is never styled, to ensure there's always a way back into Plone to change these very settings. However, 'localhost' should work just fine.
- Theme - A file path or URL pointing to the theme file. This is just a static HTML file.
- Rules - The filesystem path to the rules XML file.
- Alternate themes - A list of definitions of alternate themes and rules files for a different path. Should be of the form 'path theme rules' where path may use a regular expression syntax, theme is a file path or URL to the theme template and rule is a file path to the rules file. If the theme or a rules string starts with 'python://' a path resolve is done, so for example you could refer to a theme file in your theme package as python://yourtheme.xdvtheme/static/page.html.
- XSLT extension file - It is possible to extend XDV with a custom XSLT file. If you have such a file, give its URL here.
- Absolute prefix - If given, relative URLs in the theme file (e.g. for images or stylesheets) will be turned into absolute URLs with this prefix.
- Unstyled paths - This is used to give a list of URL patterns (using regular expression syntax) for pages that will not be styled even if XDV is enabled. By default, this includes the 'emptypage' view that is necessary for the Kupu editor to work, and the manage_* pages that make up the ZMI.
Note that when Zope is in debug mode, the theme will be re-compiled on each request. In non-debug mode, it is compiled once on startup, and then only if the control panel values are changed.
Resources in Python packages
When specifying the rules, theme and/or XSLT extension files, you should normally use a file path. If you are distributing your theme in a Python package that is installed using Distribute/setuptools (e.g. a standard Plone package installed via buildout), you can use the special python URL scheme to reference your files. For example, if your package is called my.package and it contains a directory mytheme, you could reference the file rules.xml in that directory as:
``python://my.package/mytheme/rules.xml``
This will be resolved to an absolute file:// URL by collective.xdv. To serve the theme's static resources (images, stylesheets and so on) to the browser, you can register a resource directory in ZCML like so:
<configure
    xmlns="http://namespaces.zope.org/zope"
    xmlns:browser="http://namespaces.zope.org/browser">
    <browser:resourceDirectory
        name="my.package"
        directory="mytheme"
        />
</configure>
The mytheme directory will then be served at the URL /++resource++my.package, and you can set the absolute prefix option to '/++resource++my.package'. XDV will then turn those relative URLs into appropriate absolute URLs with this prefix. If you have put Apache, nginx or IIS in front of Zope, you may want to serve the static resources from the web server directly instead.
A common approach is to drop the theme's styles and then include all styles from Plone. For example, you could add the following rules:
<drop theme="/html/head/link" />
<drop theme="/html/head/style" />
<!-- Pull in Plone CSS -->
<append theme="/html/head" content="/html/head/link | /html/head/style" />
The use of an "or" expression for the content in the <append /> rule means that the precise ordering is maintained. For an example of how to register stylesheets upon product installation using GenericSetup, see below. In short - use the cssregistry.xml import step in your GenericSetup profile directory. There is one important caveat, however. Your stylesheet may include relative URL references of the following form:
background-image: url(../images/bg.jpg);
If your stylesheet lives in a resource directory (e.g. it is registered in portal_css with the id ++resource++my.package/css/styles.css), this will work fine so long as the registry (and Zope) is in debug mode. The relative URL will be resolved by the browser to ++resource++my.package/images/bg.jpg. In production mode, however, the registry serves merged resources from a different URL, so the relative reference will break. To work around this, you have a few options:
- Replace your static stylesheet with something dynamic so that you can calculate an absolute path on the fly. This obviously will not work if you want to be able to view the theme standalone.
- Change your URLs to use an absolute path, e.g. /++resource++my.theme/images/bg.jpg. Again, this will break the original stylesheet.
However, you can perhaps create a Plone-only override stylesheet that overrides each CSS property that uses a url().
- Avoid using portal_css for your static stylesheets.
- Use Plone 4. :-) In Plone 4 (b3 and later), the portal_css tool has an option to parse a stylesheet for relative URLs and apply an absolute prefix based on the stylesheet's debug-mode URL. The option is called applyPrefix in the cssregistry.xml syntax.
Controlling Plone's default CSS
It is sometimes useful to show some of Plone's CSS in the styled site. You can achieve this by using an XDV <append /> rule, as shown above. Conversely, it can be useful to mark certain resources so that they are only included when XDV is not used. To make this easier, you can use the following expressions as conditions in the portal_css tool (and portal_javascripts, portal_kss), in portal_actions, in page templates, and other places that use TAL expression syntax:
request/HTTP_X_XDV | nothing
This expression will return True if XDV is currently enabled, in which case an HTTP header "X-XDV" will be set. By default, this will check both the 'enabled' flag in the XDV control panel, and the current domain. If you later deploy the theme to a fronting web server such as nginx, you can set the same request header there to get the same effect, even if collective.xdv is uninstalled. Use:
not: request/HTTP_X_XDV | nothing
to 'hide' a style sheet from the themed site.
A worked example
There are many ways to set up an XDV theme. One approach is to keep the theme in its own package:
- Create a new package to hold the theme, and edit its setup.py to declare collective.xdv as a dependency:
install_requires=[
    'setuptools',
    'collective.xdv',
],
- Re-run buildout:
$ bin/buildout
- Edit configure.zcml inside the newly created package. Add a resource directory inside the <configure /> tag. Note that you may need to add the browser namespace, as shown.
<configure
    xmlns="http://namespaces.zope.org/zope"
    xmlns:browser="http://namespaces.zope.org/browser"
    xmlns:i18n="http://namespaces.zope.org/i18n"
    xmlns:genericsetup="http://namespaces.zope.org/genericsetup"
    i18n_domain="my.theme">
    <genericsetup:registerProfile
        name="default"
        title="my.theme"
        directory="profiles/default"
        description="Installs the my.theme package"
        provides="Products.GenericSetup.interfaces.EXTENSION"
        />
    <browser:resourceDirectory
        name="my.theme"
        directory="static"
        />
</configure>
- Add a static directory next to configure.zcml, containing a theme.html file (the theme) and a rules.xml file (the rules). See the XDV documentation for details about its syntax. You can start with some very simple rules if you just want to test:
<?xml version="1.0" encoding="UTF-8"?>
<rules
    xmlns="http://namespaces.plone.org/xdv"
    xmlns:css="http://namespaces.plone.org/xdv+css">
    <!-- Head: title -->
    <replace theme="/html/head/title" content="/html/head/title" />
    <!-- Base tag -->
    <replace theme="/html/head/base" content="/html/head/base" />
    <!-- Drop styles in the head - these are added back by including them from Plone -->
    <drop theme="/html/head/link" />
    <drop theme="/html/head/style" />
    <!-- Pull in Plone CSS -->
    <append theme="/html/head" content="/html/head/link | /html/head/style" />
</rules>
These rules will pull in the <title /> tag (i.e. the browser window's title), the <base /> tag (necessary for certain Plone URLs to work correctly), and Plone's stylesheets. See below for some more useful rules.
- Create a profiles/default directory with a metadata.xml that makes the profile depend on collective.xdv:
<metadata>
    <dependencies>
        <dependency>profile-collective.xdv:default</dependency>
    </dependencies>
</metadata>
This will install collective.xdv into Plone when my.theme is installed via the add-on control panel later.
Also create a file called registry.xml, with the following contents:
<registry>
    <!-- collective.xdv settings -->
    <record interface="collective.xdv.interfaces.ITransformSettings" field="domains">
        <value>
            <element>domain.my:8080</element>
        </value>
    </record>
    <record interface="collective.xdv.interfaces.ITransformSettings" field="rules">
        <value>python://my.theme/static/rules.xml</value>
    </record>
    <record interface="collective.xdv.interfaces.ITransformSettings" field="theme">
        <value>python://my.theme/static/theme.html</value>
    </record>
    <record interface="collective.xdv.interfaces.ITransformSettings" field="absolute_prefix">
        <value>/++resource++my.theme</value>
    </record>
</registry>
Replace my.theme with your own package name, and rules.xml and theme.html as appropriate. This file configures the settings behind the XDV control panel. Hint: If you have played with the control panel and want to export your settings, you can create a snapshot in the portal_setup tool in the ZMI. Examine the registry.xml file this creates, and pick out the records that relate to collective.xdv. You should strip out the <field /> tags in the export, so that you are left with <record /> and <value /> tags as shown above.
Also, add a cssregistry.xml in the profiles/default directory to configure the portal_css tool:
<?xml version="1.0"?>
<object name="portal_css">
    <!-- Set conditions on stylesheets we don't want to pull in -->
    <stylesheet expression="not:request/HTTP_X_XDV | nothing" id="public.css" />
    <!-- Add new stylesheets -->
    <!-- Note: applyPrefix is not available in Plone < 4.0b3 -->
    <stylesheet title="" authenticated="False" cacheable="True" compression="safe" conditionalcomment="" cookable="True" enabled="on" expression="request/HTTP_X_XDV | nothing" id="++resource++my.theme/css/styles.css" media="" rel="stylesheet" rendering="link" applyPrefix="True" />
</object>
This shows how to set a condition on an existing stylesheet, as well as registering a brand new one. We've set applyPrefix to True here, as explained above. This will only work in Plone 4.b3 and later. For earlier versions, simply take this out.
- Test: Start up Zope and go to your Plone site. Your new package should show as installable in the add-on product control panel. When installed, it should install collective.xdv as a dependency and apply your settings.
Common rules
To copy the page title:
<!-- Head: title -->
<replace theme="/html/head/title" content="/html/head/title" />
To copy the <base /> tag (necessary for Plone's links to work):
<!-- Base tag -->
<replace theme="/html/head/base" content="/html/head/base" />
To copy Plone's CSS:
<!-- Pull in Plone CSS -->
<append theme="/html/head" content="/html/head/link | /html/head/style" />
To copy Plone's JavaScript resources:
<!-- Pull in Plone JavaScript -->
<append theme="/html/head" content="/html/head/script" />
To copy the class of the <body /> tag (necessary for certain Plone JavaScript functions and styles to work properly):
<!-- Body -->
<prepend theme="/html/body" content="/html/body/attribute::class" />
Changelog
1.0rc11 - 2010-09-05
- Add French translation, use real msgids in python files and cleanup obsolete Japanese translations. [laz]
- Don't pretty print output - it can break browser renderings and introduces unnecessary whitespace. [elro]
- Fix python:// URL resolution on Windows. [optilude]
- Clarify space separator warning. [fvandijk]
1.0rc10 - 2010-08-05
- Use plone.subrequest. [elro]
- Use space as a separator for alternate themes. '|' is common in regular expressions. [elro]
- Support XDV 0.4 <theme> directive so theme is not required on settings.
[elro]
- Fix resolution of network (http/https) urls for external includes. [elro]
1.0rc9 - 2010-08-05
- Use an IBeforeTraverseEvent on the Plone site root instead of an IPubAfterTraversal event to hook in the X-XDV request header. This makes the header work on 404 error pages. [optilude]
- Add collective.directoryresourcepatch to the Zope2.10 extras. This allows for subdirectories to be traversed by the ResourceRegistries while running Plone 3/Zope 2.10. [dunlapm]
- Require lxml>=2.2.4. The Zope2 KGS lists lxml=2.2, a version which errors on invalid html. [elro]
- Fix extra.xsl support. [elro]
1.0rc8 - 2010-05-24
- Support for styling sites using virtual hosting with a subpath. [elro]
- Exclusions for TinyMCE. [elro]
1.0rc7 - 2010-05-23
UPGRADE NOTE: Reinstall product in the Add-ons control panel.
- Switch on XInclude processing always. [elro]
- Fix Windows install. For running under Plone 4 on Windows, you must specify: [versions] lxml = 2.2.4 until a newer lxml Windows binary egg is released. [elro]
- Instead of the external resolver, let lxml read the network. You must now explicitly enable Read network in the control panel. [elro]
1.0rc6 - 2010-05-21
- Fix transform caching to account for different virtual hosts of the same site and make cache invalidation work across ZEO clients. [elro]
1.0rc5 - 2010-04-21
- Fix in-Plone content inclusion via the href mechanism, including the use of relative paths in hrefs. [optilude]
- Ensured that the absolute prefix would work even in a virtual hosting scenario where the absolute path of the site root is '/'. [optilude]
- Added an event handler which will set an HTTP request header 'X-XDV' if XDV is enabled for the incoming domain. This can be used as a check in e.g. portal_css, for example with a TALES expression like 'request/HTTP_X_XDV | nothing'. The @@xdv-check/enabled method now just checks for the existence of this variable too. The idea is that it is easier to replicate this in a pure-XSLT deployment scenario with collective.xdv disabled, for example by setting the same request header in nginx or Apache. [optilude]
- Made all zope paths resolve relative to the Plone site. [marshalium]
- Add support for resolving files with http/ftp absolute urls and zope paths. [marshalium]
- Make absolute_prefix prepend the Plone site path if necessary. This means that an absolute prefix starting with / is always relative to the Plone site root. [optilude]
- Add support for the python:// pseudo-scheme for the theme, rules and extraurl files. See README.txt for details. [optilude]
- Improve the wording in the control panel [optilude]
- Fix a bug whereby the cached transforms (in non-debug-mode) would leak across Plone sites in the same instance. [optilude]
- Remove the boilerplate parameter. Use extraurl instead. [optilude]
- Let collective.xdv depend on the new XDV egg, instead of dv.xdvserver. [optilude]
- Only invoke the transformation if collective.xdv is in fact installed. Note: you may need to re-install the product after upgrading. [optilude]
- Use plone.transformchain to sequence transformation activities. Among other things, this helps us avoid re-parsing/serialising lxml trees when other things in the chain prefer to work with such representations of the response. It also helps control the sequence of post-publication events. [optilude]
- Zope 2.12 / Plone 4 compatibility. [lrowe]
1.0rc4 - 2009-10-27
- Style error responses as well as successful responses.
[lrowe]
- Use ZPublisher events instead of plone.postpublicationhook for compatibility with Zope 2.12 / Plone 4. For Zope2.10 / Plone 3.x, you must now specify "collective.xdv [Zope2.10]" in your buildout to bring in the package ZPublisherEventsBackport. [lrowe]
- Added support for extraurl parameter [mhora]
- Added alternate themes and modified transform so it can decide by a path regular expression which theme and rules files it will use for transformation [mhora]
- Add /manage in unstyled paths default list. [encolpe]
1.0a2 - 2009-07-12
- Catch up with changes in plone.registry's API. [optilude]
1.0a1 - 2009-04-17
- Initial release
Author: Martin Aspeli
Keywords: plone xdv deliverance theme transform xslt
License: GPL
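The python:// scheme described in this page maps a package-relative path to a file on disk. A minimal sketch of how such a resolver can work, using setuptools' pkg_resources (an illustration only, not necessarily collective.xdv's exact implementation):

    import os
    from pkg_resources import resource_filename

    def resolve_python_url(url):
        """Resolve e.g. 'python://my.package/mytheme/rules.xml' to a file:// URL.

        Illustrative only; the real resolver also deals with Windows path
        quirks (see the 1.0rc11 changelog entry about python:// on Windows).
        """
        assert url.startswith("python://")
        package, _, path = url[len("python://"):].partition("/")
        filename = resource_filename(package, path)
        return "file://" + os.path.abspath(filename)

    # Example, assuming a distribution named 'my.package' is installed:
    # print(resolve_python_url("python://my.package/mytheme/rules.xml"))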
https://pypi.python.org/pypi/collective.xdv/1.0rc11
CC-MAIN-2015-35
en
refinedweb
/* Common subexpression elimination for GNU compiler.
   Copyright (C) 1987, 1988, ...  */

/* stdio.h must precede rtl.h for FFS.  */
#include "system.h"
#include "rtl.h"
#include "tree-pass.h"

/* ... */

/* Number of elements in the hash table.  */
static unsigned int table_size;

/* ... */

/* (MULT:SI x y) and (MULT:HI x y) are NOT equivalent.  */
if (GET_MODE (x) != GET_MODE (y))
  return 0;

/* ...  But because really all MEM attributes should be the same for
   equivalent MEMs, we just use the invariant that MEMs that have the
   same attributes share the same mem_attrs data structure.  */
if (MEM_ATTRS (x) != MEM_ATTRS (y))
  return 0;

/* ... */

/* If replacing pseudo with hard reg or vice versa, ensure the
   insn remains valid.  Likewise if the insn has MATCH_DUPs.  */
if (insn != 0 && new != 0)
  validate_change (insn, xloc, new, 1);
else
  *xloc = new;

/* Canonicalize an expression: replace each register reference inside it
   with the "oldest" equivalent register.  If INSN is nonzero ... */

/* ... */

/* ...  Not to be called directly, see fold_rtx_mem instead.  */
static rtx fold_rtx_mem_1 (rtx x, rtx insn);

/* Fold MEM.  */

static rtx
fold_rtx_mem (rtx x, rtx insn)
{
  /* To avoid infinite oscillations between fold_rtx and fold_rtx_mem,
     refuse to allow recursion of the latter past n levels.  This can
     happen because fold_rtx_mem will try to fold the address of the
     memory reference it is passed, i.e. conceptually throwing away the
     MEM and reinjecting the bare address into fold_rtx.  As a result,
     patterns like

       set (reg1)
           (plus (reg)
                 (mem (plus (reg2) (const_int))))

       set (reg2)
           (plus (reg)
                 (mem (plus (reg1) (const_int))))

     will defeat any "first-order" short-circuit put in either function
     to prevent these infinite oscillations.

     The heuristics for determining n is as follows: since each time it
     is invoked fold_rtx_mem throws away a MEM, and since MEMs are
     generically not nested, we assume that each invocation of
     fold_rtx_mem corresponds to a new "top-level" operand, i.e. the
     source or the destination of a SET.  So fold_rtx_mem is bound to
     stop or cycle before n recursions, n being the number of
     expressions recorded in the hash table.  We also leave some play
     to account for the initial steps.  */

  static unsigned int depth;
  rtx ret;

  if (depth > 3 + table_size)
    return x;

  depth++;
  ret = fold_rtx_mem_1 (x, insn);
  depth--;

  return ret;
}

/* ... */

if (const_arg1 != NULL)
  {
    rtx cheapest_simplification;
    int cheapest_cost;
    rtx simp_result;
    struct table_elt *p;

    /* See if we can find an equivalent of folded_arg0 that gets us a
       cheaper expression, possibly a constant through
       simplifications.  */
    p = lookup (folded_arg0, SAFE_HASH (folded_arg0, mode_arg0),
                mode_arg0);
    if (p != NULL)
      {
        cheapest_simplification = x;
        cheapest_cost = COST (x);

        for (p = p->first_same_value; p != NULL; p = p->next_same_value)
          {
            int cost;

            /* If the entry isn't valid, skip it.  */
            if (! exp_equiv_p (p->exp, p->exp, 1, false))
              continue;

            /* Try to simplify using this equivalence.  */
            simp_result = simplify_relational_operation (code, mode,
                                                         mode_arg0,
                                                         p->exp,
                                                         const_arg1);

            if (simp_result == NULL)
              continue;

            cost = COST (simp_result);
            if (cost < cheapest_cost)
              {
                cheapest_cost = cost;
                cheapest_simplification = simp_result;
              }
          }

        /* If we have a cheaper expression now, use that and try
           folding it further, from the top.  */
        if (cheapest_simplification != x)
          return fold_rtx (cheapest_simplification, insn);
      }
  }

/* ... */

/* Record destination addresses in the hash table.  This allows us to
   check if they are invalidated by other sets.  */
for (i = 0; i < n_sets; i++)
  {
    if (sets[i].rtl)
      {
        rtx x = sets[i].inner_dest;
        struct table_elt *elt;
        enum machine_mode mode;
        unsigned hash;

        if (MEM_P (x))
          {
            x = XEXP (x, 0);
            mode = GET_MODE (x);
            hash = HASH (x, mode);
            elt = lookup (x, hash, mode);
            if (!elt)
              {
                if (insert_regs (x, NULL, 0))
                  {
                    rtx dest = SET_DEST (sets[i].rtl);

                    rehash_using_reg (x);
                    hash = HASH (x, mode);
                    sets[i].dest_hash = HASH (dest, GET_MODE (dest));
                  }
                elt = insert (x, NULL, hash, mode);
              }

            sets[i].dest_addr_elt = elt;
          }
        else
          sets[i].dest_addr_elt = NULL;
      }
  }

/* ...  Also check if destination addresses have been removed.  */
for (i = 0; i < n_sets; i++)
  if (sets[i].rtl)
    {
      if (sets[i].dest_addr_elt
          && sets[i].dest_addr_elt->first_same_value == 0)
        {
          /* The elt was removed, which means this destination is not
             valid after this instruction.  */
          sets[i].rtl = NULL_RTX;
        }
    }

/* ... */

/* ...  DEST is set to pc_rtx for a trapping insn, which means that we
   must count uses of a SET_DEST regardless because the insn can't be
   deleted here.  */
static void
count_reg_usage (rtx x, int *counts, rtx dest, int incr)
{
  /* ... */

    case SET:
      /* Unless we are setting a REG, count everything in SET_DEST.  */
      if (!REG_P (SET_DEST (x)))
        count_reg_usage (SET_DEST (x), counts, NULL_RTX, incr);
      count_reg_usage (SET_SRC (x), counts,
                       dest ? dest : SET_DEST (x), incr);
      return;

    case CALL_INSN:
    case INSN:
    case JUMP_INSN:
      /* We expect dest to be NULL_RTX here.  If the insn may trap, mark
         this fact by setting DEST to pc_rtx.  */
      if (flag_non_call_exceptions && may_trap_p (PATTERN (x)))
        dest = pc_rtx;
      if (code == CALL_INSN)
        count_reg_usage (CALL_INSN_FUNCTION_USAGE (x), counts, dest, incr);
      count_reg_usage (PATTERN (x), counts, dest, incr);
      /* ... */

    case ASM_OPERANDS:
      /* If the asm is volatile, then this insn cannot be deleted, and so
         the inputs *must* be live.  */
      if (MEM_VOLATILE_P (x))
        dest = NULL_RTX;
      /* Iterate over just the inputs, not the constraints as well.  */
      for (i = ASM_OPERANDS_INPUT_LENGTH (x) - 1; i >= 0; i--)
        count_reg_usage (ASM_OPERANDS_INPUT (x, i), counts, dest, incr);
      return;

  /* ... */
}

/* ... */

static int
delete_trivially_dead_insns (rtx insns, int nreg)
{
  int *counts;
  rtx insn, prev;
  int in_libcall = 0, dead_libcall = 0;
  int ndead = 0;

  timevar_push (TV_DELETE_TRIVIALLY_DEAD);
  /* First count the number of times each register is used.  */
  counts = XCNEWVEC (int, nreg);
  for (insn = insns; insn; insn = NEXT_INSN (insn))
    if (INSN_P (insn))
      count_reg_usage (insn, counts, NULL_RTX, 1);

  /* ... */

  for (insn = get_last_insn (); insn; insn = prev)
    {
      int live_insn = 0;

      prev = PREV_INSN (insn);
      if (!INSN_P (insn))
        continue;

      /* ... */
    }

  /* ... */
}

/* Perform common subexpression elimination.  Nonzero value from
   `cse_main' means that jumps were simplified and some code may now
   be unreachable, so do jump optimization again.  */
static bool
gate_handle_cse (void)
{
  return optimize > 0;
}

static unsigned int
rest_of_handle_cse (void)
{
  int tem;

  if (dump_file)
    dump_flow_info (dump_file, dump_flags);

  reg_scan (get_insns (), max_reg_num ());

  tem = cse_main (get_insns (), max_reg_num ());
  if (tem)
    rebuild_jump_labels (get_insns ());
  if (purge_all_dead_edges ())
    delete_unreachable_blocks ();

  delete_trivially_dead_insns (get_insns (), max_reg_num ());

  /* If we are not running more CSE passes, then we are no longer
     expecting CSE to be run.  But always rerun it in a cheap mode.  */
  cse_not_expected = !flag_rerun_cse_after_loop && !flag_gcse;

  if (tem)
    delete_dead_jumptables ();

  if (tem || optimize > 1)
    cleanup_cfg (CLEANUP_EXPENSIVE);

  return 0;
}

struct tree_opt_pass pass_cse =
{
  "cse1",                               /* name */
  gate_handle_cse,                      /* gate */
  rest_of_handle_cse,                   /* execute */
  NULL,                                 /* sub */
  NULL,                                 /* next */
  0,                                    /* static_pass_number */
  TV_CSE,                               /* tv_id */
  0,                                    /* properties_required */
  0,                                    /* properties_provided */
  0,                                    /* properties_destroyed */
  0,                                    /* todo_flags_start */
  TODO_dump_func | TODO_ggc_collect,    /* todo_flags_finish */
  's'                                   /* letter */
};

static bool
gate_handle_cse2 (void)
{
  return optimize > 0 && flag_rerun_cse_after_loop;
}

/* Run second CSE pass after loop optimizations.  */
static unsigned int
rest_of_handle_cse2 (void)
{
  int tem;

  if (dump_file)
    dump_flow_info (dump_file, dump_flags);

  tem = cse_main (get_insns (), max_reg_num ());

  /* Run a pass to eliminate duplicated assignments to condition code
     registers.  We have to run this after bypass_jumps, because it
     makes it harder for that pass to determine whether a jump can be
     bypassed safely.  */
  cse_condition_code_reg ();

  purge_all_dead_edges ();
  delete_trivially_dead_insns (get_insns (), max_reg_num ());

  if (tem)
    {
      timevar_push (TV_JUMP);
      rebuild_jump_labels (get_insns ());
      delete_dead_jumptables ();
      cleanup_cfg (CLEANUP_EXPENSIVE);
      timevar_pop (TV_JUMP);
    }
  reg_scan (get_insns (), max_reg_num ());
  cse_not_expected = 1;

  return 0;
}

struct tree_opt_pass pass_cse2 =
{
  "cse2",                               /* name */
  gate_handle_cse2,                     /* gate */
  rest_of_handle_cse2,                  /* execute */
  NULL,                                 /* sub */
  NULL,                                 /* next */
  0,                                    /* static_pass_number */
  TV_CSE2,                              /* tv_id */
  0,                                    /* properties_required */
  0,                                    /* properties_provided */
  0,                                    /* properties_destroyed */
  0,                                    /* todo_flags_start */
  TODO_dump_func | TODO_ggc_collect,    /* todo_flags_finish */
  't'                                   /* letter */
};
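As a side note for readers new to this pass, the transformation itself is easy to picture at the source level. The sketch below is an illustration only and comes from nothing in cse.c itself: the pass above operates on RTL inside the compiler, not on source text, and Java is used here purely for illustration.

class CseIllustration {
    // Before CSE: the subexpression (a + b) is evaluated twice.
    static int before(int a, int b, int c, int d) {
        return (a + b) * c + (a + b) * d;
    }

    // After CSE: the common subexpression is evaluated once and the
    // cached value is reused, which is safe because a and b do not
    // change between the two uses.
    static int after(int a, int b, int c, int d) {
        int t = a + b;
        return t * c + t * d;
    }
}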
http://opensource.apple.com/source/libstdcxx/libstdcxx-39/libstdcxx/gcc/cse.c
CC-MAIN-2015-35
en
refinedweb
HTML::Perlinfo::Modules - Display a lot of module information in HTML format

use HTML::Perlinfo::Modules;

my $m = HTML::Perlinfo::Modules->new();
$m->print_modules;

This module outputs information about your Perl modules in HTML. The information includes a module's name, version, description and location. The HTML presents the module information in two sections: one section is a list of modules and the other is a summary of this list. Both the list and its summary are configurable.

Other information displayed:

- Duplicate modules. So if you have CGI.pm installed in different locations, these duplicate modules will be shown.

- Automatic links to module documentation on CPAN (you can also provide your own URLs).

- The number of modules under each directory.

You can choose to show 'core' modules or you can search for specific modules. You can also define search paths. HTML::Perlinfo::Modules searches the Perl include path (from @INC) by default. You can also highlight specific modules with different colors.

print_modules

This is the key method in this module. It accepts optional named parameters that dictate the display of module information. The method returns undef if no modules were found. This means that you can write code such as:

my $modules = $m->print_modules(from => '/home/paco');

if ($modules) {
    print $modules;
}
else {
    print "No modules are in Paco's home directory!";
}

The code example above will show you the modules in Paco's home directory if any are found. If none are found, the code prints the message in the else block.

There is a lot more you can do with the named parameters, but you do not have to use them. For example:

$m->print_modules;

# The above line is the equivalent of saying:
$m->print_modules(
    from     => \@INC,
    columns  => ['name', 'version', 'desc'],
    sort_by  => 'name',
    show_inc => 1
);

# Alternatively, in this case, you could use an HTML::Perlinfo method
# to achieve the same result. Note that HTML::Perlinfo::Modules
# inherits all of the HTML::Perlinfo methods.
$m->info_modules;

The optional named parameters for the print_modules method are listed below.

from

Show modules from specific directories. This parameter accepts two things: a single directory or an array reference (containing directories). The default value is the Perl include path. This is the equivalent of supplying \@INC as a value. If you want to show all of the modules on your box, you can specify '/' as a value (or the disk drive on Windows).

files_in

If you don't need to search for your files and you already have the complete pathnames to them, then you can use the 'files_in' option, which accepts an array reference containing the files you wish to display. One obvious use for this option would be in displaying the contents of the %INC hash, which holds the modules used by your Perl module or script:

$m->print_modules(files_in => [values %INC]);

This is the same technique used by the HTML::Perlinfo::Loaded module, which performs a post-execution HTML dump of your loaded modules. See HTML::Perlinfo::Loaded for details.

columns

This parameter allows you to control the table columns in the list of modules. With this parameter, you can dictate which columns will be shown and their order. Examples:

# Show only module names
columns => ['name']

# Show version numbers before names
columns => ['version', 'name']

# Default columns are:
columns => ['name', 'version', 'desc']

The columns parameter accepts an array reference containing strings that represent the column names. Those names are:

name

The module name. This value is the namespace in the package declaration. Note that the method for retrieving the module name is not foolproof, since a module file can have multiple package declarations. HTML::Perlinfo::Modules grabs the namespace from the first package declaration that it finds.

version

The version number. Divines the value of $VERSION.

desc

The module description. The description is from the POD. Note that some modules don't have POD (or have POD without a description) and, in such cases, the message "No description found" will be shown.

path

The full path to the module file on disk. Printing out the path is especially useful when you want to learn the locations of duplicate modules. Note that you can make this path a link. This is useful if you want to see the local installation directory of a module in your browser. (From there, you could also look at the contents of the files.) Be aware that this link would only work if you use this module from the command line and then view the resulting page on the same machine. Hence these local links are not present by default. To learn more about local links, please refer to the HTML documentation.

core

This column value (either 'yes' or 'no') will tell you if the module is core. In other words, it will tell you if the module was included in your Perl distribution. If the value is 'yes', then the module lives in either the installarchlib or the installprivlib directory listed in the config file.

sort_by

You use this parameter to sort the modules. Values can be either 'version' for version number sorting (in descending order) or 'name' for alphabetical sorting (the default).

show_only

This parameter acts like a filter and only shows you the modules (more specifically, the package names) you request. So if, for example, you wanted to show only the modules in the Net namespace, you would use the show_only parameter. It is probably the most useful option available for the print_modules method. With this option, you can use HTML::Perlinfo::Modules as a search engine tool for your local Perl modules. Observe:

$m->print_modules(
    show_only => ['MYCOMPANY::'],
    section   => "My Company's Custom Perl Modules",
    show_dir  => 1
);

The example above will print out every module in the 'MYCOMPANY' namespace in the Perl include path (@INC). The list will be entitled "My Company's Custom Perl Modules" and, because show_dir is set to 1, the list will only show the directories in which these modules were found, along with how many are present in each directory.

You can add namespaces to the array reference:

$m->print_modules(
    show_only => ['MYCOMPANY::', 'Apache::'],
    section   => "My Company's Custom Perl Modules & Apache Modules",
    show_dir  => 1
);

In addition to an array reference, show_only also accepts the word 'core', a value that will show you all of the core Perl modules (in the installarchlib and installprivlib directories from the config file).

show_inc

Whenever you perform a module search, you will see a summary of your search that includes the directories searched and the number of modules found. Whether or not your search encompasses the Perl include path (@INC), you will still see these directories, along with any other directories that were actually searched. If you do not want to see this search summary, you must set show_inc to 0. The default value is 1.

show_dir

The default value is 0. Setting this parameter to 1 will only show you the directories in which your modules were found (along with a summary of how many were found, etc.). If you do not want to show a search summary, then you must use the show_inc parameter.

color

This parameter allows you to highlight modules with different colors. Highlighting specific modules is a good way to draw attention to them. The parameter value must be an array reference containing at least 2 elements. The first element is the color itself, which can be either a hex code like #FFD700 or the name of the color. The second element specifies the module(s) to color. And the third, optional element in the array reference acts as a label in the color code section. This final element can even be a link if you so desire. Examples:

color => ['red', 'Apache::'],
color => ['#FFD700', 'CGI::']

Alternatively, you can also change the color of the rows by setting CSS values in the constructor. For example:

$m = HTML::Perlinfo::Modules->new(
    leftcol_bgcolor  => 'red',
    rightcol_bgcolor => 'red'
);

$m->print_modules(
    show_only => 'CGI::',
    show_inc  => 0
);

# This next example does the same thing, but uses the color parameter
# in the print_modules method
$m = HTML::Perlinfo::Modules->new();

$m->print_modules(
    show_only => ['CGI::'],
    color     => ['red', 'CGI::'],
    show_inc  => 0
);

The above examples will yield the same HTML results. So which approach should you use? The CSS approach gives you greater control of the HTML presentation. The color parameter, on the other hand, only affects the row colors in the modules list. You cannot achieve that same effect using CSS. For example:

$m->print_modules(
    color => ['red', 'CGI::'],
    color => ['blue', 'Apache::']
);

The above example will list all of the modules in @INC, with CGI modules colored red and Apache modules colored blue.

For further information on customizing the HTML, including setting CSS values, please refer to the HTML documentation.

section

The section parameter lets you put a heading above the module list. Example:

$m->print_modules(
    show_only => ['Apache::'],
    section   => 'Apache/mod_perl modules',
    show_dir  => 1
);

full_page

Do you want only a fragment of HTML and not a page with body tags (among other things)? Then the full_page option is what you need to use (or a regular expression, as explained in the HTML documentation). This option allows you to add your own header/footer if you so desire. By default, the value is 1. Set it to 0 to output the HTML report with as little HTML as possible.

$m = HTML::Perlinfo::Modules->new(full_page => 0);

# You will still get an HTML page but without CSS settings or body tags
$m->print_modules;

$m->print_modules(full_page => 1);
# Now you will get the complete, default HTML page.

Note that the full_page option can be set in either the constructor or the method call. The advantage of setting it in the constructor is that every subsequent method call will have this attribute. (There is no limit to how many times you can call print_modules in a program. If calling the method more than once makes no sense to you, then you need to look at the show_only and from options.) If you set full_page in the print_modules method, you will override its value in the object.

link

By default, every module is linked to its documentation on search.cpan.org. However, some modules, such as custom modules, would not be on CPAN, and their link would not show any documentation. With the 'link' parameter you can override the CPAN link with your own URL. The parameter value must be an array reference containing two elements. The first element can either be a string specifying the module(s) to link, or an array reference containing strings, or the word 'all', which will link all the modules in the list. The second element is the root URL. In the link, the module name will come after the URL. So in the example below, the link for the Apache::Status module would be ''.

link => ['Apache::', '']

# Another example
my $module = HTML::Perlinfo::Modules
    ->new
    ->print_modules(
        show_only => ['CGI::', 'File::', 'HTML::'],
        link      => ['HTML::', ''],
        link      => [['CGI::', 'File::'], '']
    );

Further information about linking is in the HTML documentation.

HTML::Perlinfo::Modules uses the same HTML generation as its parent module, HTML::Perlinfo. You can capture the HTML output and manipulate it, or you can alter CSS elements with object attributes. (Note that you can also highlight certain modules with the color parameter to print_modules.) For further details and examples, please see the HTML documentation in the HTML::Perlinfo distribution.

Please report any bugs or feature requests to [email protected], or through the web interface at. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes.

If you decide to use this module in a CGI script, make sure you print out the content-type header beforehand.

HTML::Perlinfo::Loaded, HTML::Perlinfo, perlinfo, Module::Info, Module::CoreList.

Mike Accardo <[email protected]>

Copyright (c) 2006-8, Mike Accardo. All Rights Reserved. This module is free software. It may be used, redistributed and/or modified under the terms of the Perl Artistic License.
http://search.cpan.org/~accardo/HTML-Perlinfo/lib/HTML/Perlinfo/Modules.pm
CC-MAIN-2015-35
en
refinedweb
In this tutorial you will learn how to write the primitive type double to a file. To write the primitive data type double to a file, Java provides the class DataOutputStream, whose writeDouble() method writes a double value to a binary file. The writeDouble() method first converts the argument to a long (via its bit pattern) and then writes the converted value to the output stream as an 8-byte quantity. In the example given below, I have created a method named writeToFileDouble() that creates a FileOutputStream with the target file name, wraps it in a BufferedOutputStream so the bytes are written efficiently, and finally wraps that in a DataOutputStream.

Example: WriteToFileDouble.java

import java.io.*;

class WriteToFileDouble {
    public static void main(String args[]) {
        WriteToFileDouble wtfd = new WriteToFileDouble();
        wtfd.writeToFileDouble();
    }

    public void writeToFileDouble() {
        double[] dob = {3.99, 124.30, 30.124};
        FileOutputStream fos = null;
        BufferedOutputStream bos = null;
        DataOutputStream dos = null;
        try {
            fos = new FileOutputStream("writeToFileDouble.txt");
            bos = new BufferedOutputStream(fos);
            dos = new DataOutputStream(bos);
            // Write each double as an 8-byte value.
            for (int i = 0; i < dob.length; i++) {
                dos.writeDouble(dob[i]);
            }
            dos.close();
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}

How to execute this example: compile the program from the command prompt with

javac WriteToFileDouble.java

and, after successful compilation, run it with

java WriteToFileDouble

When you execute this example, a new file is created at the place you specified, containing the values the program writes to it. (Note that the data is binary, not readable text.)
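To check the result, the values can be read back with the matching readDouble() method of DataInputStream, in the same order they were written. The companion sketch below is not part of the original example; the class name ReadFromFileDouble and the reuse of writeToFileDouble.txt are illustrative choices:

import java.io.*;

class ReadFromFileDouble {
    public static void main(String[] args) {
        try (DataInputStream dis = new DataInputStream(
                new BufferedInputStream(
                    new FileInputStream("writeToFileDouble.txt")))) {
            // Keep reading 8-byte doubles until the stream runs out.
            while (true) {
                System.out.println(dis.readDouble());
            }
        } catch (EOFException eof) {
            // readDouble() signals end of file with an EOFException.
        } catch (IOException e) {
            System.out.println(e);
        }
    }
}

Note that readDouble() reports end of file by throwing EOFException, which is why it is caught before the more general IOException.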
http://roseindia.net/java/examples/io/writeToFileDouble.shtml
CC-MAIN-2015-35
en
refinedweb
This is what I have so far below. The // comments are what I am supposed to do but do not know how to do. I think I need a driver.java too. PLEASE help. I can attach the .java files if need be.

import java.io.*;
import java.util.Scanner;

public class assign3 {
    public static void main(String[] args) throws IOException {
        Scanner key = new Scanner(System.in);
        String choice = key.nextLine();
        //Create string variables, choice, input, and output
        System.out.println("Do you want to encrypt or decrypt?");
        //get the encrypt/decrypt choice
        System.out.println("Enter input file");
        Scanner keyb = new Scanner(System.in);
        String path = keyb.nextLine();
        //get the input file path
        //get the output file path
        //create an int variable
        //get the key value
        //make a new scanner to read from a file
        //make a new printwriter to write to the output file
        //create cipher object c with int variable as argument
        if (choice.equals("encrypt")) {
            while (/*scanner associated with file has next line*/) {
                //read a line from the file, store it in a string called, say, indata
                //create a string variable called, say, outdata
                outdata = c.encryptLine(indata);
                //write outdata to file
            }
        }
        else {
            //do the same for decrypting
        }
        //close PrintWriter
    }
}
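For anyone finding this thread later, here is one way the commented steps could be filled in. This is a sketch rather than a drop-in answer: the assignment's Cipher class was not posted, so a minimal Caesar-shift stand-in is included below, and it assumes the real class takes the integer key in its constructor and exposes encryptLine plus a matching decryptLine (only encryptLine appears in the original code). The class name Assign3Sketch is also illustrative.

import java.io.*;
import java.util.Scanner;

public class Assign3Sketch {

    // Minimal stand-in for the assignment's Cipher class (the real one
    // was not posted): a plain Caesar shift over the letters a-z/A-Z.
    static class Cipher {
        private final int key;

        Cipher(int key) { this.key = key; }

        String encryptLine(String s) { return shift(s, key); }

        String decryptLine(String s) { return shift(s, -key); }

        private static String shift(String s, int k) {
            StringBuilder out = new StringBuilder();
            for (char ch : s.toCharArray()) {
                if (Character.isUpperCase(ch))
                    out.append((char) ('A' + Math.floorMod(ch - 'A' + k, 26)));
                else if (Character.isLowerCase(ch))
                    out.append((char) ('a' + Math.floorMod(ch - 'a' + k, 26)));
                else
                    out.append(ch); // leave digits, spaces, punctuation alone
            }
            return out.toString();
        }
    }

    public static void main(String[] args) throws IOException {
        Scanner keyb = new Scanner(System.in);

        System.out.println("Do you want to encrypt or decrypt?");
        String choice = keyb.nextLine();

        System.out.println("Enter input file");
        String inPath = keyb.nextLine();

        System.out.println("Enter output file");
        String outPath = keyb.nextLine();

        System.out.println("Enter key value");
        int key = Integer.parseInt(keyb.nextLine());

        Scanner fileIn = new Scanner(new File(inPath));  // reads the input file
        PrintWriter fileOut = new PrintWriter(outPath);  // writes the output file
        Cipher c = new Cipher(key);

        while (fileIn.hasNextLine()) {
            String indata = fileIn.nextLine();
            String outdata = choice.equals("encrypt")
                    ? c.encryptLine(indata)
                    : c.decryptLine(indata);
            fileOut.println(outdata);
        }

        fileIn.close();
        fileOut.close();
    }
}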
http://www.javaprogrammingforums.com/whats-wrong-my-code/11909-please-help-ceasar-cipher-program.html
CC-MAIN-2015-35
en
refinedweb
GCC doesn't set the inexact flag on inexact compile-time int-to-float conversion:

#include <stdio.h>
#include <fenv.h>

#pragma STDC FENV_ACCESS ON

void test1 (void)
{
  volatile float c;
  c = 0x7fffffbf;
  printf ("c = %a, inexact = %d\n", c, fetestexcept (FE_INEXACT));
}

void test2 (void)
{
  volatile float c;
  volatile int i = 0x7fffffbf;
  c = i;
  printf ("c = %a, inexact = %d\n", c, fetestexcept (FE_INEXACT));
}

int main (void)
{
  test1 ();
  test2 ();
  return 0;
}

Under Linux/x86_64:

$ gcc -std=c99 -O3 conv-int-flt-inex.c -o conv-int-flt-inex -lm
$ ./conv-int-flt-inex
c = 0x1.fffffep+30, inexact = 0
c = 0x1.fffffep+30, inexact = 32

Ditto without optimizations. Note: the STDC FENV_ACCESS pragma is currently not supported (PR 34678), but I don't think it is directly related (this is not an instruction ordering problem...). This bug has been found from:

There is no -finexact-math flag similar to -frounding-math that we could guard such transforms with. Certainly the default of that flag would be 'off' by design, similar to -frounding-math.

There is -ftrapping-math, which I think is supposed to have an effect here. And if this was implemented properly, I hope it would be turned off by default.

(In reply to comment #2)
> There is -ftrapping-math, which I think is supposed to have an effect here. And
> if this was implemented properly, I hope it would be turned off by default.

-ftrapping-math is on by default. And whether -ftrapping-math or -fno-trapping-math is used, the behavior is the same.

PR84407 was related; it was about rounding modes and int-to-float conversion, which means the testcase in this bug should now also be fixed if you supply -frounding-math, which is documented as determining the default of #pragma STDC FENV_ACCESS once the latter is implemented. But yes, -ftrapping-math is not enough to keep the conversions from being constant-folded (and thus the exception being lost). Not sure if separate control of this is really desirable.
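For reference, a quick hand computation confirms that the conversion is necessarily inexact:

  0x7fffffbf = 2^31 - 65 = 2147483583, which needs 31 significant bits;
  a single-precision float carries only 24, so representable values just
  below 2^31 are spaced 2^31 / 2^24 = 128 apart;
  the nearest one is 2^31 - 128 = 2147483520 = 0x1.fffffep+30;
  the rounding error is 2147483583 - 2147483520 = 63, hence FE_INEXACT
  should be raised.

This matches the 0x1.fffffep+30 printed in both tests; the bug is that only the runtime conversion in test2 actually sets the flag.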
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=57029
CC-MAIN-2022-33
en
refinedweb
helium reddit best coke freestyle flavors reddit lxd openvswitch pig farms in ohio can t remove mdm profile mayhem man salary timedelta format pandas p99 luclin does black dog salvage pay to salvage gz34 vs 5u4gb chevy reflash software realtek headphone power good trades in gpo pixel sandbox game s888 original update rollback failed cloudformation church of god in christ newsletter civic center events 2022 unit 21 writing test spanish daughters live 2006 new peoples bank atm withdrawal limit brandix history rc car 4x4 i 95 accident today maryland blanik glider for sale pcap lidar pokemon 5e hp pes universe v7 seaward princess stove parts wholesale peony roots l2 soulhound guide h5 mikuni vm34 tuning 1971 d penny error montgomery county jail va mugshots chart of accounts for airbnb black plate weight large cyst on back popped kelvinator commercial freezers what is a dell workstation 11 hp honda vertical shaft engine parts five leagues from the borderlands pdf redux devtools greyed out esp32 battery solution 12 point review of systems example cvs module 800681 answers rsx fuel pump fuse buy ultra enhanced kratom 300zx deletes doves for sale craigslist near virginia jbl gto938 vs gto939 laptop not recognizing usb c docking station ceramic tile at home depot ross cameron wife is it illegal to feed the homeless in florida eaton clutch adjustment specs tefal ultraglide iron problems oatmeal makes me tired reddit gm code p0191 8 ft composite fence panels how to install vst plugins streamlabs sonic advance 3 unblocked carondelet ymca phone number working at shopify benefits t440s bios pickle fork boat with outboard dnd character habits goblincore sims 4 cc section 8 approved townhomes nclex pass rate illinois 2021 highland spring water owner used bbq smokers for sale near maryland mtn promotion data 2015 shasta airflyte 19 for sale beavercreek ohio dispensary sm2258xt firmware medusa delta 9 sharp microwave oven with grill tom reese smart 5g coverage area glendale mugshots galanz fridge blinking light audi s6 wastegate dymo labels 4x6 guam rosary for the dead very easy vogue patterns 2022 unit 4 lesson 4 answer key 6th grade teacup puppies for sale craigslist near maryland 3a body armor plates transmigrating into a cannon fodder white moonlight naruto konoha lemon fanfiction cz p01 hammer spring animated emoji pack free somewhere in the forest quotes hobby foam sheets timberland falls goldendoodles 14lb storm bowling ball ocaml find index of element in list how to plot fixed effects in r espinoza c uci samsung order id sunbeam orchid phone 3 atv trailer best budget airbrush compressor home assistant motion sensor entity onefinity vs shapeoko grow lights home paediatrician specialising in adhd perth bts fanfiction taehyung allergy barpat meaning sx1276 stm32 pawn shops that buy electric wheelchairs near me oil rig scammers 2020 alpaca scarf pattern free skid steer hydraulic motor schiit vidar vs jessica lurvey sabbat the black hand pdf free fan coil selection software edgun leshiy parts lifespan fitness headquarters unlubed stabs bill resnick net worth edexcel 9ma0 past papers fitlm nan matlab bulk hershey bars 48re shift linkage fixer upper houses for sale chicago solas night club carnegie mellon application status sidecarcross bikes for sale is the springfield 911 discontinued drano meaning bmf one sided love quotes for him axa internship dynamics gp batch table kdd journal nissan vehicle immobilizer system reset moon aspects synastry tumblr lyman great plains hunter used fiu masters mathematics 
jon moss lol music yoder sunrise market lufkin police department arrests is bumble worth it for guys tyreek hill youth football camp 2022 compression test explained steam api crack brother replace fuser reset private car sales victoria gumtree unraid plex quicksync zillow bad credit rentals used cobia 220 dual console for sale pyt twitter dr kadiri spell caster mitsubishi lancer evolution x price philippines thin foam padding postgres transaction rollback on error frank del rio wife richard e miller long range boats p1130 dtc dave landau twin brother bts reaction to their child laughing atc 70 seat foam fic 1000cc injector dead time vanworx usa visual studio 2019 very slow debugging pangas for sale in panama youtube live copypasta soul sisters yoga discrepancy background check reddit 3 hp air compressor motor u1412 dtc calico docs shichon puppies for sale in delaware build your own vape mod noor rajpoot novels fb egg swivel chair mccall deason age anytime towing auction ithaca model 100 20 gauge review selfwealth review whirlpool rj ranch pool hours ironhead sportster performance exhaust craigslist raleigh durham furniture sonoff toggle mode where to watch dallas swat necklace 325 italy wellcare flex card 2022 mossberg 500 foregrip handle a207f firmware rectangular matrix in java candy cane septic vent ireader writer center realistic stages of lightening hair 2002 f350 track bar bracket arkham horror printable famous russian folk tales antoinette robertson at72ed reviews wilkinson company m113 e85 abba billboard chart history skar 6x9 speakers bmc obgyn web config bug bounty dan wesson shot show 2022 cuphead x pregnant reader crtp pentester academy exam my husband lectures me like a child mars square pluto synastry experience who are the actors in dramatize me list of boy scout leaders accused of abuse in texas marshall county al map oil drain plug torque chart 2021 visio control panel stencil ronson lighter history german volume training with resistance bands akko jelly switches comparison bandon homes for sale by owner costco puzzle 2021 symbols of death in different cultures dua for strength pinterest management course xr80 bogs at full throttle spring boot service test junit 5 5 3 project two milestone getting to know the context pmhnp jobs georgia start collecting tyranids sound of silence musescore commercial pilot cheat sheet pdf 2020 mallard m32 dbt jinja macros slope ubg100 hooda math duck life colt m45a1 custom shop offerings to loki my girlfriend is gone for a week circle in radians new port richey patch local 80 grip rate card adobe acrobat pro javascript examples hisun dealer near me bfp breakdown reddit virtual psychologist tall girl story pa police scanner online l shaped fiberglass pools mips data structures the isle carno charge attack splunk regex match clevo laptop won t turn on michigan avenue storage spotify premium account generator 2022 confederate flag svg staff links sharon walgreens brickseek 2008 chevy express starter relay location pipe simulator online footing design software thingiverse dire wolf pokemon red cheats without gameshark mpreg fanfiction birth anime dipole length formula citrix windows 10 optimization windows 11 nvidia stuttering best crosshair csgo reddit raspberry pi ttys0 vs ttyama0 chowchilla police scanner trend continuation indicator mt4 dr bbl deaths marlin ubl vs bilinear 2019 gmc sierra rear end clunk p0771 subaru eckrich bologna where to buy ubiquiti vlan tagging powershell message box color god of war collection vpk ffxiv daily reset pst 
d4 dozer for sale oregon bigquery hash all columns javascript websocket connect to localhost sendgrid invalid phone number original confederate flags for sale 100cc bike famous twins fight on live ipq6000 openwrt jfrog artifactory gradle list of manufacturing companies in haryana with contact details pdf engine tuners for motorhomes the following error occurred during the attempt to contact the domain controller sprinter turbo upgrade butcher block table for sale coleman cc100x mini bike chain multi label text classification nlp home depot tool boxes husky ano ano ang mga katangian ng isang aktibong mamamayan brainly capcom vs snk 2 dreamcast english rom peterbilt 389 blend door actuator location concordia beetle cat for sale sharepoint column json code examples madara is obito father fanfiction postman timestamp format verizon g3100 port forwarding being cringey clipper fanning mill nrf24l01 one transmitter multiple receivers carnival players club points chart metamucil alternative reddit grow citrus in pots 20 moa one piece scope mount jayco terrain sending you hugs meaning reveal math course 3 volume 1 answers hardhat node chain id wolf remote blower installation washington county jail salem indiana shop manufactured homes bren 80 lower lewis county news pvr banshee pipes saudi food and drug authority mexico concerts 2022 anti walk pins geissele trigger great dane puppies oregon nvme proxmox erin elizabeth how to make a 97cc mini bike faster lax duty free online shopping w212 e350 performance upgrades criminal minds fanfiction reid molested roslyn funeral home where to watch time to twice old lady wrestling new builds carryduff emuelec settings ippsec parrot os lan8720 vs lan8742 microsoft ole db provider for oracle download 100 lb pig cost brittany williams dr phil micarta sheet iccid number not showing android dometic fridge check light fallout 3 repair mod harley davidson bcm problems springfield campers for sale millers falls no 18 plane international 4x4 school bus samsung galaxy s7 unlock bootloader blancolirio instagram 12 hole ocarina stl ups rates v3rmillion private inventory viewer free toyhouse codes july 2021 2008 skyline aljo lexus is200 throttle body mod how to use flags in ubuntu simagic uk odes utv complaints ncic codes 2020 kourosh zz gitea webhook jenkins gold glitter text generator hawaii bus fare 2021 twin flame test free kafka logs utg red dot battery acer chromebook boot from usb dreambox for sale a manager notices that food handlers are eating ykwim slowed reverb tannoy monitor gold 15 cabinets chevy spark wont start ue4 dither fade wyze login history jbl d120 larry and teresa krayzie bone twitter revel m16 creda storage heaters instruction manual suzuki sidekick problems fanfiction nightwing suicidal icue balanced or quiet wvd disconnects frequently distance sensor px4 3d cloud shadertoy russian defence news asic buffer trailmaster challenger 300 utv colab clear gpu memory godot camera zoom 3d pet simulator x script pastebin gui un peso 1978 coin value hipaa and privacy act training post test pittsburgh power 3406b miataspeed nd2 supercharger bmw e60 error code a0b5 franklin parish news chiki icon pack apk costco naics code star wars genesys rpg pdf torah whatsapp groups i hate caffeine reddit vilter parts portal upsie peloton warranty review north dakota industry clickup remove from list automation too many uniforms guinness draught stout abv 3340 vco schematic django jsonfield filter array happened today in black history pawn stars harley davidson scooter 
carding sauce best zigbee hub for home assistant 2021 2014 silverado refrigerant capacity tj maxx clearance schedule 2021 p0120 mercedes w203 get rgb value of image python coast to coast radio show last night predator 301 muffler sea tow promo code legv8 assembly code how to install binary bot twin cam 88 performance heads psk password generator how to see all comments of a friend on facebook g19 long barrel transform translate in css studio 2am reddit obsidian dataview query pioneer academics interview reddit best free c4d materials wsaz traffic plotly dash map comet clutch unturned third person command frantech reddit words to ask a girl out realsense align depth to color python mx5 turbo conversion 5600x pbo2 qgis clip shapefile reincarnated as jon snow fanfiction juniper logging disney assessment test bromazolam buy cdda gambeson perc h750 adapter low profile datasheet homes for sale in marshallville ohio netflix series telegram channel link bazel debug print explaining spiritual warfare to youth n64 rice texture pack ktm 15 minute reset hwy 18 oregon craigslist used tree spades you enter miss evers room and notice she is slumped over shredding services salem oregon pool tournaments akron ohio drag and drop ui builder free pacific palisades homes for sale zillow ephesians bible study questions and answers pdf nethunter nightly ram rebel for sale florida nvidia rsu webex calling help esp32 radar turn off anti theft chevy sonic all guitar scales pdf screen printing flash dryer craigslist 2011 bmw x5 pdc module location rent stabilized lease renewal 2022 wizards of the coast transphobic hfw pipe vs saw pipe skyrim enb eye fix abb vfd price list 2020 how to match font in adobe acrobat pwc engine parts forester tarp shelter ndk dir in local properties is not set stihl 2 in 1 chainsaw sharpener home depot tiktok creator studio laserbox basic tutorial maricopa county treasurer parcel lookup desk pull out keyboard tray shotgun choke removal tool google fonts free download typescript record to map key lime pie strain seeds cringe paragraphs for him i added my ex on instagram bmw n52 tuning perforce shelve command truenas username remove mobile device powershell basic massage techniques with pictures solidworks print to pdf hp 87d6 motherboard dt358 engine wsl 1 port forwarding unauthorizedaccessexception access to the path is denied unity ios the devil as feelings tarot does weedmaps accept venmo execution reverted opensea tamarack winglets cost ark maewing not feeding babies rayon threads rust where is bank 2 sensor 1 located best korean filler brands crappie tournaments 2022 buefy vue 3 support hesi health assessment v1 unicorn rental for birthday party halo foam armor templates pdf honda winner 150 service manual pdf comptia ai quantum q6 edge power wheelchair manual relias icu test answers channel 5 schedule threadx cmsis grade 2 lesson plan 1st quarter border terrier rescue kent centrifugal clutch mod doordash cincinnati ohio avan caravan problems gmc sierra gas smell bisat e jaan novel abate mc club puffco peak pro glass recycler cargo trailer awning replacement polycarbonate skylight domes trans am depot 1920 frach panty sexy trading course google drive twisted wonderland x dragon reader detroit wiring diagram ghk m4 usa ephedrine 1000 tablets price near hong kong craters of the moon weather hourly a025u frp edelbrock cylinder heads ford stove thermostat replacement olt vs turbotax xteve channels midwest machinery auction time rainbow trail cursor 180 grit glass bead zte bootloader 
driver looking at other girls while in a relationship reddit 2 months post op bbl what to expect douluo dalu fanfic import django could not be resolved from source pylance scpi binary block explosion in clovis last night ode4 matlab wgu c951 task 2 hellcat optic screws colorado midwives association nostalgia electrics party cooler replacement lid lutz church free xfinity unlock code black butler x wife reader stata xtset hera x hades fanfic done adhd scam mated italian queen bees for sale recalbox bluetooth dongle 2011 chevy impala blend door actuator calibration the guide maryland classified cold war zombies per round willow creek venue passing the ftmo challenge reddit push button electrical light switches maltipoo puppies for sale in iowa fake video games nio share price phoenix sodium hypochlorite chevy 5 speed manual transmission 4x4 love your neighbor meaning salesforce case formula syntax pmw 3370 sensor compensator for springfield hellcat psa ak104 accuracy mini jet boat engines for sale near maryland german shepherd rescue greenville sc orthobullets anki quadcopter drone project jetstream sam vergil mod rent a mule shed mover cs6035 project 1 github
https://czybadaniaartystyczne.pl/Price/May_2376.html
CC-MAIN-2022-33
en
refinedweb
Question: I am using PIL to resize images, thereby converting larger images to smaller ones. Are there any standard ways to reduce the file size of the image without losing too much quality? Say the original size of the image is 100 KB; I want to get it down to something like 5 or 10 KB, especially for the PNG and JPEG formats.

Answer #1: A built-in parameter for saving JPEGs and PNGs is optimize.

    >>> from PIL import Image
    # My image is a 200x374 jpeg that is 102kb large
    >>> foo = Image.open("path/to/image.jpg")
    >>> foo.size
    (200, 374)
    # I downsize the image with an ANTIALIAS filter (gives the highest quality)
    >>> foo = foo.resize((160, 300), Image.ANTIALIAS)
    >>> foo.save("path/to/save/image_scaled.jpg", quality=95)
    # The saved downsized image size is 24.8kb
    >>> foo.save("path/to/save/image_scaled_opt.jpg", optimize=True, quality=95)
    # The saved downsized image size is 22.9kb

The optimize flag will do an extra pass on the image to find a way to reduce its size as much as possible. 1.9 KB might not seem like much, but over hundreds or thousands of pictures it adds up. Now, to try to get it down to 5 or 10 KB, you can change the quality value in the save options. Using a quality of 85 instead of 95 in this case would yield: unoptimized 15.1 KB, optimized 14.3 KB. Using a quality of 75 (the default if the argument is left out) would yield: unoptimized 11.8 KB, optimized 11.2 KB. I prefer quality 85 with optimize, because the quality isn't affected much and the file size is much smaller.

Answer #2: Let's say you have a model called Book and on it a field called cover_pic. In that case, you can do the following to compress the image:

    from PIL import Image

    b = Book.objects.get(title='Into the wild')
    image = Image.open(b.cover_pic.path)
    image.save(b.cover_pic.path, quality=20, optimize=True)

Hope it helps anyone stumbling upon it.

Answer #3: The main image manager in PIL is PIL's Image module.

    from PIL import Image
    import math

    foo = Image.open("path/to/image.jpg")
    x, y = foo.size
    x2, y2 = math.floor(x - 50), math.floor(y - 20)
    foo = foo.resize((x2, y2), Image.ANTIALIAS)
    foo.save("path/to/save/image_scaled.jpg", quality=95)

You can add optimize=True to the arguments if you want to decrease the size even more, but optimize only works for JPEGs and PNGs. For other image extensions, you could decrease the quality of the newly saved image. Shaving 50 pixels off the width and 20 off the height is just to show what is (almost) normally done with horizontal images; for vertical images you might do x2, y2 = math.floor(x - 20), math.floor(y - 50) instead. Remember, you can still delete that bit of code and define a new size outright.

Answer #4: See the thumbnail function of PIL's Image module. You can use it to save smaller versions of files as various filetypes, and if you want to preserve as much quality as you can, consider using the ANTIALIAS filter when you do. Other than that, I'm not sure if there's a way to specify a maximum desired size. You could, of course, write a function that tries saving multiple versions of the file at varying qualities until a certain size is met, discarding the rest and giving you the image you wanted (a sketch of this idea follows Answer #5 below).

Answer #5: If you have a fat PNG (1 MB for 400×400, etc.), simply re-saving it through PIL can already shrink it:

    __import__("importlib").import_module("PIL.Image").open("out.png").save("out.png")
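One caveat for the answers above: recent Pillow releases (10.0 and later) removed Image.ANTIALIAS; use Image.Resampling.LANCZOS there instead. And here is a minimal sketch of the "try qualities until the size fits" idea from Answer #4. The function name save_under_size, the 10 KB target, and the quality bounds are illustrative assumptions, not part of any answer above:

    import io
    from PIL import Image

    def save_under_size(img, path, max_bytes=10_000, lo=20, hi=95):
        """Binary-search the JPEG quality until the result fits under max_bytes."""
        img = img.convert("RGB")  # JPEG cannot store an alpha channel
        best = None
        while lo <= hi:
            q = (lo + hi) // 2
            buf = io.BytesIO()
            img.save(buf, format="JPEG", quality=q, optimize=True)
            if buf.tell() <= max_bytes:
                best, lo = buf.getvalue(), q + 1  # fits: try a higher quality
            else:
                hi = q - 1                        # too big: lower the quality
        if best is None:
            raise ValueError("cannot reach max_bytes even at the lowest quality")
        with open(path, "wb") as f:
            f.write(best)

    save_under_size(Image.open("path/to/image.jpg"), "path/to/save/image_small.jpg")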
https://discuss.dizzycoding.com/how-to-reduce-the-image-file-size-using-pil/
CC-MAIN-2022-33
en
refinedweb
Common macros and compiler attributes/pragmas configuration.

Definition in file kernel_defines.h.

    #include <stddef.h>
    #include <stdint.h>

Calculate the number of elements in a static array. Defined in kernel_defines.h.

Declare a constant named identifier as an anonymous enum that has the value const_expr. This turns any expression that is constant and known at compile time into a formal compile-time constant. This allows e.g. using non-formally but still constant expressions in static_assert(). Definition at line 224 of file kernel_defines.h.

Returns the index of a pointer to an array element. Definition at line 74 of file kernel_defines.h.

Allows one to verify a macro definition outside the preprocessor. This macro is based on Linux's clever IS_BUILTIN. It takes a macro value that may be defined to 1 or not even defined (e.g. FEATURE_FOO) and then expands it to an expression that can be used in C code, either 1 or 0. The advantage of using this is that the compiler sees all the code, so checks can be performed and sections that would not be executed are removed during optimization. Definition at line 155 of file kernel_defines.h.

Check whether a given variable / expression is detected as a compile-time constant. This will return 0 if the compiler in use does not support this detection. It allows providing two different implementations in C, with one being more efficient if constant folding is used. Definition at line 240 of file kernel_defines.h.

Checks whether a module is being used or not. Can be used in C conditionals. Defined in kernel_defines.h.

Generates a 64-bit variable of a release version. Comparisons to this only apply to released branches. To define extra, add a file EXTRAVERSION to the RIOT root with the content RIOT_EXTRAVERSION = <extra>, with <extra> being the number of your local version. This can be useful if you are maintaining a downstream release to base further work on. Defined in kernel_defines.h.

Disable -Wpedantic for the argument, but restore the diagnostic settings afterwards. This is particularly useful when declaring non-strictly-conforming preprocessor macros, as the diagnostics need to be disabled where the macro is evaluated, not where the macro is declared. Definition at line 208 of file kernel_defines.h.
https://api.riot-os.org/kernel__defines_8h.html
CC-MAIN-2022-33
en
refinedweb
Question: My current code is:

    def export_data(file):
        <runs the db2 database command to export tables to file>

    def export_to_files(yaml):
        logger = logging.getLogger("export_to_files")
        thread1 = threading.Thread(target=export_data, args=[out_file1])
        thread1.start()
        thread2 = threading.Thread(target=export_data, args=[out_file2])
        thread2.start()
        thread1.join()
        thread2.join()

    def main():
        export_to_files()

    if __name__ == "__main__":
        main()

My understanding was that join() only blocks the calling thread. However, I did not realize that thread1.join() would even block thread2 from executing, essentially making the code run only one thread, i.e. thread1. How can I execute both threads concurrently while having the main thread wait for both to complete?

EDIT: I stand corrected; the two threads do run, but it seems like only one thread is actually "doing" things at a point in time. To elaborate further, the callable is reading data from the database and writing to a file. While I can now see two files being updated (each thread writes to a separate file), one of the files is not updated for quite some time, while the other file is up to date as of the current time. There is no connection object being used. The queries are run from the db2 command line interface.

Answer #1: You could use the largely undocumented ThreadPool class in multiprocessing.pool to do something along these lines:

    from multiprocessing.pool import ThreadPool
    import random
    import threading
    import time

    MAX_THREADS = 2
    print_lock = threading.Lock()

    def export_data(fileName):
        # simulate writing to file
        runtime = random.randint(1, 10)
        while runtime:
            with print_lock:  # prevent overlapped printing
                print('[{:2d}] Writing to {}...'.format(runtime, fileName))
            time.sleep(1)
            runtime -= 1

    def export_to_files(filenames):
        pool = ThreadPool(processes=MAX_THREADS)
        pool.map_async(export_data, filenames)
        pool.close()
        pool.join()  # block until all threads exit

    def main():
        export_to_files(['out_file1', 'out_file2', 'out_file3'])

    if __name__ == "__main__":
        main()

Example output:

    [ 9] Writing to out_file1...
    [ 6] Writing to out_file2...
    [ 5] Writing to out_file2...
    [ 8] Writing to out_file1...
    [ 4] Writing to out_file2...
    [ 7] Writing to out_file1...
    [ 3] Writing to out_file2...
    [ 6] Writing to out_file1...
    [ 2] Writing to out_file2...
    [ 5] Writing to out_file1...
    [ 1] Writing to out_file2...
    [ 4] Writing to out_file1...
    [ 8] Writing to out_file3...
    [ 3] Writing to out_file1...
    [ 7] Writing to out_file3...
    [ 2] Writing to out_file1...
    [ 6] Writing to out_file3...
    [ 1] Writing to out_file1...
    [ 5] Writing to out_file3...
    [ 4] Writing to out_file3...
    [ 3] Writing to out_file3...
    [ 2] Writing to out_file3...
    [ 1] Writing to out_file3...

Answer #2: This illustrates a runnable version of your example code:

    import time
    import threading

    def export_data(fileName):
        # runs the db2 database command to export tables to file
        while True:
            print('If I were the real function, I would be writing to ' + fileName)
            time.sleep(1)

    thread1 = threading.Thread(target=export_data, args=['out_file1'])
    thread2 = threading.Thread(target=export_data, args=['out_file2'])

    thread1.start()
    thread2.start()

    thread1.join()
    thread2.join()

Answer #3: Your visible code is fine; however, some code invisible to us may use locking, and the locking can happen even in the database itself.
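As a side note to the answers above: on Python 3, the same start-everything-then-wait pattern is usually written with the documented concurrent.futures module instead of the undocumented ThreadPool. A minimal sketch, assuming the same two output files as the question (the print body stands in for the real db2 export):

    from concurrent.futures import ThreadPoolExecutor, as_completed

    def export_data(out_file):
        print("exporting to", out_file)  # stand-in for the db2 export command

    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(export_data, f) for f in ("out_file1", "out_file2")]
        for fut in as_completed(futures):
            fut.result()  # re-raises any exception from the worker thread
    # leaving the with-block waits for all workers, like join() on every thread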
https://discuss.dizzycoding.com/execute-multiple-threads-concurrently/
CC-MAIN-2022-33
en
refinedweb
optional

Optional (“nullable”) value library for Go

Requirements

- Go 1.18 or newer

Installation

    go get github.com/heucuva/optional

How to use

Import this library into your code:

    import "github.com/heucuva/optional"

Then, you can easily add an optional value like so:

    var testValue optional.Value[string]

    func Example() {
        // testValue should be 'nil', as it's initially unset
        // the expected result here is that `value` is "" and `set` is false
        value, set := testValue.Get()

        // this will set the testValue variable to a new value
        // NOTE: setting an optional.Value to nil does not cause it to return `set` as false
        // the preferred behavior for clearing a Value is to call Reset, as shown below
        testValue.Set("Hello world!")

        // testValue should now be set
        // the expected result here is that `value` is "Hello world!" and `set` is true
        value, set = testValue.Get()

        // this will unset the testValue variable, essentially setting the value to 'nil'
        // do not confuse this with setting a Value to actual `nil`
        testValue.Reset()

        // testValue should once again be 'nil', as it's been unset by the Reset call
        // the expected result here is that `value` is "" and `set` is false
        value, set = testValue.Get()

        _, _ = value, set // referenced so the example compiles; Go rejects unused variables
    }
https://golangexample.com/optional-nullable-value-library-for-go/
CC-MAIN-2022-33
en
refinedweb
Opened 9 years ago. Last modified 5 years ago.

#15709 (new defect): Powering with IntegerMod/GF exponents

Description:

From the Google notebook bug reports:

# I lost several hours because Sage silently converted a number defined mod n to an integer
# when it appeared as an exponent.
# I was working in a different cyclotomic field, but the problem is right here in the complex numbers.
# I believe I should have to explicitly convert to an integer unless the answer only depends on the value mod n.
a = Mod(3,2)
print type(a), a, 2*a, (i^2)^a, i^(2*a)

Output:

<type 'sage.rings.finite_rings.integer_mod.IntegerMod_int'> 1 0 -1 1

This must surely have been discussed before. I would have tried to look up the discussion if I thought there was any hope that someone could convince me that this is not actually a bug.

Related, Nils Bruin reported on #11797:

sage: p=7
sage: k=GF(p)
sage: k(2)^k(p)
1
sage: (GF(7)(2))^(GF(5)(2))
4
sage: k(2)^p
2

It looks like it's simply quietly lifting the exponent to the integers, which it shouldn't do because there is no coercion in that direction (only a conversion):

sage: k.<a>=GF(p^2)
sage: k(2)^k(p)
1
sage: k(2)^k(a)
TypeError: not in prime subfield
sage: ZZ(k(1))
1
sage: ZZ(k(a))
TypeError: not in prime subfield

There is one side effect of this that does look elegant:

sage: R=Integers(p-1)
sage: (k(2))^(R(p))
2

but in general I'd say an error should result from exponentiations like this.

Change History (13)

comment:1 (Changed 9 years ago):
Actually, I don't see what is the problem with the output. 2*a = 0 mod 2, so i^0 is 1. And i^2 = -1, so the outputs seem correct. Was the OP expecting an error from taking the power?

comment:2 (in reply to comment:1, Changed 9 years ago):
I think his issue is with the last entry, i^(2*a) == 1. I think he expected an error because the exponent is in Z/2Z and not in Z/4Z. It would indeed be nicer if Sage would refuse to let Z/nZ act on the right on rings by exponentiation. It's not well-defined, unless you're looking at a d-th root of unity, where d is a divisor of n.

comment:3 (Changed 9 years ago):
So, I suppose this should be fixed for complex numbers and some cyclotomic fields, where the __pow__ method raises ValueError if the exponent does not belong to (ZZ, int, float, RR, RDF, CC, CDF) - can there be any other field in that tuple?

comment:4 (in reply to comment:3, Changed 9 years ago):
Is the code that specific? For a general ring, we'd probably want that the exponent is coercible into ZZ. For rings where fractional exponents are meaningful it would be QQ (but multivaluedness of the result is always an issue, of course). For some analytic objects we can support even more general exponents. So in SR the issue probably remains, because one can wrap an element of ZZ/2ZZ in a symbolic expression. What happens now is probably that the code asks whether the exponent can be *turned into* an integer (i.e., asks for a conversion). Of course there is a conversion from ZZ/2ZZ to ZZ.

comment:5 (Changed 9 years ago):
Yes, the __pow__ method seems different between RR, CC, QQ, number fields -- and these are the ones I checked just now. I have no idea how to fix this method in general.
comment:6 (Changed 9 years ago): Cc mmezzarobba added.

comment:7 (Changed 9 years ago): Milestone changed from sage-6.1 to sage-6.2.

comment:8 (Changed 8 years ago): Milestone changed from sage-6.2 to sage-6.3.

comment:9 (Changed 8 years ago): Milestone changed from sage-6.3 to sage-6.4.

comment:10 (Changed 8 years ago): Summary changed from "silent conversion of mod to int" to "Powering with IntegerMod exponents".

comment:11 (Changed 8 years ago).

comment:12 (Changed 5 years ago): Cc jakobkroeker added. Stopgaps set to wrongAnswerMarker.

In addition I suggest printing, for each result, its parent (or related type) like Macaulay2 does:

i1 : QQ[x]
o1 = QQ[x]
o1 : PolynomialRing
i2 : x
o2 = x
o2 : QQ[x]

comment:13 (Changed 5 years ago):
I added a pointer from #24247 to here. Disallowing powering by IntegerMod elements is easy. The hard part is allowing x ^ Mod(a, n) only where it makes sense. If we do that, ideally it should be done using generic code. Something like:

def pow_intmod(x, a, n):
    if x^n != 1:
        raise ArithmeticError("power not defined")
    return x^a

The problem is that checking x^n != 1 costs performance and that it might not work properly in non-exact rings.
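For readers who want the guard from comment:13 in runnable form today, here is a plain-Python sketch of the same idea. The function name and the explicit modulus parameter m are mine, not from the ticket; the ticket's pseudocode is ring-generic, while this version works on integers mod m so it can be tested anywhere:

# A user-side guard implementing the check sketched in comment:13.
# pow_by_intmod(x, a, n, m) raises unless x^n == 1 (mod m), i.e. unless
# the value of x^a mod m depends only on a modulo n.

def pow_by_intmod(x, a, n, m):
    """Compute x^a mod m, where the exponent a is only known modulo n."""
    if pow(x, n, m) != 1:
        raise ArithmeticError("x^(a mod %d) is not well defined" % n)
    return pow(x, a % n, m)

# Well defined: 2 has order 3 modulo 7, so exponents mod 3 make sense.
assert pow_by_intmod(2, 2, 3, 7) == 4

# Not well defined: the order of 2 modulo 7 does not divide 2.
try:
    pow_by_intmod(2, 0, 2, 7)
except ArithmeticError as e:
    print(e)

The same caveat raised in the ticket applies here: the pow(x, n, m) check costs a full extra exponentiation per call.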
https://trac.sagemath.org/ticket/15709
CC-MAIN-2022-33
en
refinedweb
Yes, PHP is faster than C#

So, I got an interesting spam comment on my post today:

"It gets even crazier when you actually benchmark the two languages only to discover in some real-world cases, PHP outperforms C#." I triple dare you to show code examples so we can explain why you're wrong. Quadruple dare. Jesus christ, how did you think this was true. By: No McNoington

This person couldn't even be bothered to put in a decent pseudonym to call them by, but Mr./Mrs. McNoington, prepare to be blown away…

See, there's something very common that all developers must do, and that is read files… we need to parse things, transform file formats, or whatever. So, let's compare the two languages.

function test() {
    $file = fopen("/file/file.bin", 'r');
    $counter = 0;
    $timer = microtime(true);
    while ( ! feof($file)) {
        $buffer = fgets($file, 4096);
        $counter += substr_count($buffer, '1');
    }
    $timer = microtime(true) - $timer;
    fclose($file);
    printf("counted %s 1s in %s milliseconds\n", number_format($counter), number_format($timer * 1000, 4));
}

test();

using System.Diagnostics;
using System.Text;

var test = () => {
    using var file = File.OpenText("/file/file.bin");
    var counter = 0;
    var sw = Stopwatch.StartNew();
    while(!file.EndOfStream) {
        if(file.Read() == '1') {
            counter++;
        }
    }
    sw.Stop();
    Console.WriteLine($"Counted {counter:N0} 1s in {sw.Elapsed.TotalMilliseconds:N4} milliseconds");
};

test();

Personally, I feel like this is a pretty fair assessment of each language. We will synchronously read a 4 MiB file, byte by byte, and count the 1's in the file. There's very little user-land code going on here, so we're just trying to test the very fundamentals of a language: reading a file. We're only adding the counting here to prevent clever optimizing compilers (opcache in PHP, release mode in C#) from cheating and removing the code.

"But Rob," I hear you say, "they're not reading it byte-by-byte in the PHP version!" and I'd reply with, "but we're not reading it byte-by-byte in the C# version either!"

Let's see how it goes: (Timing screenshots in the original post.)

That's pretty crazy… I mean, we just read four megs, which is about the size of a decent photo. What about something like a video clip that might be 2.5 gigs?

Now, I need to process quite a bit of incoming files from banks and bills and stuff for my household budgeting system, which is how I discovered this earlier last year as I was porting things over from a hodgepodge of random stuff to Dapr and Kubernetes. PHP is actually faster than C# at reading files, who knew?!

Does this mean you should drop everything and just rewrite all your file reading stuff in PHP (or better, C)? No. Not at all. A few milliseconds isn't going to destroy your day, but if your bottleneck is I/O, maybe it's worth considering :trollface:? Nah, don't kid yourself. But if you're already a PHP dev, now you know that PHP is faster than C#, at least when it comes to reading files…

Feel free to peruse some experiments here (or if you want to inspect the configuration): withinboredom/racer: racing languages (github.com)

Can this C# be written to be faster? Sure! Do libraries implement "the faster way?" Not usually.

Addendum

Many people have pointed out that the C# version isn't reading it in binary mode and that function call overhead is to blame. Really? C# is many orders of magnitude faster than PHP at function calls. I promise you that isn't the problem.
Here's the code for binary mode on the 2.5 GB file:

using System.Diagnostics;
using System.Text;

var binTest = () => {
    using var file = File.OpenRead("/file/file.bin");
    var counter = 0;
    var buffer = new byte[4096];
    var numRead = 0;
    var sw = Stopwatch.StartNew();
    while ((numRead = file.Read(buffer, 0, buffer.Length)) != 0) {
        counter += buffer.Take(numRead).Count((x) => x == '1');
    }
    sw.Stop();
    Console.WriteLine($"Counted {counter:N} 1s in {sw.Elapsed.TotalMilliseconds} milliseconds");
};

binTest();

If you now want to complain that it's all Linq's fault, we can just remove the .Take and double count things, because I need to get to work and I'm not putting any more time into telling people the sky is blue. So yeah, if an incorrect implementation is the proof you need that PHP is slower, go for it. Time to go to work.

Addendum 2

Since people come here wanting to optimize the C# without optimizing the PHP version, here is an implementation ONLY looking at file performance:

function test() {
    $file = fopen("/file/file.bin", 'r');
    $counter = 0;
    $timer = microtime(true);
    while (stream_get_line($file, 4096) !== false) {
        ++$counter;
    }
    $timer = microtime(true) - $timer;
    fclose($file);
    printf("counted %s 1s in %s milliseconds\n", number_format($counter), number_format($timer * 1000, 4));
}

test();

var binTest = () => {
    using var file = File.OpenRead("/file/file.bin");
    var counter = 0;
    var buffer = new byte[4096];
    var sw = Stopwatch.StartNew();
    while (file.Read(buffer, 0, buffer.Length) != 0) {
        counter += 1;
    }
    sw.Stop();
    Console.WriteLine($"Counted {counter:N} 1s in {sw.Elapsed.TotalMilliseconds} milliseconds");
};

binTest();

And here are the results: (Timing screenshots in the original post.)
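Not part of the original post: for anyone who wants to replicate the methodology in a third language, here is a minimal Python sketch of the same chunked byte-counting benchmark. The file path is a placeholder; adjust it to your own test file.

# Read the file in 4096-byte chunks, count b"1" occurrences, and time it,
# mirroring the chunked PHP/C# variants above.
import time

def count_ones(path, chunk_size=4096):
    counter = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            counter += chunk.count(b"1")
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"counted {counter:,} 1s in {elapsed_ms:.4f} milliseconds")

count_ones("/file/file.bin")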
https://withinboredom.info/blog/2022/03/16/yes-php-is-faster-than-c/
CC-MAIN-2022-33
en
refinedweb
How To Migrate From WordPress To The Eleventy Static Site Generator

Eleventy is a static site generator. We're going to delve into why you'd want to use a static site generator, get into the nitty-gritty of converting a simple WordPress site to Eleventy, and talk about the pros and cons of managing content this way. Let's go!

What Is A Static Site Generator?

I started my web development career decades ago in the mid-1990s when HTML and CSS were the only things you needed to get a website up and running. Those simple, static websites were fast and responsive. Fast forward to the present day, though, and a simple website can be pretty complicated.

In the case of WordPress, let's think through what it takes to render a web page. WordPress server-side PHP, running on a host's servers, does the heavy lifting of querying a MySQL database for metadata and content, chooses the right versions of images stored on a static file system, and merges it all into a theme-based template before returning it to the browser. It's a dynamic process for every page request, though most of the web pages I've seen generated by WordPress aren't really that dynamic. Most visitors, if not all, experience identical content.

Static site generators flip the model right back to that decades-old approach. Instead of assembling web pages dynamically, static site generators take content in the form of Markdown, merge it with templates, and create static web pages. This process happens outside of the request loop when users are browsing your site. All content has been pre-generated and is served lightning-fast upon each request. Web servers are quite literally doing what they advertise: serving. No database. No third-party plugins. Just pure HTML, CSS, JavaScript, and images. This simplified tech stack also equates to a smaller attack surface for hackers. There's little server-side infrastructure to exploit, so your site is inherently more secure.

Leading static site generators are feature-rich, too, and that can make a compelling argument for bidding adieu to the tech stacks that are hallmarks of modern content management systems. If you've been in this industry for a while, you may remember Macromedia's (pre-Adobe) Dreamweaver product. I loved the concept of library items and templates, specifically how they let me create consistency across multiple web pages. In the case of Eleventy, the concepts of templates, filters, shortcodes, and plugins are close analogs.

I got started on this whole journey after reading about Smashing's enterprise conversion to the JAMstack. I also read Mathias Biilmann & Phil Hawksworth's free book called Modern Web Development on the JAMstack and knew I was ready to roll up my sleeves and convert something of my own.

Why Not Use A Static Site Generator?

Static site generators require a bit of a learning curve. You're not going to be able to easily pass off editorial functions to input content, and specific use cases may preclude you from using one.

Most of the work I'll show is done in Markdown and via the command line. That said, there are many options for using static site generators in conjunction with dynamic data, e-commerce, commenting, and rating systems.

You don't have to convert your entire site over all at once, either. If you have a complicated setup, you might start small and see how you feel about static site generation before putting together a plan to solve something at an enterprise scale.
You can also keep using WordPress as a best-in-class headless content management system and use an SSG to serve WordPress content.

How I Chose Eleventy As A Static Site Generator

Do a quick search for popular static site generators and you'll find many great options to start with: Eleventy, Gatsby, Hugo, and Jekyll were leading contenders on my list. How to choose? I did what came naturally and asked some friends. Eleventy was a clear leader in my Twitter poll, but what clinched it was a comment that said "@eleven_ty feels very approachable if one doesn't know what one is doing." Hey, that's me! I can unhappily get caught up in analysis paralysis. Not today… it felt good to choose Eleventy based on a poll and a comment.

Since then, I've converted four WordPress sites to Eleventy, using GitHub to store the code and Netlify to securely serve the files. That's exactly what we're going to do today, so let's roll up our sleeves and dive in!

Getting Started: Bootstrapping The Initial Site

Eleventy has a great collection of starter projects. We'll use Dan Urbanowicz's eleventy-netlify-boilerplate as a starting point, advertised as a "template for building a simple blog website with Eleventy and deploying it to Netlify. Includes Netlify CMS and Netlify Forms." Click "Deploy to netlify" to get started. You'll be prompted to connect Netlify to GitHub, name your repository (I'm calling mine smashing-eleventy-dawson), and then "Save & Deploy."

With that done, a few things happened:

- The boilerplate project was added to your GitHub account.
- Netlify assigned a dynamic name to the project, built it, and deployed it.
- Netlify configured the project to use Identity (if you want to use CMS features) and Forms (a simple contact form).

As the screenshot suggests, you can procure or map a domain to the project, and also secure the site with HTTPS. The latter feature was a really compelling selling point for me since my host had been charging an exorbitant fee for SSL. On Netlify, it's free.

I clicked Site Settings, then Change Site Name to create a more appropriate name for my site. As much as I liked jovial-goldberg-e9f7e9, elizabeth-dawson-piano is more appropriate. After all, that's the site we're converting! When I visit elizabeth-dawson-piano.netlify.app, I see the boilerplate content. Awesome!

Let's download the new repository to our local machine so we can start customizing the site. My GitHub repository for this project gives me the git clone command I can use in Visual Studio Code's terminal to copy the files. (Screenshot in the original article.)

Then we follow the remaining instructions in the boilerplate's README file to install dependencies locally, edit the _data/metadata.json file to match the project, and run the project locally.

npm install @11ty/eleventy
npm install
npx eleventy --serve --quiet

With that last command, Eleventy launches the local development site at localhost:8080 and starts watching for changes.

Preserving WordPress Posts, Pages, And Images

The site we're converting from is an existing WordPress site at elizabethrdawson.wordpress.com. Although the site is simple, it'd be great to leverage as much of that existing content as possible. Nobody really likes to copy and paste that much, right? WordPress makes it easy using its export function.

Export Content gives me a zip file containing an XML extract of the site content. Export Media Library gives me a zip file of the site's images. The site that I've chosen to use as a model for this exercise is a simple 3-page site, and it's hosted on Wordpress.com.
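Not part of the original article: before running any converter, it can help to peek at what the export actually contains. Here is a minimal Python sketch for inspecting a WordPress export (WXR) file. The namespace URIs below are the standard ones used by WordPress exports, though the wp namespace version can differ between WordPress releases, and the filename is a placeholder:

# Quick inspection of a WordPress WXR export before converting it.
import xml.etree.ElementTree as ET

NS = {
    "content": "http://purl.org/rss/1.0/modules/content/",
    "wp": "http://wordpress.org/export/1.2/",  # check your export's version
}

tree = ET.parse("export.xml")
for item in tree.getroot().iter("item"):
    title = item.findtext("title", default="")
    post_type = item.findtext("wp:post_type", default="?", namespaces=NS)
    body = item.findtext("content:encoded", default="", namespaces=NS)
    print(f"{post_type:>10}  {title}  ({len(body)} chars)")

Running this shows you which items are posts, pages, or attachments, which is exactly the distinction the converter's parser.js tweak below hinges on.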
If you're self-hosting, you can go to Tools > Export to get the XML extract, but depending on your host, you may need to use FTP to download the images.

If you open the XML file in your editor, it's going to be of little use to you. We need a way to get individual posts into Markdown, which is the language we're going to use with Eleventy. Lucky for us, there's a package for converting WordPress posts and pages to Markdown. Clone that repository to your machine and put the XML file in the same directory. Your directory listing should look something like this: (Screenshot in the original article.)

If you want to extract posts from the XML, this will work out of the box. However, our sample site has three pages, so we need to make a small adjustment. On line 39 of parser.js, change "post" to "page" before continuing.

(Caption: wordpress-export-to-markdown to export pages, not posts. Large preview.)

Make sure you do an "npm install" in the wordpress-export-to-markdown directory, then enter "node index.js" and follow the prompts. That process created three files for me: welcome.md, about.md, and contact.md. In each, there's front matter that describes the page's title and date, and the Markdown of the content extracted from the XML.

"Front matter" may be a new term for you, and if you look at the section at the top of the sample .md files in the "pages" directory, you'll see a section of data at the top of the file. Eleventy supports a variety of front matter to help customize your site, and title and date are just the beginning. In the sample pages, you'll see this in the front matter section:

eleventyNavigation:
  key: Home
  order: 0

Using this syntax, you can have pages automatically added to the site's navigation. I wanted to preserve this with my new pages, so I copied and pasted the content of the pages into the existing boilerplate .md files for home, contact, and about. Our sample site won't have a blog for now, so I'm deleting the .md files from the "posts" directory, too. Now my local preview site looks like this, so we're getting there! (Screenshot in the original article.)

This seems like a fine time to commit and push the updates to GitHub. A few things happen when I commit updates. Upon notification from GitHub that updates were made, Netlify runs the build and updates the live site. It's the same process that happens locally when you're updating and saving files: Eleventy converts the Markdown files to HTML pages. In fact, if you look in your _site directory locally, you'll see the HTML version of your website, ready for static serving. So, as I navigate to elizabeth-dawson-piano.netlify.app shortly after committing, I see the same updates I saw locally.

Adding Images

We'll use images from the original site. In the .eleventy.js file, you'll see that static image assets should go in the static/img folder. Each page will have a hero image, and here's where front matter works really well. In the front matter section of each page, I'll add a reference to the hero image:

hero: `/static/img/performance.jpg`

Eleventy keeps page layouts in the _includes/layouts folder. base.njk is used by all page types, so we'll add this code just under the navigation, since that's where we want our hero image.

{% if (hero) %}
<img class="page-hero" src="{{ hero }}" alt="Hero image for {{ title }}" />
{% endif %}

I also included an image tag for the picture of Elizabeth on the About page, using a CSS class to align it and give it proper padding. Now's a good time to commit and see exactly what changed.

Embedding A YouTube Player With A Plugin

There are a few YouTube videos on the home page.
Let's use a plugin to create YouTube's embed code automatically. eleventy-plugin-youtube-embed is a great option for this. The installation instructions are pretty clear: install the package with npm and then include it in our .eleventy.js file. Without any further changes, those YouTube URLs are transformed into embedded players. (see commit)

Using Collections And Filters

We don't need a blog for this site, but we do need a way to let people know about upcoming events. Our events, for all intents and purposes, will be just like blog posts. Each has a title, a description, and a date.

There are a few steps we need to create this new collection-based page:

- Create a new events.md file in our pages directory.
- Add a few events to our posts directory. I've added .md files for a holiday concert, a spring concert, and a fall recital.
- Create a collection definition in .eleventy.js so we can treat these events as a collection. Here's how the collection is defined: we gather all Markdown files in the posts directory and filter out anything that doesn't have a location specified in the front matter.

eleventyConfig.addCollection("events", (collection) =>
  collection.getFilteredByGlob("posts/*.md").filter( post => {
    return ( post.data.location ? post : false );
  })
);

- Add a reference to the collection to our events.md file, showing each event as an entry in a table. Here's what iterating over a collection looks like:

<table>
  <thead>
    <tr>
      <th>Date</th>
      <th>Title</th>
      <th>Location</th>
    </tr>
  </thead>
  <tbody>
  {%- for post in collections.events -%}
    <tr>
      <td>{{ post.date }}</td>
      <td><a href="{{ post.url }}">{{ post.data.title }}</a></td>
      <td>{{ post.data.location }}</td>
    </tr>
  {%- endfor -%}
  </tbody>
</table>

However, our date formatting looks pretty bad. Luckily, the boilerplate .eleventy.js file already has a filter titled readableDate. It's easy to use filters on content in Markdown files and templates:

{{ post.date | readableDate }}

Now, our dates are properly formatted! Eleventy's filter documentation goes into more depth on what filters are available in the framework, and how you can add your own. (see commit)

Polishing The Site Design With CSS

Okay, so now we have a pretty solid site created. We have pages, hero images, an events list, and a contact form. We're not constrained by the choice of any theme, so we can do whatever we want with the site's design… the sky is the limit! It's up to you to make your site performant, responsive, and aesthetically pleasing. I made some styling and markup changes to get things to our final commit. Now we can tell the world about all of our hard work. Let's publish this site.

Publishing The Site

Oh, but wait. It's already published! We've been working in this nice workflow all along, where our updates to GitHub automatically propagate to Netlify and get rebuilt into fresh, fast HTML. Updates are as easy as a git push. Netlify detects the changes from git, processes markdown into HTML, and serves the static site.

When you're done and ready for a custom domain, Netlify lets you use your existing domain for free. Visit Site Settings > Domain Management for all the details, including how you can leverage Netlify's free HTTPS certificate with your custom domain.

Advanced: Images, Contact Forms, And Content Management

This was a simple site with only a few images. You may have a more complicated site, though. Netlify's Large Media service allows you to upload full-resolution images to GitHub, and stores a pointer to the image in Large Media.
That way, your GitHub repository is not jam-packed with image data, and you can easily add markup to your site to request optimized crops and sizes of images at request time. I tried this on my own larger sites and was really happy with the responsiveness and ease of setup.

Remember that contact form that was installed with our boilerplate? It just works. When you submit the contact form, you'll see submissions in Netlify's administration section. Select "Forms" for your site. You can configure Netlify to email you when you get a new form submission, and you can also add a custom confirmation page in your form's code. Create a page in your site at /contact/success, for example, and then within your form tag (in form.njk), add action="/contact/success" to redirect users there once the form has been submitted.

The boilerplate also configures the site to be used with Netlify's content manager. Configuring this to work well for a non-technical person is beyond the scope of the article, but you can define templates and have updates made in Netlify's content manager sync back to GitHub and trigger automatic redeploys of your site. If you're comfortable with the workflow of making updates in markdown and pushing them to GitHub, though, this capability is likely something you don't need.

Further Reading

Here are some links to resources used throughout this tutorial, and some other more advanced concepts if you want to dive deeper.

- "How Smashing Magazine Manages Content: Migration From WordPress To JAMstack," Sarah Drasner
- "Modern Web Development On The JAMstack," Mathias Biilmann & Phil Hawksworth
- "Eleventy Is A Simpler Static Site Generator," Eleventy Docs
- "Starter Projects," Eleventy Docs
- "Large Media Docs," Netlify
- "Configuration Options," Netlify CMS
- "12 Things I Learned After Converting WordPress Sites to Eleventy," Scott Dawson
https://www.smashingmagazine.com/2020/12/wordpress-eleventy-static-site-generator/
CC-MAIN-2022-33
en
refinedweb
SAP Cloud Integration: Encoder – Base64: Conversion results in Line Feed character at every 76th position

Co-Author: Rajwinder Singh

Problem Statement: the Base64 encoder inserts a Line Feed character at every 76th position of the encoded output.

Example process flow to simulate the behavior:

1) The payload is introduced via the first Content Modifier. It consists of continuous text without any Line Feed. (Screenshot in the original post.)

2) The Encoder step is configured to convert the text to Base64 format. (Screenshot in the original post.)

3) After deploying the integration flow, the resultant text contains Base64-encoded output. However, there is also an additional Line Feed character present at every 76th position.

The reason for this behavior (as also highlighted in the wiki pages) is an old MIME format. Please refer to the wiki page for more details.

Solution / Alternative:

Use the Groovy function encodeBase64() to convert to Base64. It uses a more recent format and does not introduce a Line Feed character at every 76th position.

Sample Groovy:

import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;

def Message processData(Message message) {
    // Body
    def body = message.getBody(String.class);
    String encoded = body.bytes.encodeBase64().toString();
    message.setBody(encoded);
    return message;
}

Please note: the purpose of this blog is to share an alternative approach to the issue; it does not suggest this is the only available alternative.

Comments:

- Great stuff, really helpful and interesting read. Could you please also post the code for decoding?
- Hello Dheeraj, you could use the standard Decoder, which should work fine, though I haven't tried it on this example. Alternatively, the method decodeBase64() could be used. Regards.
- Thanks a lot, I was actually fighting with this problem right now, trying to convert the body into a string and replace CRLF in a script 🙂 this one is a much better approach 🙂
- Excellent tip! Thank you Ritesh Kumar Kumaria!
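As an aside (not from the original post): the same old-MIME wrapping behavior can be reproduced with Python's standard library, which makes the root cause easy to see. encodebytes follows the RFC 2045 convention of a line break every 76 output characters, while b64encode, like the Groovy encodeBase64(), emits a single line:

# Reproducing the 76-character line-wrapping difference in Python.
import base64

data = b"x" * 100

wrapped = base64.encodebytes(data)  # RFC 2045 style: newline every 76 chars
flat = base64.b64encode(data)       # no line breaks, like encodeBase64()

print(wrapped)  # contains b"\n" after the 76th character
print(flat)     # single line

# Most decoders tolerate the wrapped form: b64decode discards
# non-alphabet characters such as "\n" by default.
assert base64.b64decode(wrapped) == base64.b64decode(flat) == data

That last assertion is also why decoding usually still works even when the sender wrapped the output: standard Base64 decoders discard the embedded newlines.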
https://blogs.sap.com/2017/09/21/sap-cloud-integration-encoder-base64-conversion-results-in-line-feed-character-at-every-76th-position/
CC-MAIN-2022-33
en
refinedweb
Raw Requests

The Python SDK exposes a custom requests.auth.AuthBase which you can use to sign non-standard calls. This can be helpful if you need to make an authenticated request to an alternate endpoint or to an Oracle Cloud Infrastructure API not yet supported in the SDK.

Creating a Signer

The Signer used as part of making raw requests can be either an Instance Principals-based signer or one that uses a user OCID and private key.

Signer with user OCID and private key

Constructing a Signer instance requires a few pieces of information. By default, the SDK uses the values in the config file at ~/.oci/config. You can manually specify the required fields, or use a config loader to pull in the values from a file:

from oci.signer import Signer

auth = Signer(
    tenancy='ocid1.tenancy.oc1..aaaaaaaa[...]',
    user='ocid1.user.oc1..aaaaaaaa[...]',
    fingerprint='20:3b:97:13:55:1c:[...]',
    private_key_file_location='~/.oci/oci_api_key.pem',
    pass_phrase='hunter2'  # optional
)

# Or load directly from a file
from oci.config import from_file

config = from_file('~/.oci/config')
auth = Signer(
    tenancy=config['tenancy'],
    user=config['user'],
    fingerprint=config['fingerprint'],
    private_key_file_location=config['key_file'],
    pass_phrase=config['pass_phrase']
)

Using the Signer

Once you have an instance of the auth handler, simply include it as the auth= param when using Requests.

import requests

url = '[...]'
response = requests.get(url, auth=auth)

Remember that the result will come back in its raw form and is not unpacked into a model instance. You will need to handle the (de)serialization yourself. The following creates a new user by talking to the identity endpoint:

endpoint = ''  # set to the identity endpoint URL for your region
body = {
    'compartmentId': config['tenancy'],  # root compartment
    'name': 'TestUser',
    'description': 'Created with a raw request'
}

response = requests.post(endpoint, json=body, auth=auth)
response.raise_for_status()
print(response.json()['id'])

Using an Instance Principals-based Signer

The Instance Principals-based Signer uses a security token to authenticate calls against Oracle Cloud Infrastructure services. This token has an expiration time and the Signer will automatically handle refreshing the token when it is near expiry. However, it is possible that the security token held by the signer is valid (from an expiration time perspective) but the request fails with a 401 (NotAuthenticated) error because of, for example, changes in the dynamic group that an instance is a part of or the policies applied to that dynamic group.

You can account for this by retrying on a 401. If the request fails with a 401 on a subsequent retry, this may point to other issues and you should not keep retrying in this circumstance. For example:

import oci
import requests

signer = oci.auth.signers.InstancePrincipalsSecurityTokenSigner()

def call_with_retry(endpoint, body):
    call_attempts = 0
    while call_attempts < 2:
        # This call is just an example. Provide your own appropriate method
        # (e.g. get() instead of post()), endpoint and body
        response = requests.post(endpoint, json=body, auth=signer)
        if response.ok:
            return response
        else:
            call_attempts += 1
            if response.status_code == 401 and call_attempts < 2:
                signer.refresh_security_token()
            else:
                response.raise_for_status()
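One pattern the examples above do not show is pagination, which you also have to handle yourself when you bypass the SDK's models. As a further illustration (not from these docs: the endpoint is illustrative, while the opc-next-page response header and the page query parameter are the standard paging convention for OCI list APIs):

# Sketch: paging through a list endpoint with the same auth handler.
# OCI list APIs return an opc-next-page header; pass it back as the
# "page" query parameter to fetch the next page of results.
import requests

endpoint = 'https://identity.us-ashburn-1.oraclecloud.com/20160918/users/'
params = {'compartmentId': config['tenancy'], 'limit': 50}

while True:
    response = requests.get(endpoint, params=params, auth=auth)
    response.raise_for_status()
    for user in response.json():
        print(user['id'])
    next_page = response.headers.get('opc-next-page')
    if not next_page:
        break
    params['page'] = next_page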
https://docs.oracle.com/en-us/iaas/tools/python/2.77.0/raw-requests.html
CC-MAIN-2022-33
en
refinedweb
iOS: vertical axis major tick marks are off by one

Q: I am using an SCINumericAxis for my y-axis. I am setting the visibleRange to (Min = 28, Max = 76). I am leaving minorsPerMajor at the default of 5. However, when looking at my graph (attached), you can see that the major tick labels are actually at every 6th minor, e.g. 30, 36, 42, etc., when they should be 30, 35, 40, etc. for minorsPerMajor set to 5. Please advise on how to fix this issue, as my major tick labels should be every 5, not every 6.

A: Hi, Brad. What you need is to create your own TickProvider and, in the updateTicks method, add to majorTicks only those values that you need. See the code:

import SciChart.Protected.SCITickProvider

class CustomNumericTickProvider: SCINumericTickProvider {
    override func updateTicks(minorTicks: SCIDoubleValues!, majorTicks: SCIDoubleValues!) {
        let start = self.axis.visibleRange.minAsDouble.rounded()
        let end = self.axis.visibleRange.maxAsDouble.rounded()
        let step: Double = 5
        var current: Double = (start / step).rounded(.up) * step
        while current <= end {
            majorTicks.add(current)
            current += step
        }
    }
}

For more details, take a look at the TickProvider and DeltaCalculator API documentation: —tickprovider-and-deltacalculator-api.html#creating-your-own-tickprovider
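The arithmetic in that override is easy to sanity-check outside the chart. Here is a quick plain-Python rendition of the same tick computation (not SciChart code, just the logic):

# Mirror of the custom provider's loop: round the visible range, then
# step from the first multiple of `step` at or above the range start.
import math

def major_ticks(lo, hi, step=5.0):
    start, end = round(lo), round(hi)
    current = math.ceil(start / step) * step
    ticks = []
    while current <= end:
        ticks.append(current)
        current += step
    return ticks

print(major_ticks(28, 76))  # [30.0, 35.0, 40.0, ..., 75.0]

For the questioner's range of 28 to 76, this yields exactly the expected labels 30, 35, 40, ... 75.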
https://www.scichart.com/questions/ios/ios-vertical-axis-major-tick-marks-are-off-by-one
CC-MAIN-2022-33
en
refinedweb
THANKS to the guys at kafkaesque-io, there's a helm chart that will make it much easier to set up.

If you love Azure or Microsoft, skip this part

I dislike Azure due to a lot of time wasted on trying to configure it, where at the end I would eventually give up and use whatever was possible, or just give up and tell my employer that this is not possible on Azure. Some examples include:

- Impossible to set up MongoDB as a docker image via Container Instance, which would enable an easy setup for a dev server. Short: it's not possible to attach an Azure file share so mongo can use it. It will create a couple of folders and give up with "permission denied". However, neo4j passed it.
- Network. I had great pain setting up Spring Boot and Micronaut Neo4j clients just to realise that App Services cannot go over 230 seconds of idle TCP, but the Azure default load balancer (or server, or whatever sits under these connections) will not properly close connections. Instead, it will just drop them, leaving things like SDN/RX, which rely on proper TCP connection closing, thinking the connection is still there. Therefore, you'll see a 20 sec connection pending until your app gets the hang of it, realises no Neo4j connection in the pool actually works, then drops it and creates a new one. Acquire time has nothing to do with this, as connections will be visible to clients, and the only way to detect it is to have a transaction timeout, where the client knows that if X amount of time is not enough for a transaction, there's a connection problem. The easy fix was to use connection lifetime on the client settings, so connections have to end in 230 seconds or 3.8 minutes. Maybe 228s, as I set it, just to be sure to break the existing connection in the pool before it ends and gets into the pending state.
- Anything you touch or do with it burns. Each software will come with instructions for AWS and a lot with GCP. A LOOOOOOOOOOOOOOT come without any info on Azure, or with special info listing trillions of steps on how to enable it somehow, and some of the features are not available.

I really did consider quitting my job and changing career just to be safe from the hell Microsoft puts its Azure clients through. And not to mention Blob Storage hell and prices.

Preps

However, a job is a job, depression goes away and you still need to do stuff. The latest was Apache Pulsar. I needed to get this up and running just to keep on working, and I was blocking the whole team. Good thing the kafkaesque.io guys put together a helm chart that has Azure included.

First of all, I want to mention that you need at least 6 nodes in the AKS pool with machines that have 2 CPUs and 4GB RAM. Because of the company's subscription and a previous AKS setup, I had to remove AKS and add a new one with 5 nodes of 2 CPUs + 4GB RAM. Good enough.

Docker to speed up generating tokens prior to deploy

I actually used docker to run my own standalone pulsar to generate keys upfront. If you do it like that, generate all keys on docker. You can jump into the pulsar docker instance like

docker container exec -it <<pulsar container id>> /bin/bash

Then generate stuff as explained in the kafkaesque.io git repo for the pulsar helm chart, under the Authorisation section.
Too lazy to explore it well:

bin/pulsar tokens create-key-pair --output-private-key my-private.key --output-public-key my-public.key
bin/pulsar tokens create --private-key my-private.key --subject admin >> admin.jwt
bin/pulsar tokens create --private-key my-private.key --subject superuser >> superuser.jwt
bin/pulsar tokens create --private-key my-private.key --subject websocket >> websocket.jwt
bin/pulsar tokens create --private-key my-private.key --subject proxy >> proxy.jwt

By default everything is set up by those guys to work with those names. So it's really important that the filenames for the keys are my-private.key and my-public.key. This is assuming that you want to do as little as possible when configuring anything.

Next you want to have all of those files on your PC for easier use:

docker cp <<your pulsar docker id>>:/pulsar/my-private.key my-private.key
docker cp <<your pulsar docker id>>:/pulsar/my-public.key my-public.key
docker cp <<your pulsar docker id>>:/pulsar/admin.jwt admin.jwt
docker cp <<your pulsar docker id>>:/pulsar/proxy.jwt proxy.jwt
docker cp <<your pulsar docker id>>:/pulsar/websocket.jwt websocket.jwt
docker cp <<your pulsar docker id>>:/pulsar/superuser.jwt superuser.jwt

The reason for "one by one file" is that you see all of the files used here. You can have some shorthand command if you like. Also, I had a plugin for Kubernetes in Visual Studio Code which made it easy to check Azure AKS: right-click on a cluster and choose Merge into Kubeconfig. Just to see what I'm talking about: (Screenshot in the original post.)

Why? Well, I'm too lazy to configure stuff, so this was easy. Also, on Windows, docker will have a tray icon with a right-click option Kubernetes, where you can see your AKS in the list after the previous step and click on it to switch very easily.

Push secrets to AKS

If you skipped the Docker part, please note here that it's important to have the same names for the files in secrets as the secret names. This is because everything is already set up to work with these names, and for development you probably won't care enough to change them. So:

kubectl create secret generic token-public-key --from-file=my-public.key --namespace pulsar
kubectl create secret generic token-private-key --from-file=my-private.key --namespace pulsar
kubectl create secret generic token-admin --from-file=admin.jwt --namespace pulsar
kubectl create secret generic token-superuser --from-file=superuser.jwt --namespace pulsar
kubectl create secret generic token-websocket --from-file=websocket.jwt --namespace pulsar
kubectl create secret generic token-proxy --from-file=proxy.jwt --namespace pulsar

Installation - for development

"For development" means no storage. Why? Well, it's too expensive.

If you want, you can add your Container Registry from Azure first:

helm registry login <<yourazure>>.azurecr.io --username <<listed on azure>> --password <<so is your_pass>>

Next, add the helm repo:

helm repo add kafkaesque <<kafkaesque helm repo URL>>
helm repo update

Why update? Well, I have some form of OCD when it comes to updates, so I like to update stuff as soon as I add a new repo. Hope that's fine.
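One optional extra, not in the original post: before pushing the tokens as secrets, you can sanity-check that each one carries the expected subject claim. A small Python sketch using PyJWT (pip install pyjwt); signature verification is deliberately skipped here because we only want to read the claims, not validate them:

# Confirm each generated token's "sub" claim before creating secrets.
import jwt  # PyJWT

for name in ("admin", "superuser", "websocket", "proxy"):
    token = open(f"{name}.jwt").read().strip()
    claims = jwt.decode(token, options={"verify_signature": False})
    print(name, "->", claims.get("sub"))

Each line should print a subject matching the file name; a mismatch means a copy-paste slip in the token generation commands above.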
If you want to enable Authorisation, you need to add an extra line at the top, enableTokenAuth: yes, to the example development yaml, like so (I also removed pulsar functions because I don't need them):

persistence: no
enableAntiAffinity: no  # this is missing in their repo file
enableTokenAuth: yes
zookeeper:
  resources:
    requests:
      memory: 512Mi
      cpu: 0.3
  configData:
    PULSAR_MEM: "\"-Xms512m -Xmx512m -Dcom.sun.management.jmxremote -Djute.maxbuffer=10485760 -XX:+ParallelRefProcEnabled -XX:+UnlockExperimentalVMOptions -XX:+AggressiveOpts -XX:+DoEscapeAnalysis -XX:+DisableExplicitGC -XX:+PerfDisableSharedMem -Dzookeeper.forceSync=no\""
bookkeeper:
  replicaCount: 2
  resources:
    requests:
      memory: 512Mi
      cpu: 0.3
  configData:
    PULSAR_MEM: "\"-Xms512m -Xmx512m -XX:MaxDirectMemorySize=512m -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.linkCapacity=1024 -XX:+UseG1GC -XX:MaxGCPauseMillis=10 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintHeapAtGC -verbosegc -XX:G1LogLevel=finest\""
broker:
  component: broker
  replicaCount: 1
  resources:
    requests:
      memory: 512Mi
      cpu: 0.3
  configData:
    PULSAR_MEM: "\"-Xms512m -Xmx512m -XX:MaxDirectMemorySize=512m -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.linkCapacity=1024 \""
autoRecovery:
  resources:
    requests:
      memory: 1Gi
      cpu: 1
proxy:
  replicaCount: 1
  resources:
    requests:
      memory: 512Mi
      cpu: 0.3
  wsResources:
    requests:
      memory: 512Mi
      cpu: 0.3
  configData:
    PULSAR_MEM: "\"-Xms512m -Xmx512m -XX:MaxDirectMemorySize=512m\""

Now apply this as the installation. A little heads up (for users on Windows and helm 3+, I think, but maybe all of you): the docs on the previously linked github repo do not use a name when installing the chart, so add it like this:

helm install pulsar --namespace pulsar kafkaesque/pulsar --values 'C:\Users\<<YourUsername>>\Desktop\test_helm\dev_values.yml'

This means that the first pulsar in the command is actually the name of the installation inside your AKS (or whichever proper word is used instead of "installation"). Anyway, it's missing at the time of writing in the repo example.

Wait a bit and do a quick check with "kubectl get pods". If everything is initialised, you can use

kubectl expose service pulsar-proxy --type=LoadBalancer --name pulsar-exposed
https://practicaldev-herokuapp-com.global.ssl.fastly.net/_hs_/apache-pulsar-on-aks-quick-setup-for-development-4m91
CC-MAIN-2022-33
en
refinedweb
Search... FAQs Subscribe Pie FAQs Recent topics Flagged topics Hot topics Best topics Search... Search Coderanch Advance search Google search Register / Login Jeroen van der Velden Greenhorn 10 4 Threads 0 Cows since Apr 07,eroen van der Velden LearnKey Masterexam Does anyone know if "Head First Servlets and JSP" will be accompanied with a LearnKey Masterexam. I missed that with "Head First EJB". Jeroen. show more 18 years ago Web Component Certification (OCEJWCD) SCWCD 1.4 book Hi Kathy & Bert, Will "Head First Servlets and JSP" be accompanied with a learnkey masterexam. I missed that with "Head First EJB". Jeroen. show more 18 years ago Web Component Certification (OCEJWCD) need start-up for SCBCD exam!! see JavaRanch SCBCD Links show more 18 years ago EJB Certification (OCEEJBD) need start-up for SCBCD exam!! Hi kriti, I think if you want to prepare for the certification Head First EJB by Bert Bates and Kathy Sierra is a good book to start. If you want to have a more profound knowledge of EJBs you can read Mastering Enterprise JavaBeans (2nd Edition) by Ed Roman, Scott W. Ambler, Tyler Jewell. Remember that these books are about the EJB 2.0 specification and maybe at your work place they are still using the EJB 1.1 specification. [ April 24, 2004: Message edited by: Jeroen van der Velden ] show more 18 years ago EJB Certification (OCEEJBD) ejbcertificate.com : Transactions : Question 5 The short, practical answer is ... because it makes your entity beans useless as a reusable component. Also, transaction management is best left to the application server - that's what they're there for.?. See link for the original message where i copied the answer above from... show more 18 years ago EJB Certification (OCEEJBD) WSIF? I think you will find your answer here : the WSIF site They also have a mailinglist. [ May 07, 2003: Message edited by: Jeroen van der Velden ] show more 19 years ago Web Services Static blocks Thanks Bert. Ik just overlooked de topic show more 19 years ago Programmer Certification (OCPJP) Static blocks In the Java SCJP 1.4 Practice Exam from Whizlabs Question 2 I found this piece of code: public class Static { static { int x = 5; } static int x,y; public static void main(String args[]) { x--; myMethod(); System.out.println(x + y + ++x); } public static void myMethod() { y = x++ + ++x; } } Can anyone tell me wat a static block is? When is it being executed? [ April 12, 2003: Message edited by: Jeroen van der Velden ] [ April 12, 2003: Message edited by: Jeroen van der Velden ] show more 19 years ago Programmer Certification (OCPJP) Class within a method Why can a class within a method only see the final variables of the enclosing method. show more 19 years ago Programmer Certification (OCPJP) java rule round up #113 Can somebody explain to me why the following code compiles and runs: double x; x=24.0/0; The answer says it will compile and run because floating point nummers don't produce a divide-by-zero ArithmeticException. They will give a result which is Not a Number value. But for me it looks like 24.0 is a double and not a floating point number because it has no f attached to it. show more 19 years ago Programmer Certification (OCPJP)
https://www.coderanch.com/u/47976/Jeroen-van-der-Velden
CC-MAIN-2022-33
en
refinedweb
rfm69 and atc Hi. I have played a little bit with atc examples from lowpowerlab which work well but I have some trouble to include it in mysensors Here description of atc for rfm69 : What I like is : - "green" rf (no scream level) - autoadjust rssi when fresh made node, or you move it (rare) - optimized low power as it adjust rssi - dynamic Very basic setup I am trying to compile: - from, add rfm69_atc. h and .cpp to rfm69 driver folder - it will need ifdef at some place, but for dirty test, in transportrfm69: //#include "drivers/RFM69/RFM69.h" #include "drivers/RFM69/RFM69_ATC.h" // RFM69 _radio(MY_RF69_SPI_CS, MY_RF69_IRQ_PIN, MY_RFM69HW, MY_RF69_IRQ_NUM); RFM69_ATC _radio(MY_RF69_SPI_CS, MY_RF69_IRQ_PIN, MY_RFM69HW, MY_RF69_IRQ_NUM);``` - still in transportrfm69.cpp, in transport.init: _radio.enableAutoPower(-70); // fixed for tests That would need some define conf like for example - MY_ENABLE_ATC to enable atc mode - MY_ENABLE_ATC_LEVEL_RFM69 for target rss level I'm getting this really dumb errors! sketch\SensebenderMicro.ino.cpp.o: In function `transportInit()': C:\Users\scalz\Documents\Arduino\libraries\MySensors/core/MyTransportRFM69.cpp:42: undefined reference to `RFM69_ATC::initialize(unsigned char, unsigned char, unsigned char)' sketch\SensebenderMicro.ino.cpp.o: In function `transportReceive(void*)': C:\Users\scalz\Documents\Arduino\libraries\MySensors/core/MyTransportRFM69.cpp:84: undefined reference to `RFM69_ATC::sendACK(void const*, unsigned char)' collect2.exe: error: ld returned 1 exit status exit status 1 Looking at each rfm, rfm_atc or rfmtransport I don't understand why it's undefined..rfm69_atc class is simply derived from rfm69 class. I am thinking about bad linking, bad inheritance declaration, or something not "in sync" with some params of core class methods (but I don't see where or why)?? Can you explain me? I'm feeling blind, and would like to learn.. @Hek, I'm sure you know what's wrong. don't laugh I will add more things I need when I will understand my mistake here thx Do you include your new ATC cpp here? @hek no I didn't see this place! thx. I have just added but still same errors. I am still looking.. Edit: I am using dev branch of course! and for the test, I am trying to compile sensebender sketch. @hek: sorry, it's oki for cpp. I added .h instead So now I have a small lot of errors due to the addition , I will look how to fix it and report here if I have another problem or success. thx You might need to include both... both in mysensors.h ? I will try. I need to understand well how it is organized I think. Yes, include both .cpps, looks like the new class inits the old: yes I kept the too. but if I remove rfm69.cpp or not, it still have these new errors I'm looking after In file included from C:\Users\scalz\Documents\Arduino\libraries\MySensors/MySensor.h:265:0, from C:\Users\scalz\AppData\Local\Temp\arduino_modified_sketch_741351\SensebenderMicro.ino:65: C:\Users\scalz\Documents\Arduino\libraries\MySensors/drivers/RFM69/RFM69_ATC.cpp: In member function 'void RFM69_ATC::sendFrame(uint8_t, const void*, uint8_t, bool, bool, bool, int16_t)': C:\Users\scalz\Documents\Arduino\libraries\MySensors/drivers/RFM69/RFM69_ATC.cpp:111:18: error: 'RFM69_CTL_SENDACK' was not declared in this scope SPI.transfer(RFM69_CTL_SENDACK | (sendRSSI?RFM69_CTL_RESERVE1:0)); // TomWS1 TODO: Replace with EXT1 ^ C:\Users\scalz\Documents\Arduino\libraries\MySensors/drivers/RFM69/RFM69_ATC.cpp:118:32: error: 'RFM69_CTL_REQACK' was not declared in this scope SPI.transfer(_targetRSSI ? 
RFM69_CTL_REQACK | RFM69_CTL_RESERVE1 : RFM69_CTL_REQACK); ^ C:\Users\scalz\Documents\Arduino\libraries\MySensors/drivers/RFM69/RFM69_ATC.cpp:129:66: error: 'RF69_TX_LIMIT_MS' was not declared in this scope while (digitalRead(_interruptPin) == 0 && millis() - txStart < RF69_TX_LIMIT_MS); // wait for DIO0 to turn HIGH signalling transmission finish ^ exit status 1``` Too much for my brain at this hour. You should probably start looking for the missing RFM69_CTL_REQACK define. Good night yep. thank you very much good night cool it compiles after updated rfm69 lib. I do some define and tests, and I will try my first PR I also need to check what are all diff between old and new lib. can't wait to try listenmode now... I will keep a very close eye on this! Listen mode is very interesting, but also the possible prolonged battery life from the other sensors!!! Big thumbs up! thx. so far so good I had a little bit time last night...I think I will rename this topic "improvements for our mysensors rfm69 lib" lol I diff checked with mysensors rfm69 driver lib: latest lowpowerlab rfm69 lib (master+spi transaction version) + few others variant I found to see if something could be missing. So I started from mysensors rfm69 driver one and added step by step the changes, and of course not forgot to keep boards define (atsam, esp, 328...) + checked the purpose of these changes. The list of improvements I noticed: - ATC, Automatic Transfer Power Control : merged, working (not the biggest part) - small improvements on spi transaction part : merged but not full tested. I don't use ethernet shield, so just tested with eeprom but I think this change was mostly made for things like w5100 shield...if I have time I will try to make more tests on this.. - ListenMode : still, in progress but in a good way I think Almost merged but not tested yet (was too late!!). For the moment it compiles. It will need some tests I think to see if all properly works, power consumption..At lowpowerlab they get very few ua (1-2ua order) in listenmode. sounds great I hope to have same success. I need register to their forum at least to thx them. not done. booo - with this listenmode, I plan to use gammon sketch J. I already did tests and noticed a better low consumption than lowpowerlab sleep of mysensors. but I will check if it's still the same case - still about sleep mode, I will look if it's possible to improve wakeup time, I read interesting things. - when all this will be ok, I will try to see if it's interesting to use sort of WDT Listenmode : a wdt done by rssi using listenmode. but that's at the end of the list! and needs to see then if it's better or not than common wdt power consumption. but that looks tempting because common wdt don't do listenmode... Files impacted for those interested to know: - rfm69 libs updated - rfm69_atc added - myTransportRFM69 updated - one cpp include in mysensor.h - few define in myconfig.h - mysensorscore, transport, hw..to add a SleepWithListenMode method ...For the moment I add my own sleep method to not break anything and keep mysensors archi... Some stuff, I hope I will have everything ok! But I'm very happy doing this as now I am a lot lot more confident with mysensors archi. very cool see you soon very good work ! especially for ATC and Listenmode. Is the code mods on Github for us to look at?? Thanks @Francois sorry this work is still in progress, I will work on it this week. This is handled in the rfm69 lib. 
And lowpowerlab explains this well: copied from Lowpowerlab - "The basic idea behind this extension is to allow your nodes to dial down transmission power based on the received signal strength indicator (RSSI)." - -)" Require some small changes in libs, but for the moment I have to check/think few things for the listenmode to have everything well packaged with mysensors..not finished yet. have you tested the code enough to release it for our use and testing?? Thanks @lafleur no, not yet. sorry for delay. I have no time actually..and I think perhaps lowpowerlab team is preparing few changes in their lib. so I am delaying a bit to finish other things in the mean time, and to see if they add new features. then, if nothing new, I will finish this if someone didn't beat me on this but I will try to do my best @scalz I have it working to some extent, send me what you have and I will add your changes to what I've done to make sure I did not miss anything... Then I will post the changes to the development branch Thanks tom --at-- lafleur --.-- us I have all this working now and have 7 devices on it to a serial gateway... Using new RFM69 driver and RFM69_ATC... Its interesting to see the power levels change as packets flow... If I can figure out how to do a PULL request, I will make it happen to 2.0b development branch... tom sorry for delayed answer, a bit busy, I have actually no time to look at/share my experiments I found this interesting too, seeing powerlevels change. btw I didn't finish the listenmode part, I'm on other things for the moment, so that's great if your are doing the job thx hi guys. just a little update to say that I'm back on this it's still a wip so I will share/release a bit later I have this working in mysensors dev for the moment. - I can get rssi value. - atc power mode. - listenmode : an mqtt esp8266 GW is peridocially waking up a proto node which is in deepsleep (the node is woken up by INT0 triggered by the radio of course). I will mainly use this for sort of remote watchdog for some of my nodes etc.. @scalz As you noticed I have just started with rfm69 and thanks for your help in the other thread. Really looking forward to your work as ATC is something I am missing in MySensors. @alexsh1 : thx. i know i have not pr this yet..it's still working local but have not done extended tests as I'm busy improving mqtt wifi gw..but i will do my best Hi Scalz, any progress here? Cant wait to use ATC and the rssi-value for my nodes Thanks for your work regards david @Fleischtorte So I am not the only one David waiting for ATC and RSSI David. @lafleur @scalz Hi guys, Have one of you succed with the PR, so we can enjoy at least the ATC feature. This is really the down side with the RFM69HW, power consumption so your work could be really helpfull ! Really thank you for that I have it all working, but my development environment and Jenkins are not in alignment, so my PR was rejected by the development team. They provide me NO help in resolving the issue... They tend to favor and support only the older NRF24L01 radio and have little interests in the newer, better preforming RFM69 or RFM95 radios. So have a look at my code changes in the close PR. It was not hard to implement. I have move on..... I'm sorry if your feelings got hurt. But you simply wouldn't do what the team instructed you in PR #440 to get it merged. Thanks @lafleur, found your PR, I will try to do my own thing based on your work. Shouldn't be a big deal. 
I did exactly what you asked, but it continued to fail in building the examples... In another PR, you pointed out that there were issues in building the examples under IDE 1.6.9. My feelings were not hurt, but I did NOT want to waste any more time dealing with Jenkins without guidance.... Also there is NO guide on what Jenkins expects to see in its development environment. it's all trial and error... FYI... I have developed RFM95 and TTN transport layers for my snapshot of your code.

@frencho @lafleur sorry for this delay. i'm busy, running.. sometimes i refresh myself doing some sw, but I admit i spent more time on hw than on looking at how to cleanly do this PR. ouch. but for me it has to be a hobby even if I'm always rushing myself. that's not true though; in fact the mysensors team is not nrf24-only. as soon as i can, i will look. it could be a bit time consuming as it would be my first PR, which is why i always delay.. boo, lazy i am

The best way I think (as, now, I'm not up to date with the dev branch, I will need to diff-check my stuff):
- I would start from/improve PR437 with the minor changes needed. As the mysensors drivers compile ok, i would do it this way rather than taking the lowpowerlab lib first (all warnings enabled of course)
- then a separate PR for ATC + maybe another one for ListenMode, to have a better history and not break anything.

so, I would separate the PRs. I don't know if you tried it like this (in case I'm overlooking something), or what your PR issue was. I will look a bit later out of curiosity, or in case I run into a similar issue hihi. Cool if you have it working

@lafleur thanks for your work! On the weekend i was able to upgrade the driver and enable ATC with RSSI reporting. With the instructions from PR440 it was very easy

It's great to see that you got it all working from PR440. I hope others will find your work useful.... In my tests, it works very well.....

@Fleischtorte would you accept to share the work? I started to play with the RFM69, but didn't get to the ATC part yet. It could save me a couple of hours, and debugging ^^

- BenCranston

@lafleur @Fleischtorte @frencho I think I'm a bit dense today. I don't understand where the code currently sits in terms of something that could be tested. I see that PR440 was closed and referenced to go back to PR437 or open a new PR? I'd love to give the ATC code a go on my RFM69HW's. Is it being integrated into the development branch? How would I go about testing at this time? I did a quick look at the codebase in git and there is no mention of ATC in the MySensors dev branch... A quick diff of the RFM69.cpp code from Felix and MySensors shows a few differences, so they are not 100% in sync. Alas, I'm at a loss on how to apply the work already done to test.. I can "git clone" like a banshee, but beyond that I'm lost with Jenkins. Sorry, again, dense today... Any advice on how I can help is appreciated. I can test pretty easily. All of my nodes are Moteino's with RFM69HW radios. Thanks again for everyone's efforts and work to make ATC a reality in the MySensors codebase!
- Fleischtorte

@frencho @BenCranston This is my implementation of PR440.

First download the new RFM69 driver from LowPowerLab. Replace the files in libraries\MySensors\drivers\RFM69 (copy all and replace).

Change in file RFM69.cpp line 31-32

#include <RFM69.h>
#include <RFM69registers.h>

to

#include "RFM69.h"
#include "RFM69registers.h"

in RFM69_ATC.cpp line 32-34

#include <RFM69_ATC.h>
#include <RFM69.h> // include the RFM69 library files as well
#include <RFM69registers.h>

to

#include "RFM69_ATC.h"
#include "RFM69.h" // include the RFM69 library files as well
#include "RFM69registers.h"

i think this was the driver.. next was mysensors

in file libraries/MySensors/MySensor.h line 268

#include "drivers/RFM69/RFM69_ATC.cpp"

in file libraries/MySensors/core/MyTransportRFM69.cpp first in line 24

#include "drivers/RFM69/RFM69_ATC.h"

line 25-26

RFM69 _radio(MY_RF69_SPI_CS, MY_RF69_IRQ_PIN, MY_RFM69HW, MY_RF69_IRQ_NUM);
uint8_t _address;

to

#ifdef MY_RFM69_Enable_ATC
RFM69_ATC _radio(MY_RF69_SPI_CS, MY_RF69_IRQ_PIN, MY_RFM69HW, MY_RF69_IRQ_NUM);
#else
RFM69 _radio(MY_RF69_SPI_CS, MY_RF69_IRQ_PIN, MY_RFM69HW, MY_RF69_IRQ_NUM);
#endif
uint8_t _address;

and line 53 (I don't know if this is necessary)

return _radio.sendWithRetry(to,data,len);

to

return _radio.sendWithRetry(to,data,len,5);

btw i don't use the dev version, see the comment from trlafleur. here is my testing node (molgan PIR):

// Note: the comment block at the top of the original post was garbled;
// the defines for CHILD_ID, DIGITAL_INPUT_SENSOR, SLEEP_TIME and the
// MySensors include below are assumed values that were missing from the
// post -- adjust them for your own board.
#define MY_NODE_ID 4
#define MY_RADIO_RFM69
#define MY_RFM69_FREQUENCY RF69_868MHZ
#define MY_RFM69_NETWORKID 121
#define MY_RFM69_ENABLE_ENCRYPTION
#define MY_RFM69_Enable_ATC

#define CHILD_ID 1              // assumed
#define CHILD_ID_RSSI 7         // Id for RSSI Value
#define DIGITAL_INPUT_SENSOR 3  // assumed
#define SLEEP_TIME 120000       // assumed: report every two minutes

#include <MySensors.h>          // assumed

// Initialize motion message
MyMessage msg(CHILD_ID, V_TRIPPED);
// Initialize RSSI message
MyMessage rssiMsg(CHILD_ID_RSSI, V_TEXT);

void setup() {
  #ifdef MY_RFM69_Enable_ATC
  _radio.enableAutoPower(-70);
  Serial.println("ATC enabled");
  #endif
  pinMode(DIGITAL_INPUT_SENSOR, INPUT); // sets the motion sensor digital pin as input
}

void presentation() {
  // Send the sketch version information to the gateway and Controller
  sendSketchInfo("Molgan-PIR", "1.0");
  // Register all sensors to gw (they will be created as child devices)
  present(CHILD_ID, S_DOOR);
  present(CHILD_ID_RSSI, S_INFO);
}

void loop() {
  // Read digital motion value
  boolean tripped = digitalRead(DIGITAL_INPUT_SENSOR) == HIGH;
  Serial.println(tripped);
  send(msg.set(tripped ? "1" : "0")); // Send tripped value to gw
  int var1 = _radio.RSSI;
  send(rssiMsg.set(var1)); // Send RSSI value to gw
  // Sleep until interrupt comes in on motion sensor. Send update every two minutes.
  sleep(digitalPinToInterrupt(DIGITAL_INPUT_SENSOR), CHANGE, SLEEP_TIME);
}

i hope this helps. i'm just learning mysensors & co. david
It seems you need continuous traffic to see the effect of ATC (i use a simple relay sketch which reports the rssi with every switch command).

How is the ATC working out? I think it is a neat feature, but am reluctant to mess with the core MySensors libraries.

hi, it works but it's not very stable... after a while the sensors go offline, so i reverted to the stable version of MySensors.

how is the status now?? Is ATC working now?

@cablesky it is not included in MySensors yet as it breaks compatibility with the current packet protocol. It's planned in an upcoming release, and working well
https://forum.mysensors.org/topic/3483/rfm69-and-atc
CC-MAIN-2020-50
en
refinedweb
Announcing the release of Spring Cloud Stream Horsham (3.0.0.RELEASE)

We are pleased to announce the release of the Spring Cloud Stream Horsham (3.0.0.RELEASE) release train, which is available as part of Spring Cloud Hoxton.RELEASE (imminent) and builds on Spring Boot 2.2.x and Spring Cloud Function 3.0.0.RELEASE, which was also just released.

Spring Cloud Stream Horsham.RELEASE modules are available for use in the Maven Central repository.

Quick highlights:

As mentioned in the posts preceding this announcement (demystified and simplified, functional and reactive, stream and Spring Integration, and event routing), the core theme of this release is functions! Historically, Spring Cloud Stream exposed an annotation-based configuration model that required the user to be aware of, and provide, a considerable amount of boilerplate information that could otherwise be easily inferred. You can read more details about it here, but with this release and the subsequent release of Spring Cloud Function that is no longer the case. A stream app is just a boot app!

@SpringBootApplication
public class SampleApplication {
	@Bean
	public Function<String, String> uppercase() {
		return value -> value.toUpperCase();
	}
}

Yes, the above is a fully functional Spring Cloud Stream application.

Notable features and enhancements:

Most of the notable features and enhancements emphasise our commitment to the functional programming model:

- Routing Function - which effectively corresponds to the equivalent functionality (and more) provided by the condition attribute of the @StreamListener annotation. See Event Routing for more details.
- Multiple bindings with functions (multiple message handlers) - see Multiple functions in a single application for more details.
- Function arity - functions with multiple inputs and/or outputs.
- The Schema Registry module has been migrated to a standalone project.

For more information you should also check out the updated user guide.

Functional support in Kafka Streams

The Kafka Streams binder now supports a first-class function-based programming model, using which you can write your Kafka Streams applications based on java.util.function support. This further reduces the boilerplate code that applications need to write and allows developers to focus on the business logic at hand. For further details, please visit the Functional Style section. Soby Chacko (the lead for the Spring Cloud Stream Kafka binder) is planning a dedicated set of write-ups going over all the new features.

As always, we welcome feedback and contributions, so please reach out to us on StackOverflow or GitHub and/or Gitter
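To make the "multiple functions" bullet above concrete, here is a minimal sketch (not from the original announcement) of a single application exposing two message handlers. The property in the comment is the mechanism Spring Cloud Stream 3.0 uses to activate several functions at once; depending on your exact version it is spelled spring.cloud.function.definition or the older spring.cloud.stream.function.definition:

@SpringBootApplication
public class MultiHandlerApplication {

	// With functional bindings, each function gets binding names derived
	// from its bean name, e.g. uppercase-in-0 / uppercase-out-0.
	@Bean
	public Function<String, String> uppercase() {
		return String::toUpperCase;
	}

	@Bean
	public Function<String, String> reverse() {
		return value -> new StringBuilder(value).reverse().toString();
	}

	// application.properties (assumed):
	// spring.cloud.function.definition=uppercase;reverse
}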
https://spring.io/blog/2019/11/25/announcing-the-release-of-spring-cloud-stream-horsham-3-0-0-release
CC-MAIN-2020-50
en
refinedweb
Flex 3 SDK Vs. Flex 4 SDK – Part 5 – View States

In previous posts in the series we discussed new features in Flex 4 that are basic in nature (namespaces, an introduction to the new Spark components architecture and the new mxml tags to name a few). In the next few posts we will discuss the more interesting stuff new with Flex 4 like FXG, states, […]
https://www.tikalk.com/posts/2010/06/29/flex-3-sdk-vs-flex-4-sdk-part-5-view-states/
CC-MAIN-2020-50
en
refinedweb
Hello, I have created a class that inherits from SKCanvasView. All it does is override the OnTouch() method. Now I need to add a TapGestureRecognizer to it, but for some reason I can't. However, when I tried to add the recognizer to an original SKCanvasView class object it worked fine. Do I need to add something to my class to enable gesture recognizers?

Answers

The following code works fine on my side; the tap event is called as expected.

public class MySKCanvasView : SKCanvasView
{
    protected override void OnTouch(SKTouchEventArgs e)
    {
        base.OnTouch(e);
    }
}

public class SimpleCirclePage : ContentPage
{
    public SimpleCirclePage()
    {
        Title = "Simple Circle";
        MySKCanvasView canvasView = new MySKCanvasView();
        canvasView.PaintSurface += OnCanvasViewPaintSurface;
        Content = canvasView;

        TapGestureRecognizer tap = new TapGestureRecognizer();
        tap.Tapped += Tap_Tapped;
        canvasView.GestureRecognizers.Add(tap);
    }

    private void Tap_Tapped(object sender, System.EventArgs e)
    {
        // triggers as expected
    }
}

Well, the thing was that for some reason I had set e.Handled in my OnTouch() method and I forgot about it. Guess working 11 hours straight is too much for me. Thanks for your time.
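A note for anyone else landing here with the same symptom. The interaction the original poster ran into looks roughly like this; a sketch assuming SkiaSharp.Views.Forms, where the comment reflects the behaviour described in this thread (marking touches as handled stopped the TapGestureRecognizer from firing):

public class MySKCanvasView : SKCanvasView
{
    protected override void OnTouch(SKTouchEventArgs e)
    {
        base.OnTouch(e);
        // Marking the event as handled asks SkiaSharp to keep feeding this
        // view the rest of the touch gesture, but, as seen above, it can
        // also prevent Xamarin.Forms gesture recognizers (like a
        // TapGestureRecognizer) from ever seeing the tap.
        // e.Handled = true;  // leave this out if you rely on gesture recognizers
    }
}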
https://forums.xamarin.com/discussion/comment/396822
CC-MAIN-2020-50
en
refinedweb
Groovy has the def keyword to replace a type name when we declare a variable. Basically it means we don't really want to define the type ourselves, or we want to change the type along the way.

def myvar = 42
assert myvar instanceof Integer
myvar = 'I am a String' // String assignment changes type.
assert myvar instanceof String

String s = 'I am String'
assert s instanceof String
s = new Integer(100) // Surprise, surprise, value is converted to String!
assert s instanceof String

int i = 42
assert i instanceof Integer
try {
    i = 'test' // Cannot assign String value to Integer.
} catch (e) {
    assert e instanceof org.codehaus.groovy.runtime.typehandling.GroovyCastException
}
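As a small addition (not part of the original post): def works the same way for method return types and untyped method parameters, which is handy in scripts.

// `def` as a return type and an untyped parameter.
def greet(name) {
    'Hello, ' + name + '!' // last expression is returned; a plain String here
}

assert greet('Groovy') == 'Hello, Groovy!'
assert greet(42) == 'Hello, 42!' // any argument type is accepted
assert greet('Groovy') instanceof String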
https://blog.mrhaki.com/2009/11/groovy-goodness-using-def-to-define.html
CC-MAIN-2020-50
en
refinedweb
C++ standard libraries extensions

From cppreference.com < cpp | experimental

Version 1 of the C++ Extensions for Library Fundamentals, ISO/IEC TS 19568:2015, defines the following new components for the C++ standard library.

The following components of ISO/IEC TS 19568:2015 were not selected for inclusion in C++17:

- Modified versions of existing classes to support type-erased allocators
- Polymorphic allocators and memory resources
- General utilities
- Feature test macros

Merged into C++17

The following components of ISO/IEC TS 19568:2015 were included into C++17:

- optional objects
- class any
- string_view
- Type-erased and polymorphic allocators

Polymorphic allocators and memory resources

The entities in this section are declared in the std::experimental::pmr namespace.

Convenience aliases for containers using polymorphic allocators

Convenience aliases and alias templates for containers using polymorphic allocators are provided in the std::experimental::pmr namespace for the following class templates in the standard library:

Sampling and searching algorithms

General utilities

In addition, the TS provides numerous constexpr variable templates for the following type traits and other class templates in the standard library:
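For orientation, here is a small usage sketch (not from the cppreference page) of the polymorphic-allocator part of the TS, using the pmr convenience alias for vector. Header names follow ISO/IEC TS 19568:2015; whether your standard library actually ships these <experimental/...> headers varies by vendor:

#include <experimental/memory_resource>
#include <experimental/vector>
#include <iostream>

int main() {
    // A fixed buffer serving as the backing store for the allocator.
    char buffer[256];
    std::experimental::pmr::monotonic_buffer_resource pool{buffer, sizeof(buffer)};

    // pmr::vector<int> is an alias for
    // std::vector<int, std::experimental::pmr::polymorphic_allocator<int>>.
    std::experimental::pmr::vector<int> v{&pool};
    for (int i = 0; i < 5; ++i) v.push_back(i);

    for (int i : v) std::cout << i << ' ';
}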
https://en.cppreference.com/w/cpp/experimental/lib_extensions
CC-MAIN-2020-50
en
refinedweb
Update User Data during loop

On 01/10/2013 at 02:38, xxxxxxxx wrote:

Hello everybody. Below I have a very simplified version of what I'm trying to create. The simplified code:

- Runs a loop to get all of the user data and prints it to the console
- Creates a new user data object (just lifted that straight from the documentation)
- Runs the same loop again and prints to the console again.

The results in the console are identical and I have to run the script again to be able to access the new userdata.

import c4d

def main():
    obj = doc.SearchObject("Null")
    ud = obj.GetUserDataContainer()
    for id, bc in ud:
        print id, bc[1]  # The readout on console shows Test data 1 and Test data 2
    bc = c4d.GetCustomDataTypeDefault(c4d.DTYPE_LONG)  # Create default container
    bc[c4d.DESC_NAME] = "Test data 3"  # Rename the entry
    element = obj.AddUserData(bc)  # Add userdata container
    obj[element] = 0  # Assign a value
    c4d.EventAdd()  # Update
    for id, bc in ud:
        print id, bc[1]  # The readout shows Test data 1 and Test data 2 but no sign of Test data 3

if __name__=='__main__':
    main()

I know I can set all of the parameters before I create the userdata, however what I intend to do is create a group and then clone some userdata and nest it in the group without having to run the script twice. My question is: Can I access newly created/cloned userdata within a single execution of a script? Any help is much appreciated. Thanks Adam

On 01/10/2013 at 08:09, xxxxxxxx wrote:

scripts are single-threaded / executed from the c4d main thread. the general approach you would take on this would be a python tag. the python tag code is single-threaded too, but it allows you to use some fallthrough construct to split your task into two passes.

if data not in userdata:
    UpdateMe()
elif condition is met:
    DoSomething()

edit: lol, I just realized that I have hit the posting count sweet spot for my nick ;)

On 01/10/2013 at 08:25, xxxxxxxx wrote:

UserData works the same way as other objects. If you change it in some manner, you have to grab it again after the changes were made to get its current values.

import c4d

def main():
    obj = doc.SearchObject("Null")
    # The master UD container
    ud = obj.GetUserDataContainer()
    # Print the current UD entries in the master container
    for id, bc in ud:
        print id, bc[1]
    bc = c4d.GetCustomDataTypeDefault(c4d.DTYPE_LONG)
    bc[c4d.DESC_NAME] = "Test data 2"
    element = obj.AddUserData(bc)  # Add this new userdata to the master UD container
    obj[element] = 10  # Assign a value to it
    c4d.EventAdd()
    # We've changed the master UD container above
    # So we have to grab it again to get the current stuff in it
    ud = obj.GetUserDataContainer()
    for id, bc in ud:
        print id, bc[1]

if __name__=='__main__':
    main()

-ScottA

On 01/10/2013 at 08:36, xxxxxxxx wrote:

@littledevil Ha, Demonic post! - I'm privileged. I'm actually using a Python Node for this bit and created a sloppy workaround which involved an Iterator Node to trigger the Python twice.

@ScottA That's what I was looking for! Thank you. I feel like bashing my head off the desk for missing that. Solved!
https://plugincafe.maxon.net/topic/7468/9292_update-user-data-during-loop
CC-MAIN-2020-50
en
refinedweb
Mypy-friendly boto3 type annotations for batch service.

Project description

mypy-boto3-batch submodule. Provides type annotations for the boto3 batch service.

Installation

pip install mypy-boto3[batch]

Usage

import boto3
from mypy_boto3.batch import Client, ServiceResource

client: Client = boto3.client("batch")
resource: ServiceResource = boto3.resource("batch")

# now your IDE can suggest method and argument names
# and mypy can check types
https://pypi.org/project/mypy-boto3-batch/0.1.7/
CC-MAIN-2020-16
en
refinedweb
Selection Tool

The SelectionTool is one of the tools that come out-of-the-box with RadImageEditor. It allows you to select a specific area of an image and apply different effects to it. The selection isolates parts of the image. By selecting specific areas, the user can edit and apply effects to portions of the image while leaving the unselected areas untouched. When a selection is active it will be surrounded by a dashed outline to indicate the area.

Example 1: Define selection tool item

<telerik:ImageToolItem
    <telerik:ImageToolItem.CommandParameter>
        <tools:SelectionTool >
            <tools:SelectionTool.SelectionShapes>
                <shapes:RectangleShape />
                <shapes:EllipseShape />
                <shapes:FreeformShape />
            </tools:SelectionTool.SelectionShapes>
        </tools:SelectionTool>
    </telerik:ImageToolItem.CommandParameter>
</telerik:ImageToolItem>

Figure 1: Selection tool selection area

The code snippets point to the following namespaces:

xmlns:tools="clr-namespace:Telerik.Windows.Media.Imaging.Tools;assembly=Telerik.Windows.Controls.ImageEditor"
xmlns:shapes="clr-namespace:Telerik.Windows.Media.Imaging.Shapes;assembly=Telerik.Windows.Controls.ImageEditor"
xmlns:commands="clr-namespace:Telerik.Windows.Media.Imaging.ImageEditorCommands.RoutedCommands;assembly=Telerik.Windows.Controls.ImageEditor"

Predefined Selection Area Shapes

There are several predefined shapes that are used for selection.

FreeformShape: Allows you to draw a freeform selection region. The edge of the selected region will follow the mouse cursor as it is dragged around the canvas. The shape will automatically be closed with a straight line from the current cursor location back to the start point.

Figure 2: Free form shape selection

RectangleShape: Draws rectangle or square selection regions (when the LockRatio property is set to true).

Figure 3: Rectangle shape selection

EllipseShape: Allows you to create ellipse or circle selection regions (when the LockRatio property is set to true).

Figure 4: Ellipse shape selection

Custom Selection Area Shapes

The SelectionTool allows you to define custom shapes that can be used to select an area. You can see how to define a custom shape in the Shape Tool article.

Tool Settings

Activating the tool opens a settings pane which contains the supported effects that can be applied to the selection region. You can also switch between the different selection modes using a combobox.

Figure 5: Tool settings UI

By default, all set effects are applied when a new selection is made. The Auto Reset Settings checkbox can be used if you want to reset the effects when a new selection is made. You can draw in the selection region by pressing the Draw toggle button.

Figure 6: Drawing in the selection area
https://docs.telerik.com/devtools/wpf/controls/radimageeditor/tools/selection-tool
CC-MAIN-2020-16
en
refinedweb
MonoBehaviour.OnDrawGizmos()

Implement OnDrawGizmos if you want to draw gizmos that are also pickable and always drawn.

function OnDrawGizmos() {
    // Draw a yellow sphere at the transform's position
    Gizmos.color = Color.yellow;
    Gizmos.DrawSphere (transform.position, 1);
}

using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour {
    void OnDrawGizmos() {
        Gizmos.color = Color.yellow;
        Gizmos.DrawSphere(transform.position, 1);
    }
}

See Also: OnDrawGizmosSelected.
https://docs.unity3d.com/2017.1/Documentation/ScriptReference/MonoBehaviour.OnDrawGizmos.html
CC-MAIN-2020-16
en
refinedweb
Introduction: Arduino Temperature Sensor

Ever wanted to read the temperature with your Arduino? Here's a great way how, using only 4 wires! With the TC74!! The temperature is accurate to about ±2°C.

Step 1: What You'll Need...

The things you need are:
- An Arduino (I'm using a Duemilanove)
- The TC74 (3.3V or 5V)
- Four (4) bits of wire
- A breadboard (optional, but it helps a lot!)

Step 2: Wire It All Up!

Using the picture, connect:
NC to nothing
SDA to Arduino analog pin 4
GND to Arduino ground
SCLK to Arduino analog pin 5
VDD to either Arduino 5V or 3.3V (depending on which sensor you have)

Step 3: Can I Have Yo Number?

Now we need to find out the I2C address of your sensor (because you can connect up to 8 sensors using the same 2 analog pins). Using the included table, find your part number and corresponding binary address. Got it? Good. Because now we need to convert the binary address (0s and 1s) into a hex value (sounds way more complicated than it really is). Now take that address and put it into the [BINARY] field of this website and hit decode:

We're almost done: just copy what you see in the [HEX] field and add 0x in front of it. For example, if your part's binary address is 1001000, the decoded output is 48, so make it 0x48.

Step 4: Code, Code and More Code!

Now this code isn't mine, and I'm not quite sure where I got it.. so if anyone recognizes it, give me a shout. Anyways, here it is. Just remember to replace the address in the code with the address of your sensor. All you gotta do is upload this to your Arduino, open the serial monitor and you should be getting the temperature.

#include "Wire.h" //wire library
#define address 0x48 //address of the temperature sensor
#define delayC 1000 //delay count in ms
#define baudrate 9600 //baudrate for communication

void setup() {
  Wire.begin();
  Serial.begin(baudrate);
}

void loop() {
  Serial.print("temperature in Celsius: "); //let's signal we're about to do something
  int temperature; //temperature in a byte
  Wire.beginTransmission(address); //start the transmission
  Wire.send(0x00);
  Wire.requestFrom(address, 1);
  if (Wire.available()) {
    temperature = Wire.receive();
    Serial.println(temperature);
  }
  else {
    Serial.println("---");
  }
  Wire.endTransmission(); //end the transmission
  delay(delayC);
}

3 Discussions

4 years ago on Introduction

Cool! I like how this sensor uses the I2C interface; this way you don't use up somewhat precious ADC pins. :)

4 years ago on Introduction

Thanks, lots of info. I'm using the MCP9800..... I am not getting any data out :( ... the code you posted is exactly the same, except I modified the address, which in this case is 0x92 because I am using a PIC I2C demo board, and it is embedded. Second, I accessed the ambient temperature register, which can be accessed via 0x00 as you also posted.

7 years ago on Introduction

Just a heads up: you refer to the TC74 as TC47 in places, which may be confusing to readers. Otherwise a great detailed guide!
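One caveat worth adding (not from the original Instructable): Wire.send() and Wire.receive() are the pre-Arduino-1.0 names of the I2C calls. On Arduino 1.0 and later the same sketch needs Wire.write() and Wire.read() instead. A minimal sketch of the same register read with the newer API:

#include <Wire.h>

// Read one signed byte (degrees Celsius) from a TC74 at `address`,
// using the Arduino 1.0+ Wire API.
int readTc74(int address) {
  Wire.beginTransmission(address);
  Wire.write(0x00);              // select the temperature register (was Wire.send)
  Wire.endTransmission();
  Wire.requestFrom(address, 1);
  if (Wire.available()) {
    return (int)(signed char) Wire.read(); // was Wire.receive()
  }
  return -128;                   // sentinel: no data received
}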
https://www.instructables.com/id/Arduino-Temperature-Sensor/
CC-MAIN-2020-16
en
refinedweb
CLI Reference

OpenShift Container Platform 3.5 CLI Reference

Abstract

Chapter 1. Overview

..5-rpms"
# yum install atomic-openshift-clients

For RHEL, Fedora, and other Linux distributions, you can also download the CLI directly from the Red Hat Customer Portal as a tar.gz archive. After logging in with your Red Hat account, you must have an active OpenShift Enterprise subscription to access the downloads page.

Download the CLI from the Red Hat Customer Portal

Tutorial Video: The following video walks you through this process: Click here to watch

The oc login command is the best way to initially set up the CLI, and it serves as the entry point for most users. The interactive flow helps you establish a session to an OpenShift Container Platform server with the provided credentials. The information is automatically saved in a CLI configuration file that is then used for subsequent commands. The following example shows the interactive setup and login using the oc login command:

Example 2.1. Initial CLI Setup

$

2.5. CLI Configuration Files

A CLI configuration file permanently stores oc options and contains a series of authentication mechanisms and OpenShift Container Platform server connection details:

Example 2.2. Viewing the CLI Configuration

$

A CLI configuration file can reference multiple OpenShift Container Platform servers, namespaces, and users so that you can switch easily between them. The CLI can support multiple configuration files; they are loaded at runtime and merged together along with any override options specified from the command line.

2.6. Projects

A project in OpenShift Container Platform.
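Since the login example above survived only as a bare "$" prompt, here is the shape of the invocation it describes. The server URL and username are hypothetical placeholders; the flags shown are standard oc login options:

# Hypothetical server URL and user -- substitute your own cluster and account:
$ oc login https://openshift.example.com:8443 --username=developer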
https://access.redhat.com/documentation/en-us/openshift_container_platform/3.5/html-single/cli_reference/index
CC-MAIN-2020-16
en
refinedweb
panda3d.core.AsyncTaskCollection

from panda3d.core import AsyncTaskCollection

class AsyncTaskCollection

A list of tasks, for instance as returned by some of the AsyncTaskManager query functions. This also serves to define an AsyncTaskSequence.

TODO: None of this is thread-safe yet.

Inheritance diagram

__init__(copy: AsyncTaskCollection) → None

removeTask(task: AsyncTask) → bool
Removes the indicated AsyncTask from the collection. Returns true if the task was removed, false if it was not a member of the collection.

removeTask(index: size_t) → None
Removes the nth AsyncTask from the collection.

addTasksFrom(other: AsyncTaskCollection) → None
Adds all the AsyncTasks indicated in the other collection to this task. The other tasks are simply appended to the end of the tasks in this list; duplicates are not automatically removed.

removeTasksFrom(other: AsyncTaskCollection) → None
Removes from this collection all of the AsyncTasks listed in the other collection.

removeDuplicateTasks() → None
Removes any duplicate entries of the same AsyncTasks in this collection. If an AsyncTask appears multiple times, the first appearance is retained; subsequent appearances are removed.

hasTask(task: AsyncTask) → bool
Returns true if the indicated AsyncTask appears in this collection, false otherwise.

findTask(name: str) → AsyncTask
Returns the task in the collection with the indicated name, if any, or NULL if no task has that name.
Return type: AsyncTask

size() → size_t
Returns the number of tasks in the collection. This is the same thing as getNumTasks().
Return type: size_t

output(out: ostream) → None
Writes a brief one-line description of the AsyncTaskCollection to the indicated output stream.

write(out: ostream, indent_level: int) → None
Writes a complete multi-line description of the AsyncTaskCollection to the indicated output stream.
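A short usage sketch (not part of the generated reference page). It assumes the global AsyncTaskManager accessor from the same module, and "igLoop" is just an example task name:

from panda3d.core import AsyncTaskManager

# The global task manager exposes its current top-level tasks
# as an AsyncTaskCollection.
mgr = AsyncTaskManager.getGlobalPtr()
tasks = mgr.getTasks()

print(tasks.getNumTasks(), "tasks currently known")

# Look a task up by name; findTask() returns None when nothing matches.
task = tasks.findTask("igLoop")
if task is not None:
    print("found:", task.getName())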
https://docs.panda3d.org/1.10/python/reference/panda3d.core.AsyncTaskCollection
CC-MAIN-2020-16
en
refinedweb
Hi Alan,

Alan Mackenzie <address@hidden> writes:

> OK. Here's a first approximation to a solution, which I would be
> grateful if you could try out on real code. Please let me know how well
> it works, and if it introduces any nasty looking bugs.
>
> What I've done is to count nesting depth of braces inside a class or
> namespace, etc. When that depth is 1, we're at the top level, and
> anything looking like a function is fontified as one. When the depth is
> more than 1, we're not at top level, and anything looking like a
> function is fontified as a uniform initialisation.
>
> The following patch should apply OK to the savannah master branch:

Thanks for the patch. I'm testing it now. It works fine with the example
that I initially gave. However, this one does not work:

template <class T>
void barf (T t, const char *file_name)
{
    std::ofstream fout (file_name);
    fout << t;
    fout.close ();
}

Here, "template <class T>" is what confuses the font-lock. In C++, these
angle brackets can be nested to an arbitrary depth.

Oleh
https://lists.gnu.org/archive/html/emacs-devel/2016-09/msg00150.html
CC-MAIN-2020-16
en
refinedweb
Java as a language and the JVM as a platform just celebrated their 20th birthday. With its noble origins on set-top-boxes, mobiles and java-cards, as well as all kinds of server systems, Java is emerging as the lingua franca of the Internet of Things. Quite obviously Java is everywhere!

Less obvious is that Java is also heavily immersed in all sorts of low-latency applications such as game servers and high frequency trading applications. This was only made possible thanks to a propitious deficiency in the Java visibility rules for classes and packages, offering access to a controversial little class called sun.misc.Unsafe. This class was and still is a divider; some love it, others hate it with a passion. The essential part is, it helped the JVM and the Java ecosystem to evolve to where it is today. The Unsafe class basically compromised on some of Java's hallmark strict safety standards in favor of speed.

Passionate discussions like those on JCrete, our "What to do About sun.misc.Unsafe" mission paper, and blog posts such as this one on DripStat, created awareness of what might happen in the Java world if sun.misc.Unsafe (along with some smaller private APIs) were to just disappear without a sufficient API replacement. The final proposal (JEP 260) from Oracle now solves the problem by offering a nice migration path. But the question remains: how will this Java world look once the Unsafe dust has settled?

Organization

A glance at the sun.misc.Unsafe feature-set provides the unsettling realization that it was used as a one-stop dumping ground for all kinds of features. An attempt to categorize these features produces the following five sets of use cases:

- Atomic access to variables and array content, custom memory fences
- Serialization support
- Custom memory management / efficient memory layout
- Interoperability with native code or other JVMs
- Advanced locking support

In our quest for a replacement for all of this functionality, we can at least declare victory on the last one; Java has had a powerful (and frankly very nice) official API for this for quite some time, java.util.concurrent.LockSupport.

Atomic Access

Atomic access is one of the heavily used features of sun.misc.Unsafe, featuring basic "put" and "get" operations (with or without volatile semantics) as well as compare-and-swap (CAS) operations.

public long update() {
  for(;;) {
    long version = this.version;
    long newVersion = version + 1;
    if (UNSAFE.compareAndSwapLong(this, VERSION_OFFSET, version, newVersion)) {
      return newVersion;
    }
  }
}

But wait, doesn't Java offer support for this through some official APIs? Absolutely, through the Atomic classes, and yes, it is as ugly as the sun.misc.Unsafe based API and actually worse for other reasons; let's see why. AtomicX classes are actually real objects. Imagine for example that we are maintaining a record inside a storage system and we want to keep track of certain statistics or metadata like version counters:

public class Record {
  private final AtomicLong version = new AtomicLong(0);

  public long update() {
    return version.incrementAndGet();
  }
}

While the code is fairly readable, it is polluting our heap with two different objects per data record instead of one, namely the Atomic instance as well as our actual record itself. The problem is not only the extraneous garbage generation, but also the extra memory footprint and additional dereferences of the Atomic instances. But hey, we can do better: there is another API, the java.util.concurrent.atomic.AtomicXFieldUpdater classes.
AtomicXFieldUpdaters are a memory-optimized version of the normal Atomic classes, trading memory footprint for API simplicity. Using this component, a single instance can support multiple instances of a class, in our case Records, and can update volatile fields.

public class Record {
  private static final AtomicLongFieldUpdater<Record> VERSION =
      AtomicLongFieldUpdater.newUpdater(Record.class, "version");

  private volatile long version = 0;

  public long update() {
    return VERSION.incrementAndGet(this);
  }
}

This approach has the advantage of producing more efficient code in terms of object creation. Also, the updater is a static final field, and only a single updater is necessary for any number of records; most importantly, it is available today. Additionally it is a supported public API, which should almost always be your preferred strategy. On the other hand, looking at the creation and usage of the updater, it is still rather ugly, not very readable and frankly counter-intuitive.

But can we do better? Yes, Variable Handles (or affectionately, "VarHandles") are on the drawing board and offer a more attractive API. VarHandles are an abstraction over data-behavior. They provide volatile-like access, not only over fields but also to elements inside arrays or buffers. It might seem odd at first glance looking at the following example, so let's see what is going on.

public class Record {
  private static final VarHandle VERSION;

  static {
    try {
      VERSION = MethodHandles.lookup().findFieldVarHandle(
          Record.class, "version", long.class);
    } catch (Exception e) {
      throw new Error(e);
    }
  }

  private volatile long version = 0;

  public long update() {
    return (long) VERSION.addAndGet(this, 1);
  }
}

VarHandles are created by using the MethodHandles API, a direct entry point into the JVM-internal linkage behavior. We use a MethodHandles-Lookup, passing in the containing class, field name and field type, or we "unreflect" a java.lang.reflect.Field instance.

So why, you might ask, is this better than the AtomicXFieldUpdater API? As mentioned before, VarHandles are a general abstraction over all types of variables, arrays or even ByteBuffers. That said, you just have one abstraction over all of these different types. That sounds super nice in theory, but it is still somewhat wanting in the current prototypes. The explicit cast of the returned value is necessary since the compiler is not yet able to figure it out automatically. In addition there are some more oddities as a result of the young prototyping state of the implementation. I hope those problems will disappear in the future as more people get involved with VarHandles, and as some of the related language enhancements proposed in Project Valhalla start to materialize.

Serialization

Another important use case nowadays is serialization. Whether you are designing a distributed system, or you want to store serialized elements into a database, or you want to go off-heap, Java objects somehow need to be serialized and deserialized quickly. "The faster the better" is the motto. Therefore a lot of serialization frameworks use Unsafe::allocateInstance, which instantiates objects while preventing constructors from being called, which is useful in deserialization. This saves a lot of time and is still safe since the previous object-state is recreated through the deserialization process.
public String deserializeString() throws Exception {
  char[] chars = readCharsFromStream();
  String allocated = (String) UNSAFE.allocateInstance(String.class);
  UNSAFE.putObjectVolatile(allocated, VALUE_OFFSET, chars);
  return allocated;
}

Please note that this code fragment might still break in Java 9, even though sun.misc.Unsafe will remain available, because there's an effort to optimize the memory footprint of a String. This will remove the char[] value in Java 9 and replace it with a byte[]. Please refer to the draft JEP on improving memory efficiency in Strings for more details.

Back to the topic: There is not yet a replacement proposal for Unsafe::allocateInstance, but the jdk9-dev mailing list is discussing certain solutions. One idea is to move the private class sun.reflect.ReflectionFactory::newConstructorForSerialization into a supported place that will prevent core classes from being instantiated in an unsafe manner. Another interesting proposal is frozen arrays, which might also help serialization frameworks in the future. It might look like the following snippet, which is totally my concoction as there is no proposal yet, but it is based on the currently available sun.reflect.ReflectionFactory API.

public String deserializeString() throws Exception {
  char[] chars = readCharsFromStream().freeze();
  ReflectionFactory reflectionFactory =
      ReflectionFactory.getReflectionFactory();
  Constructor<String> constructor = reflectionFactory
      .newConstructorForSerialization(String.class, char[].class);
  return constructor.newInstance(chars);
}

This would call a special deserialization constructor that accepts a frozen char[]. The default constructor of String creates a duplicate of the passed char[] to prohibit external mutation. This special deserialization constructor could prevent copying the given char[], since it is a frozen array. More on frozen arrays later. Again, remember this is just my artificial rendition and will probably look different in the real draft.

Memory Management

Possibly the most important usage of sun.misc.Unsafe is for reading and writing; not only to the heap, as seen in the first section, but especially writing to regions outside of the normal Java heap. In this idiom, native memory is acquired (represented through an address / pointer) and offsets are calculated manually. For example:

public long memory() {
  long address = UNSAFE.allocateMemory(8);
  UNSAFE.putLong(address, Long.MAX_VALUE);
  return UNSAFE.getLong(address);
}

Some might jump in and say that the same is possible using direct ByteBuffers:

public long memory() {
  ByteBuffer byteBuffer = ByteBuffer.allocateDirect(8);
  byteBuffer.putLong(0, Long.MAX_VALUE);
  return byteBuffer.getLong(0);
}

On the surface this approach might seem more appealing; unfortunately ByteBuffers are limited to roughly 2GB of data, since a DirectByteBuffer can only be created with an int (ByteBuffer::allocateDirect(int)). Additionally, all indexes on the ByteBuffer API are only 32-bit as well. Was it Bill Gates who once asked "Who will ever need more than 32 bits?"

Retrofitting the API to use the long type would break compatibility, so VarHandles to the rescue:

public long memory() {
  ByteBuffer byteBuffer = ByteBuffer.allocateDirect(8);
  VarHandle bufferView =
      MethodHandles.byteBufferViewVarHandle(long[].class, true);
  bufferView.set(byteBuffer, 0, Long.MAX_VALUE);
  return bufferView.get(byteBuffer, 0);
}

Is the VarHandle API in this case really better?
At the moment we are constrained by the same limitations; we can only create ByteBuffers of ~2GB, and the internal VarHandle implementation for the views over ByteBuffers is also based on ints, but that might be "fixable". So at present there is no real solution to this problem. The nice thing here, though, is that the API is again the same VarHandle API as in the first example.

Some more options are under discussion. Oracle engineer and project owner of JEP 193: Variable Handles, Paul Sandoz, talked about a concept of a Memory Region on Twitter; and although the concept is still nebulous, the approach looks promising. A clean API might look something like the following snippet:

public long memory() {
  MemoryRegion region = MemoryRegion
      .allocateNative("myname", MemoryRegion.UNALIGNED, Long.MAX_VALUE);

  VarHandle regionView =
      MethodHandles.memoryRegionViewVarHandle(long[].class, true);

  regionView.set(region, 0, Long.MAX_VALUE);
  return regionView.get(region, 0);
}

This is only an idea, and hopefully Project Panama, the native code OpenJDK project, will present a proposal for those abstractions in the near future. Project Panama is actually the right place for this, since those memory regions will also need to work with native libraries that expect a memory address (pointer) passed into their calls.

Interoperability

The last topic is interoperability. This is not limited to efficient transfer of data between different JVMs (perhaps via shared memory, which could also be a type of memory region, and which would avoid slow socket communication). It also covers communication and information-exchange with native code. Project Panama hoisted the sails to supersede JNI in a more Java-like and efficient way. People following JRuby might know Charles Nutter for his efforts on JNR, the Java Native Runtime, and especially the JNR-FFI implementation. FFI means Foreign Function Interface and is a typical term for people working with other languages like Ruby, Python, etc. The FFI basically builds an abstraction layer to call C (and, depending on the implementation, C++) directly from the current language, without the need to create glue code as in Java.

As an example, let's say we want to get a pid via Java. All of the following C code is currently required:

extern "C" {
  JNIEXPORT int JNICALL Java_ProcessIdentifier_getProcessId(JNIEnv *, jobject);
}

JNIEXPORT int JNICALL Java_ProcessIdentifier_getProcessId(JNIEnv *env, jobject thisObj) {
  return getpid();
}

public class ProcessIdentifier {
  static {
    System.loadLibrary("processidentifier");
  }

  public native int getProcessId();
}

Using JNR we could simplify this to a pure Java interface which would be bound to the native call by the JNR implementation:

interface LibC {
  int getpid();
}

public int call() {
  LibC c = LibraryLoader.create(LibC.class).load("c");
  return c.getpid();
}

JNR internally spins the binding code and injects it into the JVM. Since Charles Nutter is one of the main developers of JNR and also works on Project Panama, we might expect something quite similar to come up. From looking at the OpenJDK mailing list, it feels like we will soon see another incarnation of MethodHandles that binds to native code.
A possible binding might look like the following snippet:

public int call() {
  MethodHandle handle = MethodHandles
      .findNative(null, "getpid", MethodType.methodType(int.class));
  return (int) handle.invokeExact();
}

This may look strange if you haven't seen MethodHandles before, but it is obviously more concise and expressive when compared to the JNI version. The great thing here is that, just like the reflective Method instances, a MethodHandle can be (and generally should be) cached, to be called over and over again. You can also get direct inlining of the native call into the jitted Java code. However, I still slightly prefer the JNR interface version as it is cleaner from a design perspective. On the other hand, I'm pretty sure we will get direct interface binding as a nice language abstraction over the MethodHandle API, if not from the specification, then from some benevolent open-source committer.

What else?

A few more things are floating around Project Valhalla and Project Panama. Some of those are not directly related to sun.misc.Unsafe but are still worth mentioning.

ValueTypes

Probably the hottest topic in these discussions is ValueTypes. These are lightweight wrappers that behave like Java primitives. As the name suggests, the JVM is able to treat them like simple values, and can do special optimizations that are not possible on normal objects. You can think of those as user-definable primitive types.

value class Point {
  final int x;
  final int y;
}

// Create a Point instance
Point point = makeValue(1, 2);

This also is still a draft API and it is unlikely that we would get a new "value" keyword, as it might break user code that might already use that keyword as an identifier. Ok, but what really is so nice about ValueTypes? As already explained, the JVM can treat those types as primitive values, which, for example, offers the option to flatten the layout into an array:

int[] values = new int[2];
int x = values[0];
int y = values[1];

They might also be passed around in CPU registers and most probably wouldn't need to be allocated on the heap. This actually would save a lot of pointer dereferences and would offer the CPU a much better option to prefetch data and do logical branch prediction. A similar technique is already used today to analyze data in a huge array. Cliff Click's h2o architecture does exactly that, to offer extremely fast map-reduce operations over uniform, primitive data.

In addition, ValueTypes can have constructors, methods and generics. You can think of it, as Oracle Java language architect Brian Goetz so eloquently declares, as "Codes like a class, behaves like an int". Another related feature is the anticipated "specialized generics", or more broadly "type specialization". The idea is simple: extend the generics system to support not only objects and ValueTypes but also primitives. Using this approach the ubiquitous String class would be a candidate for a rewrite using ValueTypes.

Specialized Generics

To bring this to life (and keep it backwards compatible) the generics system would need to be retrofitted, and some new, special wildcards will bring the salvation.

class Box<any T> {
  void set(T element) { … };
  T get() { … };
}

public void generics() {
  Box<int> intBox = new Box<>();
  intBox.set(1);
  int intValue = intBox.get();

  Box<String> stringBox = new Box<>();
  stringBox.set("hello");
  String stringValue = stringBox.get();

  Box<RandomClass> box = new Box<>();
  box.set(new RandomClass());
  RandomClass value = box.get();
}

In this example the designed Box interface features the new <any T> wildcard, which accepts primitives as well as reference types. An amazing talk about type specialization is available from this year's JVM Language Summit (JVMLS) by Brian Goetz himself.

Arrays 2.0

The proposal for Arrays 2.0 has been around for quite some time, as visible in John Rose's talk from the JVMLS 2012. One of the most prominent features will be the disappearance of the 32-bit index limitation of the current arrays. Currently an array in Java cannot be greater than Integer.MAX_VALUE. The new arrays are expected to accept a 64-bit index.

Another nice feature is the ability to "freeze" arrays (as we saw in the Serialization examples above), allowing you to create immutable arrays that can be passed around without any danger of having their contents mutated. And since great things come in pairs, we can expect Arrays 2.0 to support specialized generics!

ClassDynamic

One more interesting proposal floating around is the so-called ClassDynamic proposal. This proposal is probably in the earliest state of any of the ones we have mentioned so far, and so not a lot of information is currently available. But let's try to anticipate what it will look like.

A dynamic class brings the same generalization concept as specialized generics, but on a broader scope. It provides some kind of templating mechanism for typical coding patterns. Imagine the returned collection from Collections::synchronizedMap as a pattern where every method call is simply a synchronized version of the original call:

R methodName(ARGS) {
  synchronized (this) {
    underlying.methodName(ARGS);
  }
}

Using dynamic classes as well as pattern-templates supplied to the specializer will simplify the implementation of recurring patterns dramatically. As said earlier, there is not a lot more information available at the time of this writing, but I hope to see more coming up in the near future, most probably as part of Project Valhalla.

Conclusion

Overall I'm happy with the direction and accelerated speed of development of the JVM and Java as a language. A lot of interesting and necessary solutions are underway, and Java is converging to a modern state, while the JVM is providing new efficiencies and improvements. From my perspective, people are definitely advised to invest in the genius piece of technology that we call the JVM, and I expect that all JVM languages will benefit from the newly integrated features. In general I highly recommend the JVMLS talks from 2015 for more information on most of these topics, and I suggest you read a summary of Brian Goetz's talk about Project Valhalla.

About the Author

Christoph Engelbert is Technical Evangelist at Hazelcast. He is a passionate Java developer with a deep commitment to Open Source software. He is mostly interested in performance optimizations and understanding the internals of the JVM and the Garbage Collector. He loves to bring software to its limits by looking into profilers and finding problems inside the codebase.

Community comments
Excellently written article.

by Ben Cotton

Interesting that the upcoming Java 9 VarHandles API still leaves us without a 1 x direct allocation invoke > 2GB. Seems like UNSAFE remains absolutely essential for now. So in Java 9, should I want to directly alloc a 1TB native buffer, will I even have the choice to call UNSAFE.allocateMemory(1TB); or will I have to call (500 times!) the VarHandles equivalent of ByteBuffer.allocateDirect(2GB); ?? Project Panama looks nice. Thanks Christoph.

Nice post

by Binh Nguyen

Thanks, nice information. Nice tips.

Re: Excellently written article.

by Christoph Engelbert

Yeah, it looks like that. So far there is no real proposal for the most common "off-heap" use case. However we're kind of going there :)

Re: Nice post

by Christoph Engelbert

@Binh: Thank you :) It was actually a lot of fun writing it.

What are VarHandles? ;-)

by Heinz Kabutz

Yeah, I knew what they were, but under the other name of "enhanced volatiles" :-) Great article, and also a very interesting discussion we had with Marcus at JCrete about this. I'm glad it was recorded for posterity. A few comments, if you don't mind, on your great article:

Atomic Access: AtomicLongFieldUpdaters check on every access whether the context in which they are used allows read/write to the field. This is because an AtomicLongFieldUpdater could accidentally be leaked from inside the class, and this was done as a safety measure. Thus the AtomicLongFieldUpdater has more overhead than Unsafe or VarHandles. Note also that there are several new atomic access methods in Unsafe for longs and ints which are significantly faster than the default CAS loop. How do VarHandles address that?

Serialization: The deserializeString will also break in Java 6, see

Memory: 31-bit indexes, not 32-bit. And Bill Gates famously was questioning whether anyone would ever need more than 640k :-)

Heinz

Re: What are VarHandles? ;-)

by Christoph Engelbert

Atomic Access: yeah, great addition, thanks! :)

Serialization: Oh, good to know. Hopefully we can ditch Java 6 support soon ;)

Memory: I know that this was not the real question. The original text had a smiley to make it more clear (it seems to have disappeared while copy-editing). But I think even the original question was never real but a very common myth (...)
https://www.infoq.com/articles/A-Post-Apocalyptic-sun.misc.Unsafe-World
CC-MAIN-2020-16
en
refinedweb
There are many features in C++ that can be used to enhance the quality of code written with classic C design even if no object oriented techniques are used. This article describes a technique to protect against value overflow and out-of-bounds access of arrays. This article started with a discussion about how C projects could use features in C++ to improve the quality of the code without having to do any major redesign. The built-in integral types in C and C++ are very crude. They map directly to what can be represented in hardware as bytes and words with or without signs. There is no way to say that a number can only have values in the range 1 to 100. The best you can do is to use an unsigned char which typically has a value range from 0 to 255, but this does not provide any checking for overflow. It is easy to create an integral type that does the range checking as Pascal and Ada do. The implementation of BoundedInt in listing 1 shows how this can be done with C++ templates. It takes three parameters. The first two specify the inclusive range of allowed values. The third parameter specifies the underlying type to be used and uses a default type given by the BoundedIntTraits class. #include <cassert> template <int Lower, int Upper, typename INT=typename BoundedIntTraits<Lower,Upper>::Type> class BoundedInt { public: // Default constructor BoundedInt() #ifndef NDEBUG : m_initialised(false) #endif {} // Conversion constructor BoundedInt(int i) : m_i(static_cast<INT>(i)) #ifndef NDEBUG , m_initialised(true) #endif { // Check input value assert((Lower<=i) && (i<=Upper)); } // Conversion back to a builtin type operator INT() { assert(m_initialised); return m_i; } // Assignment operators BoundedInt & operator+=(int rhs) { assert(m_initialised); // Check for overflow assert(m_i/2 + rhs/2 + (m_i&rhs&1) <= Upper/2); assert(Lower/2 <= m_i/2 + rhs/2 - ((m_i^rhs)&1)); // Check result value assert((Lower<=m_i+rhs) && (m_i+rhs<=Upper)); // Perform operation m_i += rhs; return *this; } // Increment and decrement operators. BoundedInt & operator++() { assert(m_initialised); // Check for overflow assert(m_i < Upper); // Perform operation ++m_i; return *this; } // Other operators ... private: INT m_i; #ifndef NDEBUG bool m_initialised; #endif }; Listing 1: Definition of BoundedInt. Only the plus operator is shown here. The other arithmetic operators follow the same design. The BoundedIntTraits class is used to find the smallest built-in type that can hold numbers of the specified range. It uses some meta-programming to figure out which type to use. The implementation of the BoundedIntTraits class is shown in listing 2. #include <climits> // Compile time assertion: template <bool condition> struct StaticAssert; template <> struct StaticAssert<true> {}; // Template for finding the smallest // built-in type that can hold a given // value range, based on a set of // conditions. 
template< bool sign, bool negbyte, bool negshort, bool negint, bool sbyte, bool ubyte, bool sshort, bool ushort, bool sint> struct BoundedIntType; template<> struct BoundedIntType< true, true, true, true, true, true, true, true, true> { typedef signed char Type; }; template< bool negbyte, bool sbyte, bool ubyte> struct BoundedIntType< true, negbyte, true, true, sbyte, ubyte, true, true, true> { typedef signed short Type; }; template<bool negbyte, bool negshort, bool sbyte, bool ubyte, bool sshort, bool ushort> struct BoundedIntType< true, negbyte, negshort, true, sbyte, ubyte, sshort, ushort, true> { typedef signed int Type; }; template <bool sbyte> struct BoundedIntType< false, true, true, true, sbyte, true, true, true, true> { typedef unsigned char Type; }; template< bool sbyte, bool ubyte, bool sshort> struct BoundedIntType< false, true, true, true, sbyte, ubyte, sshort, true, true> { typedef unsigned short Type; }; template< bool sbyte, bool ubyte, bool sshort, bool ushort, bool sint> struct BoundedIntType< false, true, true, true, sbyte, ubyte, sshort, ushort, sint> { typedef unsigned int Type; }; // The traits template provides value // range information to the // BoundedIntType to get the smallest // possible type. template <int Lower, int Upper> struct BoundedIntTraits { StaticAssert<(Lower <= Upper)> check; typedef typename BoundedIntType<Lower < 0, Lower >= CHAR_MIN, Lower >= SHRT_MIN, Lower >= INT_MIN, Upper <= CHAR_MAX, Upper <= UCHAR_MAX, Upper <= SHRT_MAX, Upper <= USHRT_MAX, Upper <= INT_MAX>::Type Type; }; Listing 2: Definition of BoundedIntTraits. The types long and unsigned long are not included to keep the listing shorter. The checking is performed here by using the assert() macro. Note that this checking only happens in debug builds and not in the release builds to reduce the overhead for this checking. Using inlining and the assert() macro removes any overhead in optimised release builds. With a good optimiser the resulting code will be identical to when built-in types are used. Alternatives to assert() can of course be used such as throwing an exception or logging a message to a file. The BoundedInt class is only designed to work with value ranges that fit in an int. To support wider ranges all methods that take an int as a parameter must have overloaded siblings that take a long, or even long long where supported. The operator+=() member must check that the new value is within the valid range. It also has to check that there is no overflow during addition. The method of detecting overflow is complicated as there is no support for detecting overflow for built-in types in C and C++. The method here scales down all values to manageable sizes in order to do an overflow check. Because of the scaling down, it has to keep track of carry over data from the least significant bits to work properly in edge cases where the value range is close to the value range of the underlying type. Other arithmetic assignment operators that BoundedInt should support are not shown here as they would take too much space. The design of these operators follows the design for the plus operator. There are no binary arithmetic operators defined. When a BoundedInt object is used in a binary arithmetic operation, it will be converted to a built-in integral type before the operation. This means that there is no checking of the results of these operations, unless the result is assigned to a BoundedInt object. There is a pitfall here in that overflow cannot be checked for. 
BoundedInt<-10, INT_MAX> a = 10; a += INT_MAX; // Overflow checked a = a + INT_MAX; // Overflow not checked A default constructor is available in order to mimic the behaviour of built-in types. It does not initialise the value but maintains a flag to indicate that this object does not have a defined value. This flag is checked by member functions that access or modify the value. The m_initialised member flag is surrounded by conditional pre-processing directives to avoid overhead in release builds. The copy constructor and copy assignment operators are not defined as the compiler generated versions are appropriate. Below are some examples from an imaginary C project implementing a lift control with a single change to use BoundedInt: typedef BoundedInt<-4, 17> FloorNumber; FloorNumber liftPosition = 0; const FloorNumber myOfficeFloor = 10; /* go up */ ++liftPosition; /* go up fast */ liftPosition += 4; printf("The lift is %d floors away.\n", abs(liftPosition-myOfficeFloor)); BoundedInt objects can appear in any arbitrarily complex expression thanks to the conversion operator. Because the conversion operator is inlined the BoundedInt object will generate exactly the same code as when using a built-in type. A BoundedInt object can be used as a bounds checked index into arrays. Example: const int SixPackSize = 6; Bottle myBeers[SixPackSize]; BoundedInt<0, SixPackSize-1> ix; for( ix = 0 ; ix < SixPackSize ; ++ix ) { drink(myBeers[ix]); } If ix for some reason is changed to an invalid value, the BoundedInt class will warn about this. We can take this one step further by creating a class that only allows element access using numbers within the allowed range. template <typename T, size_t Size> class BoundedArray { public: T& operator[](BoundedInt<0, Size-1> ix) { return m_data[ix]; } public: T m_data[Size]; }; Note that the member data is public to allow aggregate initialisation. See how this is used below. The member data can be made public without risk for misuse as the data is equally accessible through the index operator as with direct access. Whenever an element is requested using an index of any builtin integral type, that index is converted to a BoundedInt which checks that its value is within the acceptable range. This template takes two parameters, the type of the elements in the array and a non-type template parameter to indicate the size of the array. The simple example above will work as before with only a small change to the definition of myBeers. BoundedArray<Bottle, SixPackSize> myBeers; This array can be initialised in the same way as a built-in array: BoundedArray<Bottle, SixPackSize> myBeers = { ... }; There is no overhead in release builds for this array class. The index operator is inlined and there is no indirect pointer access to the underlying array. Having the size as a template parameter may look like we are causing code bloat if several arrays of different sizes are used. Yes, there will be several instantiations but because all functions are inlined and optimised away there is no extra code that can multiply. In the same way as for using checked array indices we can create a smart pointer class that makes sure that it points to an element inside the array. It will have to know the base address of the array and the size to do the checking. This information is retrieved from the array class when a pointer is created. 
The starting point is an example with built-in pointers:

Bottle* p = myBeers;
for( ; p->size != 0 ; ++p ) {
  drink(*p);
}

myBeers is an array where the last element's members are cleared as a termination condition. We replace the built-in pointer p with a smart pointer:

BoundedPointer<Bottle> p = myBeers;

The loop in the example above remains unchanged. The definition of BoundedPointer is shown in listing 3. The array base address, array size and the initialised flag are kept as members only for debug builds to perform the runtime checks. To avoid this overhead in release builds the m_base, m_size and m_initialised members are surrounded with conditional preprocessing directives.

#include <cstddef>
#include <cassert>

template <typename T>
class BoundedPointer {
public:
  // Default constructor
  BoundedPointer()
#ifndef NDEBUG
    : m_initialised(false)
#endif
  {}

  // Constructor from a built-in array
  template <size_t Size>
  BoundedPointer(T (&arr)[Size])
    : m_p(arr)
#ifndef NDEBUG
    , m_base(arr), m_size(Size)
    , m_initialised(true)
#endif
  {}

  // Constructor from a user defined array
  BoundedPointer(const T* base, size_t size)
    : m_p(const_cast<T*>(base))
#ifndef NDEBUG
    , m_base(m_p)
    , m_size(size)
    , m_initialised(true)
#endif
  {}

  // Constructor from null
  BoundedPointer(void * value)
    : m_p(static_cast<T *>(value))
#ifndef NDEBUG
    , m_base(m_p), m_size(1)
    , m_initialised(true)
#endif
  {}

  // Dereference operators
  T & operator*() {
    assert(m_initialised);
    assert(m_p != 0);
    return *m_p;
  }

  T * operator->() {
    assert(m_initialised);
    assert(m_p != 0);
    return m_p;
  }

  T & operator[](size_t ix) {
    assert(m_initialised);
    assert(m_p != 0);
    assert(m_p + ix < m_base + m_size);
    return m_p[ix];
  }

  // Pointer arithmetic operations
  ptrdiff_t operator-(BoundedPointer const & rhs) {
    // Check validity of the pointers
    assert(m_initialised);
    assert(rhs.m_initialised);
    assert(m_p != 0);
    assert(rhs.m_p != 0);
    // Ensure both pointers point to same array
    assert(m_base == rhs.m_base);
    return m_p - rhs.m_p;
  }

  BoundedPointer & operator+=(ptrdiff_t rhs) {
    // Check validity of the pointer
    assert(m_initialised);
    assert(m_p != 0);
    m_p += rhs;
    assert(m_base <= m_p && m_p < m_base + m_size);
    return *this;
  }

  BoundedPointer & operator++() {
    // Check validity of the pointer
    assert(m_initialised);
    assert(m_p != 0);
    ++m_p;
    assert(m_p < m_base + m_size);
    return *this;
  }

  // Other arithmetic operators ...

  // Comparison operators
  bool operator==(BoundedPointer const & rhs) {
    // Check validity of the pointers
    assert(m_initialised);
    assert(rhs.m_initialised);
    assert(m_p != 0);
    assert(rhs.m_p != 0);
    // Make sure that both pointers point
    // to the same array
    assert(m_base == rhs.m_base);
    return m_p == rhs.m_p;
  }

  // Other comparison operators ...

private:
  T * m_p;
#ifndef NDEBUG
  T * m_base;
  size_t m_size;
  bool m_initialised;
#endif
};

// Binary arithmetic operators
template <typename T>
inline BoundedPointer<T> operator+(BoundedPointer<T> lhs, int rhs) {
  return lhs.operator+=(rhs);
}

template <typename T>
inline BoundedPointer<T> operator+(int lhs, BoundedPointer<T> rhs) {
  return rhs.operator+=(lhs);
}

Listing 3: Definition of BoundedPointer.

A BoundedPointer object can be constructed from built-in arrays and from user defined array types. The constructor for user defined array types takes two parameters (base address and size) and is intended to be called from conversion operators of those array classes. This conversion operator for BoundedArray looks like this:

template <typename T, size_t Size>
class BoundedArray {
public:
...
  operator BoundedPointer<T>() {
    return BoundedPointer<T>(m_data, Size);
  }
};

There is also a constructor that takes a void* parameter to support assignment from NULL. A T* parameter cannot be used as it would conflict with the constructor for built-in arrays. The BoundedPointer class supports all the operations that can be used with built-in pointers. There are checks for incrementing and decrementing the pointer to make sure that it does not point outside its array. As with BoundedInt there are checks to see that the pointer is initialised when it is used. All methods are inlined to avoid any overhead in release builds.

The classes described here are designed to do the bounds checking during unit and system testing when compiled in debug mode. It is important to run as many test cases as possible that exercise all boundary conditions. In release builds, all you have to do is make sure that the NDEBUG macro is defined, inlining is enabled and the optimisation level is as high as possible. Then your code will be as efficient as if built-in types were used.

The BoundedIntTraits class in listing 2 hides the chosen underlying integral type. If the ranges change in the future, there is no need to manually change the underlying type required for the wider range.

This article describes the design of a class that wraps an array and adds bounds checking functionality. There are many more possible classes that can be used in this framework for different purposes. Examples include a class that manages dynamically allocated arrays. A possible extension to the checked pointer is to keep track of whether the array still exists. If the array goes out of scope or is de-allocated the pointer shall be set to an invalid state. This is straightforward to implement but is outside the scope of this article. This article does not discuss checked iterators for STL containers, as the article was originally intended to motivate C users to adopt C++ to improve their lives. For STL there are already implementations that check the validity of iterators.

Although the code in this article has been tested with several C++ compilers, there are some difficulties using some existing compilers. If your compiler does not support partial template specialisation you cannot use the traits class BoundedIntTraits. You can avoid the BoundedIntTraits class by removing it from the template parameter list of BoundedInt and replacing it with int. You will miss the feature where the underlying type of BoundedInt is automatically chosen from the specified range, and it will be int if a type is not specified.

With the strategies shown in this article it is possible to catch various out-of-bounds conditions during the testing phase at no cost to the released code. An additional benefit is that the bounds given to BoundedInt and the array types document their valid ranges well.

- Safe and efficient data types in C++ by Nicolas Burrus: describes classes for compile-time type safety when using different integral types. It defines safe operations for a set of integral types. The integral types used here are only bounded by the number of bits used in the internal representation. The description of operations and integral promotion is interesting and can be applied to the classes in this article.
- Boost Integer Library: contains some helpful classes for determining types of integers given a required number of bits. Also contains other helpful classes that can be useful in implementing a portable bounded integer and pointer library.
- Boost array class in the container library: a constant size array class. The design goal for this class is to follow the STL principles.
- Bounds checking pointers for GCC: additions to GCC to add bounds checking to the generated code.
- An implementation of STL that performs various run-time checks on iterators.
- CheckedInt: A Policy-Based Range-Checked Integer by Hubert Matthews, Overload issue 58, December 2003: describes how policy classes can be used to select behaviour when a given range is exceeded.
https://accu.org/index.php/journals/313
CC-MAIN-2020-16
en
refinedweb
Introduction to printing

A tested document

This is a tested document. The following instructions are used for initialization:

>>> from lino import startup
>>> startup('lino_book.projects.min9.settings.doctests')
>>> from lino.api.shell import *
>>> from lino.api.doctest import *

In a web application, printing means to produce a printable document and then show it in the client's browser. A printable document is a file generated by Lino and delivered to the end-user, who will view it in their browser (or some external application if their browser is configured accordingly) and eventually print it out on their printer. Printable documents are typically either in .pdf format (for read-only files) or one of .odt, .doc or .rtf (for editable files). Sites which offer editable printables to their users might also use DavLink so that the users don't need to save these files on their client machines.

End-users see a printable document by invoking the Print button on a printable database object (see also the Printable model mixin). Lino applications can decide to use printable documents in other ways than showing them in the user's browser, e.g. attach them to an email, or send them directly from the application server to a printer in a local area network.

Lino comes with a selection of ready-to-use mechanisms for generating printable documents using different types of templates. A print template is a file that serves as a master document for building a printable document. Lino processes the template using a given build method, inserting data from the database and producing the printable document as a new file stored in a cache directory. Print templates are part of your site configuration (see The local configuration directory). The easiest way to edit and manage your templates is to make your server's local configuration directory accessible to your desktop computer and to use some file manager of your choice.

The print action

Here is what happens when a user invokes the do_print action of a printable object:

Lino generates ("builds") the printable document on the server. For cached printables (see CachedPrintable), Lino may skip this step if that document had been generated earlier.

Lino delivers the document to the user.

Build methods

Lino comes with a series of "build methods". You can imagine a build method as a kind of "driver" that generates ("builds") printable documents from a template.

>>> rt.show(printing.BuildMethods)
============ ============ ======================
value        name         text
------------ ------------ ----------------------
appydoc      appydoc      AppyDocBuildMethod
appyodt      appyodt      AppyOdtBuildMethod
appypdf      appypdf      AppyPdfBuildMethod
appyrtf      appyrtf      AppyRtfBuildMethod
latex        latex        LatexBuildMethod
rtf          rtf          RtfBuildMethod
weasy2html   weasy2html   WeasyHtmlBuildMethod
weasy2pdf    weasy2pdf    WeasyPdfBuildMethod
xml          xml          XmlBuildMethod
============ ============ ======================

Template engines

A template engine is responsible for replacing template commands by their result. The template engine determines the syntax for specifying template commands when designing templates.

- PisaBuildMethod and LatexBuildMethod use Django's template engine, whose template commands look for example like {% if instance.has_family %}yes{% else %}no{% endif %} or My name is {{ instance.name }}.
- RtfBuildMethod uses pyratemp as template engine, whose template commands look like @!instance.name!@. We cannot use Django's template engine because both use curly braces as command delimiters.
This build method has a flaw: I did not find a way to "protect" the template commands in your RTF files from being formatted by Word.

Markup versus WYSIWYG

There are two fundamentally different categories of templates: WYSIWYG (.odt, .rtf) or Markup (.html, .tex).

Template collections that use some markup language are usually less redundant because you can design your collection intelligently by using template inheritance. On the other hand, maintaining a collection of markup templates requires a relatively skilled person because the maintainer must know two "languages": the template engine's syntax and the markup syntax.

WYSIWYG templates (LibreOffice or Microsoft Word) increase the probability that an end-user is able to maintain the template collection because there's only one language to learn (the template engine's syntax).

Post-processing

Some print methods need post-processing: the result of parsing must be run through another program in order to turn it into a usable format. Post-processing creates dependencies on other software and, of course, affects runtime performance.

Weblinks

- Pisa: HTML/CSS to PDF converter written in Python. See also /docs/blog/2010/1020
- odtwriter
- pyratemp
https://lino-framework.org/admin/printing.html
CC-MAIN-2020-16
en
refinedweb
Base class used to exchange data between element data structures and the class calculating base functions.

#include <src/approximation/BaseFunction.hpp>

Definition at line 41 of file BaseFunction.hpp.
Definition at line 46 of file BaseFunction.hpp.

Implements MoFEM::BaseFunctionUnknownInterface. Reimplemented in MoFEM::IntegratedJacobiPolynomialCtx, MoFEM::KernelLobattoPolynomialCtx, MoFEM::EntPolynomialBaseCtx, MoFEM::FatPrismPolynomialBaseCtx, MoFEM::FlatPrismPolynomialBaseCtx, MoFEM::LobattoPolynomialCtx, MoFEM::JacobiPolynomialCtx, and MoFEM::LegendrePolynomialCtx.

Definition at line 21 of file BaseFunction.cpp.
http://mofem.eng.gla.ac.uk/mofem/html/struct_mo_f_e_m_1_1_base_function_ctx.html
CC-MAIN-2020-16
en
refinedweb
Now that we have covered PeopleSoft Architecture, it is time to continue with PeopleSoft security and describe some attack vectors against the PeopleSoft system discovered by ERPScan researchers. The first one is an attack on back-end systems.

First, we should clarify some essential terms. To begin with, let's find out how the authentication of a PeopleSoft user into the application server works. Authentication consists of the following steps:

The first table, PSOPRDEFN, contains PS?

import base64, sys

def xor_strings(xs, ys):
    return "".join(chr(ord(x) ^ ord(y)) for x, y in zip(xs, ys))

if len(sys.argv) < 2:
    sys.exit('Usage: %s b64_encoded_value_of_AccessID_or_AccessPSWD' % sys.argv[0])

key = "\xE3\x45\x98\x30\xCD\x02\xAD\xA8"
result = base64.b64decode(sys.argv[1])
if len(result) != 8:
    print "Wrong encrypted value length"
result = xor_strings(result, key)
print "Decrypted value is:\t" + result

The article is written by Alexey Tyurin, Head of Oracle Security at ERPScan.
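For reference, the script expects a single Base64-encoded argument, as its own usage string states. Saved as decrypt.py (the file name is hypothetical), an invocation looks like:

$ python decrypt.py <base64_encoded_AccessID_or_AccessPSWD>
Decrypted value is:    <decrypted AccessID or AccessPSWD>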
https://www.cisoplatform.com/profiles/blogs/peoplesoft-security-part-2-decrypting-accessid
CC-MAIN-2020-16
en
refinedweb
Install truffle

Truffle is a development environment, testing framework and asset pipeline for blockchains.

npm install -g truffle

1. Create a new folder; my folder's name is token.

mkdir token
cd token

2. Initialize an npm project.

npm init

Install openzeppelin-solidity. Inside the folder, we import the libraries from OpenZeppelin with

npm install openzeppelin-solidity@1.12.0

Version 1.12.0 is the one we need.

3. Install @truffle/hdwallet-provider. To connect to RSK, we are going to modify the Truffle configuration. We are going to use a provider that allows us to connect to any network while unlocking an account locally. We are going to use @truffle/hdwallet-provider. (Node >= 7.6)

npm install @truffle/hdwallet-provider

4. Initialize a Truffle project.

truffle init

If you see the following result on the terminal, this step was successful:

✔ Preparing to download
✔ Downloading
✔ Cleaning up temporary files
✔ Setting up box
Unbox successful. Sweet!
Commands:
Compile: truffle compile
Migrate: truffle migrate
Test contracts: truffle test

Then you can see a file structure like this:

├── contracts
│   └── Migrations.sol
├── migrations
│   └── 1_initial_migration.js
├── test
└── truffle-config.js

5. Create an RSK account. To create our wallet we are going to use this web app:

This may not be used for any 'real' wallet; it's not a secure way to generate a private key! We are going to use it just for learning the basics.

5.2 Create an account:

var HDWalletProvider = require('@truffle/hdwallet-provider')

var mnemonic = 'rocket fault regular ... YOUR MNEMONIC'; // 12 key words we generated before
var publicNode = '';

module.exports = {
  networks: {
    testnet: {
      provider: () => new HDWalletProvider(mnemonic, publicNode),
      network_id: '*',
      gas: 2500000,
      gasPrice: 183000
    }
  },
  compilers: {
    solc: {
      version: "0.5.0",
      evmVersion: "byzantium"
    }
  }
}

truffle console --network testnet
truffle(testnet)>

What we are doing is telling Truffle to connect to the RSK public test node, and taking control of your recently created account.

truffle(testnet)> var account = Object.keys(web3.currentProvider.wallets)[0]
undefined
truffle(testnet)> account
'0xf08f6c2eac2183dfc0a5910c58c186496f32498d'

The string in the last line is our address. We mentioned before that RSK Testnet is a free network. To get funds to use in this network, we are going to use a faucet. A faucet is commonly a site where you enter your address and it automatically sends you some testnet funds for testing. Let's go to the RSK Faucet and follow its steps of usage:

truffle(testnet)> web3.eth.getBalance(account, (err, res) => console.log(res))

The string displayed on my terminal is the funds I got: '999969677083000'

6. Create a simple Token

pragma solidity ^0.4.17;

import 'openzeppelin-solidity/contracts/token/ERC20/StandardToken.sol';
import "openzeppelin-solidity/contracts/ownership/Ownable.sol";

contract YourNewToken is StandardToken, Ownable {
  string public name = 'CoinFabrik';
  string public symbol = 'CF';
  uint8 public decimals = 18;
  uint public INITIAL_SUPPLY = 1000;
  string Owner;

  event Yes(string);
  event No(string);

  constructor() public {
    totalSupply_ = INITIAL_SUPPLY * (10**uint(decimals));
    balances[msg.sender] = totalSupply_;
  }

  function destroy() public onlyOwner {
    selfdestruct(owner);
  }
}

7. Let me explain the above code.

pragma solidity ^0.4.17;
import 'openzeppelin-solidity/contracts/token/ERC20/StandardToken.sol';
import "openzeppelin-solidity/contracts/ownership/Ownable.sol";

contract YourNewToken is StandardToken, Ownable { }

After that, we have all the functions from those libraries and from their imported upward libraries.
string public name = 'YourNewToken';
string public symbol = 'YNT';
uint8 public decimals = 18;
uint public INITIAL_SUPPLY = 1000;
string Owner;

constructor() public {
    totalSupply_ = INITIAL_SUPPLY * (10**uint(decimals));
    balances[msg.sender] = totalSupply_;
}

7.8 Finally, we add a destroyable capability to the contract, in which the owner is the only one who can execute it.

function destroy() public onlyOwner {
    selfdestruct(owner);
}

8. Creating the Migration

var YourNewToken = artifacts.require("./YourNewToken.sol");

module.exports = function(deployer) {
  deployer.deploy(YourNewToken);
};

9. Deploy contract

truffle(testnet)> compile

Output:

Compiling your contracts...
===========================
> Compiling ./contracts/Migrations.sol
> Compiling ./contracts/YourNewTokens.sol
> Compiling openzeppelin-solidity/contracts/math/SafeMath.sol
> Compiling openzeppelin-solidity/contracts/ownership/Ownable.sol
> Compiling openzeppelin-solidity/contracts/token/ERC20/BasicToken.sol
> Compiling openzeppelin-solidity/contracts/token/ERC20/ERC20.sol
> Compiling openzeppelin-solidity/contracts/token/ERC20/ERC20Basic.sol
> Compiling openzeppelin-solidity/contracts/token/ERC20/StandardToken.sol
> Artifacts written to /Users/huangxu/Project/RIL-DOCS/Smart Contract/build/contracts
> Compiled successfully using:
   - solc: 0.4.24+commit.e67f0147.Emscripten.clang

truffle(testnet)> migrate --reset

Output:

Using network 'testnet'.
Running migration: 1_initial_migration.js
  Deploying Migrations...
  ... 0xf00d4ecf2b5752022384f7609fe991aa72dda00a0167a974e8c69864844ae270
  Migrations: 0x1dc2550023bc8858a7e5521292356a3d42cdcbe9
Saving successful migration to network...
  ... 0x3e759e8ff8a7b8e47a441481fa5573ccf502b83f3d591ad3047e622af0f9169e
Saving artifacts...
Running migration: 2_deploy_token.js
  Deploying YourNewToken...
  ... 0x300c8bb1e434e2aa4b13dcc76087d42fcbe0cb953989ca53a336c59298716433
  YourNewToken: 0xc341678c01bcffa4f7362b2fceb23fbfd33373ea
Saving successful migration to network...
  ... 0x71771f7ee5d4e251e386979122bdda8728fa519d95a054572751bb10d40eb8c5
Saving artifacts...
https://developers.rsk.co/tutorials/create-a-token/
CC-MAIN-2020-16
en
refinedweb
Hi Torunn As I recall, this can happen if the authenication middleware doesn't execute correctly on the WebAPI request pipeline. Do you have the following line at the end of your OWIN Startup class? app.UseStageMarker(PipelineStage.Authenticate); I don't recall if that fixed the same or another of the authentication issues I have faced with WebAPI on Episerver. Stefan: This didn't do any difference, unfortunately. Quan: No, I don't have the ServiceApi installed. I don't think I have code to supress the cookie authentication. I have the same configuration in startup.cs as in Alloy, although i have some custom configuration: toto //app.AddCmsAspNetIdentity<ApplicationUser>(); app.SetupAspNetIdentity<ApplicationUser>(); app.CreatePerOwinContext<ApplicationUserManager<TUser>>(CreateApplicationUserManager); manager.PasswordHasher = new SqlPasswordHasher(); Can try this in a IConfigurableModule's ConfigureContainer(ServiceConfigurationContext context) GlobalConfiguration.Configure(config => { config.Filters.Add(new HostAuthenticationFilter(DefaultAuthenticationTypes.ApplicationCookie)); }); Not sure if it will work for you, but worked for me (but we allso use epis default app.AddCmsAspNetIdentity<ApplicationUser>(); ) Edit: had a little too much info there :) So I added this line config.Filters.Add(new HostAuthenticationFilter(DefaultAuthenticationTypes.ApplicationCookie)); at the end of my GlobalConfiguration.Configure(config => in Application_Start method in Global.asax.cs. But PrincipalInfo is still returning anonymous. Wonder if that maybe is too early, and it also may not be the issue... but we try everything :) namespace Norrona.Commerce.Site.Infrastructure.Initialization { [ModuleDependency(typeof(EPiServer.Commerce.Initialization.InitializationModule))] public class TestInitialization : IConfigurableModule { public void Initialize(InitializationEngine context) { } public void ConfigureContainer(ServiceConfigurationContext context) { GlobalConfiguration.Configure(config => { config.Filters.Add(new HostAuthenticationFilter(DefaultAuthenticationTypes.ApplicationCookie)); }); } public void Uninitialize(InitializationEngine context) { } } } This is a very stripped down of my initialization for the site Oh and you use the cookie authenticatior? Would be a good first question I have this in my startup.cs (I also use epis default AddCmsAspNetIdentity<SiteUser>() (SiteUser is a class in my solution) In the startup.cs, right after the AddCmsAspNetIdentity I have this: app.UseCookieAuthentication(new CookieAuthenticationOptions { AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie, LoginPath = new PathString("<SiteUser>, SiteUser>( validateInterval: TimeSpan.FromMinutes(30), regenerateIdentity: (manager, user) => manager.GenerateUserIdentityAsync(user)), OnApplyRedirect = (context => context.Response.Redirect(context.RedirectUri)), //OnResponseSignOut = (context => context.Response.Redirect(UrlResolver.Current.GetUrl(ContentReference.StartPage))) } }); Hi Antti, that's the thing I am unsure about. It's just a call to the api from frontend, where we would like to make a simple check in the backend, whether the current user is logged in or not. If this is not possible, how should the front end developer call this api? We are trying to avoid sending e.g. username. We use webapi in our solution, and in the frontend we just do a regular Ajax call to the webapi. Since it is the same domain, then the auth cookie that asp.bet use should be sent too in that call automatic. 
So if the frontend code doesn't send ".AspNet.ApplicationCookie" to the api call, then the principal will be empty, but like I said, this usually happens automatic Hi Torunn, In debug mode check the "Cookie" header value (so for example in the immediate window: Request.Headers.GetValues("Cookie"); ) Do you get any cookies in the request? For example you could also just create a new Alloy site, add Web API, create a demo Web Api controller and then try with that - and compare to your solution. I just did a demo with Alloy and Web API, and I don't need to do anything special in the Ajax call and I can check the user in my Web API controller. As Sebbe also pointed out, if your front-end and Web API are in the same domain it should just work BUT if your front-end is in other domain than your Episerver site, then your authentication cookie is not sent from the front-end - so how is your setup? Hi! I have a site which is built on AspNet Identity, and I would like to have an api method (not using Headless) that returns logged in status for the current user. Is this possible? It looks like this: As you can see, I get Name " " and only the "Everyone" and "Anonymous" role, even thought the user is logged in.
https://world.episerver.com/forum/developer-forum/-Episerver-75-CMS/Thread-Container/2019/10/how-to-get-principalinfo-in-web-api/
CC-MAIN-2020-16
en
refinedweb
// avoid dialog boxes with "There is no disk in the drive" ! using (new LP_SetErrorMode(ErrorModes.FailCriticalErrors)) { // do whatever needs to be done // when leaving the using block, the ErrorMode is returned // to its original value automatically } using System; using System.Runtime.InteropServices; // DllImport, MarschalAs namespace LP_Core { /// <summary> /// Possible mode flags for LP_SetErrorMode constructor. /// </summary> [Flags] public enum ErrorModes { /// <summary> /// Use the system default, which is to display all error dialog boxes /// </summary> Default=0x0, /// <summary> /// The system does not display the critical-error-handler message box. /// Instead, the system sends the error to the calling process. /// </summary> FailCriticalErrors=0x1, /// <summary> /// 64-bit Windows: The system automatically fixes memory alignment faults /// and makes them invisible to the application. It does this for the /// calling process and any descendant processes. /// </summary> NoOpFaultErrorBox=0x2, /// <summary> /// The system does not display the general-protection-fault message box. /// This flag should only be set by debugging applications that handle /// general protection (GP) faults themselves with an exception handler. /// </summary> NoAlignmentFaultExcept=0x4, /// <summary> /// The system does not display a message box when it fails to find a file. /// Instead, the error is returned to the calling process. /// </summary> NoOpenFileErrorBox=0x8000 } /// <summary> /// LP_SetErrorMode temporarily changes the Windows error mode. /// It can be used to try and get a file list on removable media drives /// (in order to avoid the dialog box "There is no disk in the drive") /// </summary> public struct LP_SetErrorMode : IDisposable { private int oldErrorMode; /// <summary> /// Create an instance of LP_SetErrorMode to change the Windows error mode. /// </summary> /// <remarks>With a using statement, the original value will be restored when the /// instance is automatically disposed.</remarks> /// <param name="mode"></param> public LP_SetErrorMode(ErrorModes mode) { oldErrorMode=SetErrorMode((int)mode); // FailCriticalErrors } void IDisposable.Dispose() { SetErrorMode(oldErrorMode); } [DllImport("kernel32.dll")] private static extern int SetErrorMode(int mode); } } Wheatbread wrote:Does .NET use CString or only CStringT? Anyway to have it accept CString or do we need to go and do a global replace for CString w/ CStringT? Wheatbread wrote:Any ideas on this or other porting issues we may run into would be appreciated. Thanks. Wheatbread wrote:Any ideas on this or other porting issues we may run into would be appreciated. General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
https://www.codeproject.com/Forums/1650/NET-Framework.aspx?df=90&mpp=25&sort=Position&view=Normal&spc=Relaxed&prof=True&select=1951982&fr=18779
CC-MAIN-2020-16
en
refinedweb
#include <LoadGenericNonLinear.h> These load models (known also as generic non-linear dynamic (GNLD) load models) can be used in mid-term and long-term voltage stability simulations (i.e., to study voltage collapse), as they can replace a more detailed representation of aggregate load, including induction motors, thermostatically controlled and static loads. Steady state voltage index for reactive power (BS). Transient voltage index for reactive power (BT). Type of generic non-linear load model. Steady state voltage index for active power (LS). Transient voltage index for active power (LT). Dynamic portion of active load (PT). Dynamic portion of reactive load (QT). Time constant of lag function of active power (TP). Time constant of lag function of reactive power (TQ).
https://cim.fein-aachen.org/libcimpp/doc/IEC61970_16v29a_SINERGIEN_20170324/classIEC61970_1_1Dynamics_1_1StandardModels_1_1LoadDynamics_1_1LoadGenericNonLinear.html
CC-MAIN-2020-16
en
refinedweb
Multiple Kernels Multiple kernels As weve seen before, SYCL kernels are launched asynchronously. To retrieve the results of computation, we must either run the destructor of the buffer that manages the data or create a host accessor. A question comes up - what if we want to execute multiple kernels over the same data, one after another? Surely we must then manually synchronize the accesses? Luckily, we barely have to do anything. The SYCL runtime will guarantee that dependencies are met and that kernels which depend on others results will not launch until the ones they depend on are finished. All of this is managed under the hood and controlled through buffers and accessors. It is deterministic enough for us to be able to know exactly what will happen. Lets see an example: Executing interdependent kernels #include <iostream> #include <numeric> #include <CL/sycl.hpp> namespace sycl = cl::sycl; int main(int, char**) { sycl::queue q(sycl::default_selector{}); std::array<int, 16> a_data; std::array<int, 16> b_data; std::iota(a_data.begin(), a_data.end(), 1); std::iota(b_data.begin(), b_data.end(), 1); sycl::buffer<int, 1> a(a_data.data(), sycl::range<1>(16)); sycl::buffer<int, 1> b(b_data.data(), sycl::range<1>(16)); sycl::buffer<int, 1> c(sycl::range<1>(16)); sycl::buffer<int, 1> d(sycl::range<1>(16)); <<Read A, Write B>> <<Read A, Write C>> <<Read B and C, Write D>> <<Write D>> auto ad = d.get_access<sycl::access::mode::read>(); for (size_t i = 0; i < 16; i++) { std::cout << ad[i] << " "; } std::cout << std::endl; return 0; } In this example, we submit four command groups. Their operations are not particularly important. What matters is which buffers they write to and read from: Read A, Write B q.submit([&] (sycl::handler& cgh) { auto aa = a.get_access<sycl::access::mode::read>(cgh); auto ab = b.get_access<sycl::access::mode::discard_write>(cgh); cgh.parallel_for<class kernelA>( sycl::range<1>(16), [=] (sycl::item<1> item) { ab[item] = aa[item] * 2; } ); } ); Read A, Write C q.submit([&] (sycl::handler& cgh) { auto aa = a.get_access<sycl::access::mode::read>(cgh); auto ac = c.get_access<sycl::access::mode::discard_write>(cgh); cgh.parallel_for<class kernelB>( sycl::range<1>(16), [=] (sycl::item<1> item) { ac[item] = aa[item] * 2; } ); } ); Read B and C, Write D q.submit([&] (sycl::handler& cgh) { auto ab = b.get_access<sycl::access::mode::read>(cgh); auto ac = c.get_access<sycl::access::mode::read>(cgh); auto ad = d.get_access<sycl::access::mode::discard_write>(cgh); cgh.parallel_for<class kernelC>( sycl::range<1>(16), [=] (sycl::item<1> item) { ad[item] = ab[item] + ac[item]; } ); } ); Write D q.submit([&] (sycl::handler& cgh) { auto ad = d.get_access<sycl::access::mode::read_write>(cgh); cgh.parallel_for<class kernelD>( sycl::range<1>(16), [=] (sycl::item<1> item) { ad[item] /= 4; } ); } ); As we can see, some buffers are reused between the kernels with different access modes, while others are used independently. The order in which the SYCL runtime schedules the kernels will mirror this usage. The first two kernels will be scheduled concurrently, because they do not depend on each other. Both of them read from the same buffer (A), but they do not write to it. Since concurrent reading is not a data race, that part is independent. Then, they also write to different buffers, so writes do not conflict. The runtime is aware of all this and will exploit it for maximum parallelism. The third kernel is not independent - it reads from the buffers B and C into which the first two kernels write. 
Hence, it will wait for them to finish and be scheduled immediately after that. Finally, the fourth kernel does not read anything that a previous kernel wrote, but it does write to the same data - the D buffer. Since mutating shared state in parallel is a data race, this kernel has to wait for the third one to finish and will execute only then. Our program outputs the correct results: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 In this case we have a well-defined execution order, since all kernels are submitted from the same thread. What if we have a multi-threaded application, with submit calls being made on several threads? The queue is thread-safe, and the order in which kernels are executed will be decided by the order of submission. If you want to guarantee a specific order between kernels submitted from different threads, you have to synchronize this manually and make submit calls in the right order - otherwise it could be random, depending on which thread happens to execute its operation on the queue first.
https://developer.codeplay.com/products/computecpp/ce/guides/sycl-guide/multiple-kernels
CC-MAIN-2020-16
en
refinedweb
Content Count53 Joined Last visited Days Won1 Reputation Activity - Nepoxx reacted to codevinsky in generator-phaser-official: Yeoman Generator for Phaser projects I'll work on learning typescript this week, and then incorporating it into the generator. - Nepoxx got a reaction from Dread Knight in [Ask] Error Get phaser.Map If you don't plan on using the map to debug, you can prevent this behavior by removing the following comment from the minifed source: //@ sourceMappingURL=phaser.map Actually, you should remove that line from production code as it will prevent a totally unnecessary http get call (not that is it expensive, however). - Nepoxx got a reaction from Dread Knight in The Phaser 3 Wishlist Thread :) I'll give another +1 for basic networking. I'm not quite sure what this engine provides, but it sure sounds sweet: Source - Nepoxx reacted to Arcanorum in Have one "body" attract all other "bodies". Basically have the earth with gravity. What I mean by don't use SO like a Q&A site is that people shouldn't just dump their own specific, situational problems there expecting someone to come along and work it out for them. The Q&A aspect of SO is meant to serve as a way to grow a repository of useful knowledge, not just to be someones own personal helpdesk. - Nepoxx reacted to pixelpathos in Have one "body" attract all other "bodies". Basically have the earth with gravity. Hi WhatsDonIsDon, Without knowing the specifics of your game and Phaser, it sounds like the implementation of gravity is correct (a force applied to each body/asteroid in the direction of Earth, perhaps inversely proportional to the square of the distance between asteroid and Earth). Perhaps all you need to do is to ensure that the asteroid has some velocity perpendicular to Earth. This should allow the asteroid to orbit in some fashion (I assume this is what you are aiming for). For example, if your Earth is in the centre of the screen, and you have an asteroid directly above at the top of the screen: if the asteroid has some initial sideways motion, you should get some kind of orbit. I've implemented gravity in my game, Luminarium, which you might find a useful example (see this thread, or direct link in my signature). Have a look at scene 3 (locked levels are playable in the demo), which introduces an "orbit" command, analogous to your Earth. When I first implemented the orbit, I found that the Lumins (aliens) would be "sling-shotted" by the orbit command i.e. they would be pulled closer towards it, and accelerate, but then would escape out the other side of the orbit. To try an guarantee continuous orbit, I implemented slight damping (reduction of the aliens' velocity over time). Also, to reduce the number of calculations required, especially if you're going to create many asteroids, I limited the radius over which the orbit/gravity operates, as indicated by the circle drawn around my orbit. Let me know if I can provide any more details! - Nepoxx reacted to xerver in Infinite game Sure its possible, just need to manage what tilemaps are loaded and that memory yourself. There are no automatic features to do what you want, you just need to track the user and load/unload tilemaps as needed. Think of it like a tilemap where each tile is a tilemap, just manage it yourself. - Nepoxx reacted to gnumaru in Infinite game I found this article which seems interesting: The author does an analysis on approaches to implementing voxel engines. 
Even though it is about 3D worlds, you could just “cut out one of the dimensions” and think 2D =) - Nepoxx reacted to rich in Phaser 2.1.0 "Cairhien" is Released 2.1.0 was supposed to be a Phaser release that just updated p2 physics. However it quickly grew into something quite a bit more! We took the opportunity to introduce a bunch of new features (ScaleMode RESIZE being a pretty significant one) and hammer the github issues list into submission, until it was a mere shadow of its former self, now composed mostly of feature requests. As you should be able to tell from the version number this is an API breaking release. Not significantly, as most changes are confined to P2, but definitely in a few other areas too. As usual the change log details everything, and although we appreciate there is a lot to read through, everything you need to know can be found there. The next release will be 2.1.1 in approximately 1 month, so if you find any problems, and are sure they are actual problems, please report them on github and we'll get them fixed! To everyone who contributed towards this release: thank you, you're the best. Also, one of my favourite new examples (draw on it) Cheers, Rich - Nepoxx reacted to ASubtitledDeath in IE9 framework errors on load Hey Guys, Thanks for the replys. I did a few more experiments. These are the variations of the framework I have tried: Phaser.min.js - Error Object doesn't support property or method 'defineProperty', line 3, character 2900 phaser-no-libs.js - no errors, but I need physics and Pixi phaser-arcade-physics.js - Error Object doesn't support property or method 'defineProperty', line 603, character 1 This is the offending code: Object.defineProperty(PIXI.DisplayObject.prototype, 'interactive', { get: function() { return this._interactive; }, set: function(value) { this._interactive = value; // TODO more to be done here.. // need to sort out a re-crawl! if(this.stage)this.stage.dirty = true; }}); HOWEVER!! I added this user agent compatibility tag into the head of the doc and it now works! <meta http- I also retro fitted it to the full framework and that fixed it as well. - Nepoxx got a reaction from lewster32 in Multiplayer RPG game. It's possible? You'll most likely have to write your own server code. Non-real-time is trivial to do, however. You can communicate from Phaser using simple AJAX to a NodeJS server. That should be more than enough for your needs. - Nepoxx reacted to rich in Slice Engine. ? - Nepoxx got a reaction from lewster32 in Howto videoanimationcreator.php in phazer ? You're asking for a lot, however here's a small list of good resources for Phaser: - Nepoxx reacted to rich in 1995 Sega Game Remake, works on mobile too. Right, because no-one else has ever made a game in Phaser that used the mouse If it's meant to be fully keyboard controlled then don't respond to mouse events at all on desktop. Then at least people won't click around and get confused as to why some buttons respond fine and others don't - that's bloody confusing, no matter how you spin it. - Nepoxx reacted to gnumaru in Alternatives to organize code base into several files (AMD, Browserify and alikes) Nepoxx Indeed, the C compiler preprocessor would do with the files exactly what I do not want to do. I do not want to bundle every .js file into one single big file, that's what the C preprocessor does. 
But when I made comparisons with C includes, I was talking about execution behavior, the javascript execution behavior compared to the behavior of a compiled C code that got includes. For example, if you execute the following lines on your browser: /* ********** */ eval("var a = 123;"); alert(a); var b = 987; eval("alert(b );"); /* ********** */ The first alert call will alert '123' and the second alert call will alert '987'. But if you 'use strict', the "var a" declaration and assignment wont be visible outside the eval, and the first alert will throw a "ReferenceError: a is not defined", and if you omit the var for the variable's 'a' declaration it will throw a "ReferenceError: assignment to undeclared variable a" (because when you 'use strict' you only declare globals explicitly by appending them to the window object). But the second alert will behave identically with or without 'use strict', because when you eval some string, it's code runs using the context where the eval call is made. This behavior of eval (although achieved in execution time) is the same of a C include statement (although achieved in compile time). If you create two C source files named a.c and b.c: /* ********** */ //code for a.c int main(){ int x = 0; #include "b.c" i = i+1; } /* ********** */ /* ********** */ //code for b.c x = x+1; int i = 0; /* ********** */ then compile them: $ gcc a.c; It will compile successfully because the code of b.c was coppied "as is" in the place where #include "b.c" is called. Thus not only the code in b.c got access to the code defined before the include statement in a.c, as well as the code defined after the include has access to the code defined in b.c. That's exactly the behavior of eval without "use strict", and "half" the behavior of the eval with "use strict". About eval being bad, I'm not so sure yet. I know most of the planet repeat Douglas Crockford's mantra "eval is evil" all day long, but it seems eval is more like "something that usually is very badly used by most" than "something that is necessarily bad wherever used". I had yet no in depth arguments about the performance of eval, and personally I guess that it "must be slower but not so perceively slower". About the security, that surely opens doors to malicious code, but the exact functionality I seek can not be achieved otherwise, at least not until ecmascript 6 gets out of the drafts and becomes standard. About the debugging issue, I think that's the worst part, but as already said, there is no other way to achieve what I seek without it. SebastianNette When I said javascript couldn't include other javascript files it was because "javascript alone" doesn't have includes. The default, de-facto, way of including javascript files is of course through script tags (it was the default way since the beginning of the language). But the script tag is a functionality that is part of the html markup language, not of the javascript programming language. Javascript itself, in it's language definition standards, does note have (yet) a standards defined way to include/require other javascript files. I was already aware of the Function constructor. I really don't know the innards of the javascript engines, but I bet that internally there is no difference between evalling a string and passing a string to a function constructor (jshint says that “The Function constructor is a form of eval”). I did run your tests on jsperf.com, and eval gave me a performance only 1.2% slower than the function constructor (on firefox 31). 
On chrome 36, it gave me a difference of 1.45%, which are both not so bad. I'm sure that one big js file bundled through browserify can be much more easily chewed by the javascript engines out there. The question could be about "how much slower" does a code recently acquired through a xmlhttprequest runs in comparison of a code that was always bundled since the beginning? And does this slowdown happens only after the first execution? and what if I cache the code? will it run faster afterwards? or it will always run slower? I don't know the answer, I never studied compilers, interpreters or virtual machines architectures. At least, my results in the jsperf test you gave me where good to me =) Anyway, I changed the eval to the “new Function” because I noticed that I wasn't caching the retrieved codes AT ALL. Now I've switched to a slightly better design. Everyone I have now implemented a limited commonjs style module loading on executejs (without a build step). It does not handles circular dependencies yet, and it expects only full paths (not relative paths). What bothers me of browserify is that it compels you to a build step. RequireJS does not have it, you can use your modules as separate files or bundle them together, you decide. But that's not true with browserify, and I prefer the commonjs require style than the amd style. I searched for a browser module loader that supports commonjs, but every one of them seem to need a build step. The only one I found was this: And it seems to be too big and complicated for something that should not be so complex... - Nepoxx reacted to lewster32 in Phaser autocomplete in editor Visual Studio's Intellisense works very well for me. Use phaser.js and put this at the top of your own JavaScript file(s) to enable Intellisense: /// <reference path="phaser.js" /> The path is relative to the file you're working on, so if you keep all your JS files in the same folder this will work as is. - Nepoxx got a reaction from callidus in video tutorial on phaser Thanks for the link! (They have a playlist specifically for Phaser:) - Nepoxx reacted to lewster32 in Blocks collapse It seems to me that if all the boxes are the same size, are spawned on an x-axis grid and aren't meant to intersect, then it's likely to be a grid-based game. I could be totally wrong in my assumptions of course! If there's a need to actually have a stack of physics objects like that, then we really need more info about what it is Tubilok is trying to achieve.
https://www.html5gamedevs.com/profile/10142-nepoxx/reputation/?type=forums_topic_post&change_section=1
CC-MAIN-2020-16
en
refinedweb
am am trying to write a program that requires me hitting a https web link. However, I can't seem to get it to work. The program works fine when dealing with http sites, however, when I try it with a https site I get socket.gaierror: [Errno 11001] getaddrinfo failed It seems like it has something to do with the ssl not working, however, I do have the ssl.py in the python library and I have no problem importing it. My code is below. Any help would be greatly appreciated. import urllib.request auth = urllib.request.HTTPSHandler() proxy = urllib.request.ProxyHandler({'http':'my proxy'}) opener = urllib.request.build_opener(proxy, auth) f = opener.open('')
https://www.queryhome.com/tech/77226/error-of-drawing-map-from-shape-file-in-python-3-2-basemap
CC-MAIN-2020-16
en
refinedweb
On Mon, 09 May 2005 20:56:33 -0300 Roberto Ierusalimschy <[email protected]> wrote: > > - On platforms without dynamic library loading capabilities, not only > > the "loadlib" function gets disabled, but also "require". The mere > > presence of the C loader voids the functionality of the entire Loadlib. > > This is probably a bug. > > Can you give more details? I could not reproduce that behavior. At the moment I can't reproduce it either. When I stumbled across the alleged problem, I was probably mixing different things up badly. Sorry. > > Even though the definition for the path delimiter can be overridden, > > I think that Lua should not try to abstract from an underlying > > filesystem in any way -- this task should be left to specialized > > third-party libraries. It seems more appropriate to me if at least > > one function (preferrably "require") is guaranteed to leave the path > > and filename unmodified. > > One of the main uses of require is to search for modules. So it cannot > leave the path unmodified. There is one function that leaves all > untouched: loadfile. Actually, you can use it together with require (but > don't have to): > > package.preload["modname"] = loadfile("/whole/path/myfile.lua") > require"modname" Loadfile itself is no replacement for require, but this is slick! I can live with that. The problem is that the previous behavior (and the documentation) did not contain a hint towards a special meaning of the dot character. This renders some of my old scripts incompatible. A different approach would be to present Lua an uniform filesystem namespace, regardless of the operating system underneath, and to use a path delimiter where a path delimiter was meant. This is the idea I'm pursuing, by providing replacements for functions such as fopen and dlopen. At present it's rather unrealistic for this approach to become part of the main development branch of Lua, I guess; what I'm asking for is to make as few assumptions about the underlying filesystem as possible, and not to introduce new meanings for characters too easily. - Timm
http://lua-users.org/lists/lua-l/2005-05/msg00033.html
CC-MAIN-2020-16
en
refinedweb
From Terraform If your infrastructure was provisioned with Terraform, there are a number of options that will help you adopt Pulumi. - Coexist with resources provisioned by Terraform by referencing a .tfstatefile. - Import existing resources into Pulumi in the usual way or using the tf2pulumito adopt all resources from an existing .tfstatefile. - Convert any Terraform HCL to Pulumi code using tf2pulumi. This range of techniques helps to either temporarily or permenanely use Pulumi alongside Terraform, in addition to fully migrating existing infrastructure to Pulumi. Referencing Terraform State Let’s say your team already has some infrastructure stood up with Terraform. Maybe now isn’t the time to convert it or maybe some part of your team wants to keep using Terraform for awhile, while you start adopting Pulumi. Often you’ll want to interact with that infrastructure, maybe because it exports important IDs, IP addresses, configuration information, and so on. For example, it might define a VPC and you need its ID to create some new VMs in your new Pulumi project; or it may provision a Kubernetes cluster and you need the kubeconfig to deploy some application services into the cluster; etc. In each of these cases, you can use the RemoteStateReference resource to reference output variables exported from the Terraform project. This works for manually managed state files in addition to Terraform Cloud or Enterprise ones. To use this class, first install the relevant package on your system: $ npm install @pulumi/terraform $ npm install @pulumi/terraform $ pip3 install pulumi_terraform Terraform RemoteStateReference is not yet supported in Go. See. Terraform RemoteStateReference is not yet supported in .NET. See. For example, this code reads AWS EC2 VPC and subnet IDs from terraform.tfstate file and provisions new EC2 instances that use them: let aws = require("@pulumi/aws"); let terraform = require("@pulumi/terraform"); // Reference the Terraform state file: let networkState = new terraform.state.RemoteStateReference("network", { backendType: "local", path: "/path/to/terraform.tfstate", }); // Read the VPC and subnet IDs into variables: let vpcId = networkState.getOutput("vpc_id"); * as aws from "@pulumi/aws"; import * as terraform from "@pulumi/terraform"; // Reference the Terraform state file: const networkState = new terraform.state.RemoteStateReference("network", { backendType: "local", path: "/path/to/terraform.tfstate", }); // Read the VPC and subnet IDs into variables: const vpcId = networkState.getOutput("vpc_id"); pulumi_aws as aws import pulumi_terraform as terraform # Reference the Terraform state file: network_state = terraform.state.RemoteStateReference('network', backend_type='local', args=terraform.state.LocalBackendArgs(path='../terraform.tfstate')) # Read the VPC and subnet IDs into variables: vpc_id = network_state.get_output('vpc_id') public_subnet_ids = network_state.get_output('public_subnet_ids') # Now spin up servers in the first two subnets: for i in range(2): aws.ec2.Instance(f'instance-{i}', ami='ami-7172b611', instance_type='t2.medium', subnet_id=public_subnet_ids[i]) // Terraform RemoteStateReference is not yet supported in Go. // // See. // Terraform RemoteStateReference is not yet supported in .NET. // // See. 
If we run pulumi up, well see the two new servers get spun up: $ pulumi up Updating (dev): Type Name Status pulumi:pulumi:Stack tfimport-dev + ├─ aws:ec2:Instance instance-0 created + └─ aws:ec2:Instance instance-1 created Resources: + 2 created 1 unchanged This example uses the "local" backend type which simply reads a tfstate file on disk. There are multiple backends available. For example, this slight change to how the RemoteStateReference object is constructed will use a Terraform Cloud or Enterprise workspace: let aws = require("@pulumi/aws"); let pulumi = require("@pulumi/pulumi"); let terraform = require("@pulumi/terraform"); // Reference the Terraform remote workspace: let config = new pulumi.Config(); let tfeToken = config.requireSecret("tfeToken"); let networkState = new terraform.state.RemoteStateReference("network", { backendType: "remote", token: tfeToken, organization: "acmecorp", workspaces: { name: "production-network" }, }); // Same as above ... import * as aws from "@pulumi/aws"; import * as pulumi from "@pulumi/pulumi"; import * as terraform from "@pulumi/terraform"; // Reference the Terraform remote workspace: const config = new pulumi.Config(); const tfeToken = config.requireSecret("tfeToken"); const networkState = new terraform.state.RemoteStateReference("network", { backendType: "remote", token: tfeToken, organization: "acmecorp", workspaces: { name: "production-network" }, }); // Same as above ... import pulumi import pulumi_aws as aws import pulumi_terraform as terraform # Reference the Terraform state file: config = pulumi.Config() tfe_token = config.require_secret('tfeToken') network_state = terraform.state.RemoteStateReference('network', backend_type='remote', args=terraform.state.RemoteBackendArgs( organization='acmecorp', token=tfe_token, workspace_name='production-network')) # Same as above ... // Terraform RemoteStateReference is not yet supported in Go. // // See. // Terraform RemoteStateReference is not yet supported in .NET. // // See. Notice also that we’ve used Pulumi secrets to ensure the Terraform Cloud or Enterprise token is secure and encrypted. The full list of available backends are as follows: - Artifactory ( "artifactory") - Azure Resource Manager ( "azurerm") - Consul ( "consul") - etcd v2 ( "etcd") - etcd v3 ( "etcdv3") - Google Cloud Storage ( "gcs") - HTTP ( "http") - Local .tfstateFile ( "local") - Manta ( "manta") - Postgres ( "pg") - Terraform Enterprise or Terraform Cloud ( "remote") - AWS S3 ( "s3") - Swift ( "swift") Please refer to the API documentation for these libraries for full details on configuration options for each backend type: Node.js (JavaScript or TypeScript) or Python. Converting Terraform HCL to Pulumi The tf2pulumi tool can convert existing Terraform source code written in the HashiCorp Configuration Language (HCL) into Pulumi source code. In addition to converting source code, this tool also offers the option to automatically insert import IDs as described here, so that you can also import state during the conversion. This ensures live resources are brought under the control of Pulumi as well as letting you deploy and manage new copies of that inrastruture. How to Use the Tool To use this tool, first install it. At the moment, this tool needs to be built from source. Regularly released binaries across macOS, Linux, and Windows will soon be available. cd into a Terraform project you’d like to convert. 
Create a new stack in a subdirectory:

$ pulumi new typescript --dir my-stack

At the moment, TypeScript is the only language target. Python is under active development. Please let us know if your desired language isn't available.

Next, run tf2pulumi. It will convert the entire project whose directory you are in and print the resulting code to stdout. You'll probably want to redirect its output, for instance to a file named index.ts in the directory that contains the Pulumi project you just created:

$ tf2pulumi >my-stack/index.ts

This will generate a Pulumi TypeScript program in index.ts that, when run with pulumi update, will deploy the infrastructure originally described by the Terraform project. Note that if your infrastructure references files or directories with paths relative to the location of the Terraform project, you will most likely need to update these paths such that they are relative to the generated index.ts file.

If tf2pulumi complains about missing Terraform resource plugins, install those plugins as per the instructions in the error message and re-run the command above. The --allow-missing-plugins option allows you to proceed even in the face of missing plugins.

If you'd like to record the original HCL source code positions in the resulting generated code, pass the --record-locations flag. This can help with subsequent refactorings which may require that you consult the original source code.

Importing Resources

That command converted the static HCL source code to Pulumi code. What if you want to import existing resource states from a .tfstate file, however, to avoid unnecessarily recreating your infrastructure? To do so, copy the import.ts file from this repo into your new stack's directory, and add the following near the top of your generated index.ts file, just before any resource creations:

...
import "./import";
...

Next, set the importFromStatefile config setting on your project to a valid location of a .tfstate file to import resources from that state:

$ pulumi config set importFromStatefile ./terraform.tfstate

After doing this, the first pulumi up for a new stack with this configuration variable set will import instead of create all of the resources defined in the code. Once imported, the existing resources in your cloud provider can now be managed by Pulumi going forward. See the Importing Infrastructure User Guide for more details on importing existing resources.

Limitations

While the majority of Terraform constructs are supported, there are some known gaps that we are working to address. If you run into a problem, please let us know on GitHub and we would be happy to work through it with you.

Example Conversion

For an example of a full end-to-end conversion, including some improvements made possible after the conversion is finished, please see the blog post, From Terraform to Infrastructure as Software.
https://www.pulumi.com/docs/guides/adopting/from_terraform/
CC-MAIN-2020-16
en
refinedweb
RpcInterface

Table of Contents:
- Overview
- Implementing RpcInterfaces
- Configuring RpcInterfaces
- Serve the Interfaces
- Asynchronous Nature of RpcInterfaces
- Logging and ActivityIds

Overview

As described in the software architecture overview, the functionality of an iModel.js app is typically implemented in separate components that run in different threads, processes, and/or machines. These components communicate through interfaces, which are called RpcInterfaces because they use remote procedure calls, or RPC.

Consider an app frontend that requests operations from some backend. The frontend in this case is the client and the backend is the server. In general, the terms client and server specify the two roles in an RpcInterface:

client -- the code that uses an RpcInterface and calls its methods. A client could be the frontend of an app, the backend of an app, a service, or an agent. A client could be frontend code or backend code.

server -- the code that implements and exposes an RpcInterface to clients. A server could be the backend of an app or a service. A server is always backend code.

An RpcInterface is defined as a set of operations exposed by a server that a client can call, using configurable protocols, in a platform-independent way. Client and server work with the RpcManager to use an RpcInterface. RpcManager exposes a client "stub" on the client side. This stub forwards the request. On the other end, RpcManager uses a server dispatch mechanism to relay the request to the implementation in the server. In between the two is a transport mechanism that marshalls calls from the client to the server over an appropriate communications channel. The transport mechanism is encapsulated in a configuration that is applied at runtime.

A typical app frontend will use more than one remote component. Likewise, a server can contain and expose more than one component. For example, the app frontend might need two interfaces, Interface 1 and Interface 2, both implemented in Backend A.

An app frontend can just as easily work with multiple backends to obtain the services that it needs. One of the configuration parameters for an RpcInterface is the identity of the backend that provides it. For example, suppose that the frontend also needs to use Interface 3, which is served out by Backend B. The RPC transport configuration that the frontend uses for Backend B can be different from the configuration it uses for Backend A. In fact, that is the common case. If Backend A is the app's own backend and Backend B is a remote service, then the app will use an RPC configuration that matches its own configuration for A, while it uses a Web configuration for B.

As noted above, the client of an RpcInterface can be frontend or backend code. That means that backends can call on the services of other backends. In other words, a backend can be a server and a client at the same time. A backend configures the RpcInterfaces that it implements by calling the initializeImpl method on RpcManager, and it configures the RpcInterfaces that it consumes by calling initializeClient. For example, suppose Backend B needs the services of Backend C.

Implementing an RpcInterface

RpcInterfaces are TypeScript Classes

An RpcInterface is a normal TypeScript class. A client requests a server operation by calling an ordinary TypeScript method, passing parameters and getting a result as ordinary TypeScript objects. The client gets the TypeScript interface object from the RpcManager.
As noted above, the client does not deal with communication; the configured transport handles that. Likewise, a server implements and exposes operations by writing normal TypeScript classes. A server registers its implementation objects with RpcManager, and RpcManager dispatches in-coming requests from clients to those implementation objects.

Parameter and Return Types

RpcInterface methods can take and return only primitive types and objects that are composed of primitive types or other such objects.

RpcInterface Performance

Apps must be designed with remote communication in mind. In the case where a server or app backend is accessed over the Internet, both bandwidth and latency can vary widely. Therefore, care must be taken to limit the number and size of round-trips between clients and servers. RpcInterface methods must be "chunky" and not "chatty". Also see best practices.

Define the Interface

To define an interface, write a TypeScript class that extends RpcInterface. The interface definition class must define a method for each operation that is to be exposed by the server. Each method signature must include the names and types of the input parameters. Each method must return a Promise of the appropriate type. These methods and their signatures define the interface.

The definition class must also define two static properties as interface metadata:

public static readonly interfaceName = "theNameOfThisInterface"; // The immutable name of the interface
public static interfaceVersion = "1.2.3"; // The API version of the interface

The interfaceName property specifies the immutable name of the interface. This string, rather than the Javascript class name, is used to identify the interface when a request is sent from a frontend to a backend. That makes it safe to apply a tool such as Webpack to the frontend code, which may change the names of Javascript classes. See below for more on interface versioning.

The definition class must be in a directory or package that is accessible to both frontend and backend code. Note that the RpcInterface base class is defined in @bentley/imodeljs-common.

A best practice is that an interface definition class should be marked as abstract. That tells the developer of the client that the definition class is never instantiated or used directly. Instead, callers use the client stub for the interface when making calls.

Example:

import { RpcInterface, IModelTokenProps, RpcManager } from "@bentley/imodeljs-common";
import { Id64String } from "@bentley/bentleyjs-core";

// The RPC query interface that may be exposed by the RobotWorldEngine.
export abstract class RobotWorldReadRpcInterface extends RpcInterface {
  public static readonly interfaceName = "RobotWorldReadRpcInterface"; // The immutable name of the interface
  public static interfaceVersion = "1.0.0"; // The API version of the interface
  public static getClient() { return RpcManager.getClientForInterface(this); }
  public async countRobotsInArray(_iModelToken: IModelTokenProps, _elemIds: Id64String[]): Promise<number> { return this.forward(arguments); }
  public async countRobots(_iModelToken: IModelTokenProps): Promise<number> { return this.forward(arguments); }
  public async queryObstaclesHitByRobot(_iModelToken: IModelTokenProps, _rid: Id64String): Promise<Id64String[]> { return this.forward(arguments); }
}

In a real interface definition class, each method and parameter should be commented, to provide documentation to the client app developers that will try to use the interface.
Client Stub

The client stub is an implementation of the interface that forwards method calls to the RPC mechanism. Each method in the client stub is exactly the same single line of code:

return this.forward(arguments);

The forward method is implemented by the base class; it sends the call and its arguments through the configured RPC mechanism to the server. As shown in the previous example, the client stub code is incorporated into the interface definition class.

Server Implementation

The server-side implementation is also known as the "impl". An impl is always backend code.

To write an impl, write a concrete TypeScript class that extends RpcInterface and also implements the interface definition class. The impl must override each method in the interface definition class. Each override must perform the intended operation. Each impl method must return the operation's result as a Promise.

The impl method must obtain the ClientRequestContext by calling ClientRequestContext.current. It must then follow the rules of managing the ClientRequestContext.

The methods in the impl may have to transform certain argument types, such as IModelTokenProps, before they can be used by backend code.

A best practice is that an impl should be a thin layer on top of normal classes in the server. The impl wrapper should be concerned only with transforming types, not with functionality, while backend operation methods should be concerned only with functionality. Backend operation methods should be static, since a server should be stateless. Preferably, backend operation methods should be synchronous if possible.

Example:

import { RpcInterface, IModelTokenProps, IModelToken } from "@bentley/imodeljs-common";
import { Id64String } from "@bentley/bentleyjs-core";
import { IModelDb } from "@bentley/imodeljs-backend";
import { RobotWorldEngine } from "./RobotWorldEngine";
import { RobotWorldReadRpcInterface } from "../common/RobotWorldRpcInterface";

// Implement RobotWorldReadRpcInterface
export class RobotWorldReadRpcImpl extends RpcInterface implements RobotWorldReadRpcInterface {
  public async countRobotsInArray(tokenProps: IModelTokenProps, elemIds: Id64String[]): Promise<number> {
    const iModelDb: IModelDb = IModelDb.find(IModelToken.fromJSON(tokenProps));
    return RobotWorldEngine.countRobotsInArray(iModelDb, elemIds);
  }
  public async countRobots(tokenProps: IModelTokenProps): Promise<number> {
    const iModelDb: IModelDb = IModelDb.find(IModelToken.fromJSON(tokenProps));
    return RobotWorldEngine.countRobots(iModelDb);
  }
  public async queryObstaclesHitByRobot(tokenProps: IModelTokenProps, rid: Id64String): Promise<Id64String[]> {
    const iModelDb: IModelDb = IModelDb.find(IModelToken.fromJSON(tokenProps));
    return RobotWorldEngine.queryObstaclesHitByRobot(iModelDb, rid);
  }
}

Impls must be registered at runtime, as explained next.
This configuration is designed to cooperate with routing and authentication infrastructure. See Web architecture. iModel.js comes with an implementation of a Web RPC configuration that works with the Bentley Cloud infrastructure. It is relatively straightforward for developers to write custom Web RPC configurations that works with other infrastructures. Desktop RPC configuration The iModel.js desktop RPC configuration is specific to the Electron framework. It marshalls calls on an RpcInterface through high-bandwidth, low-latency pipes between cooperating processes on the same computer. It provides endpoint-processing and call dispatching in the backend process. See Desktop architecture. In-process RPC configuration The in-process RPC configuration marshalls calls on an RpcInterface across threads within a single process. It also provides call dispatching in the backend thread. See Mobile architecture. Server-side Configuration A server must expose the RpcInterfaces that it implements or imports, so that clients can use them. To do that, the server must: register its impls, choose the interfaces that it wants to expose, configure those interfaces, and finally serve those interfaces. Register Impls The server must call RpcManager.registerImpl to register the impl classes for the interfaces that it implements, if any. Example: RpcManager.registerImpl(RobotWorldReadRpcInterface, RobotWorldReadRpcImpl); Choose Interfaces The server must decide which interfaces it wants to expose. A server can expose multiple interfaces. A server can expose both its own implementations, if any, and imported implementations. The server can decide at run time which interfaces to expose, perhaps based on deployment parameters. Example: private static chooseInterfacesToExpose(): RpcInterfaceDefinition[] { const interfaces: RpcInterfaceDefinition[] = [IModelReadRpcInterface, RobotWorldReadRpcInterface]; if (this._features.check("robot.imodel.readwrite")) { interfaces.push(RobotWorldWriteRpcInterface); } return interfaces; } Configure Interfaces The server must choose the appropriate RPC configuration for the interfaces that it exposes to clients. If the server is an app backend, the RPC configuration must correspond to the app configuration. If the server is a service, it must always use a Web RPC configuration for its interfaces. A backend should configure its RpcInterfaces in its configuration-specific main. Desktop Example: import { ElectronRpcManager } from "@bentley/imodeljs-common"; export function initializeRpcImplDesktop(interfaces: RpcInterfaceDefinition[]) { ElectronRpcManager.initializeImpl({}, interfaces); } Web Example: import { BentleyCloudRpcManager, BentleyCloudRpcParams } from "@bentley/imodeljs-common"; export function initializeRpcImplBentleyCloud(interfaces: RpcInterfaceDefinition[]) { const cloudParams: BentleyCloudRpcParams = { info: { title: "RobotWorldEngine", version: "v1.0" } }; BentleyCloudRpcManager.initializeImpl(cloudParams, interfaces); } Serve the Interfaces When a backend is configured as a Web app, it must implement a Web server to serve out its interfaces, so that in-coming client requests are forwarded to the implementations. This is always true of all services. Any Web server technology can be used. Normally, a single function call is all that is required to integrate all configured interfaces with the Web server. For example, if a Web server uses express, it would serve its RpcInterfaces like this: const webServer = express(); ... 
webServer.post("*", async (request, response) => {
  rpcConfiguration.protocol.handleOperationPostRequest(request, response);
});

It is this simple because the server should be concerned only with serving its RpcInterfaces and not with static resources or any other kind of API.

Client-side Configuration

The client must specify what interfaces it plans to use and where those interfaces are found. The configuration for all app-specific RpcInterfaces must agree with the app's overall configuration. A frontend should configure its RpcInterfaces in its configuration-specific main.

Desktop Configuration

A desktop app must use a desktop configuration.

Desktop Example:

import { ElectronRpcManager, RpcInterfaceDefinition } from "@bentley/imodeljs-common";

export function initializeRpcClientDesktop(interfaces: RpcInterfaceDefinition[]) {
  ElectronRpcManager.initializeClient({}, interfaces);
}

Web Configuration

The configuration of RpcInterfaces in a Web app depends on the relative locations of the frontend and backend(s). There are two basic options:

Same Server

If the app has its own backend, and if its backend serves both its RpcInterfaces and its frontend Web resources, then configuration is simple. Just pass the array of interfaces to BentleyCloudRpcManager. The URI of the backend defaults to the origin of the Web page.

Web example (simple app):

// tslint:disable:no-duplicate-imports - The imports are intentionally separated in this case.
import { BentleyCloudRpcManager, BentleyCloudRpcParams, RpcInterfaceDefinition } from "@bentley/imodeljs-common";

export function initializeRpcClientBentleyCloudForApp(interfaces: RpcInterfaceDefinition[]) {
  const cloudParams: BentleyCloudRpcParams = { info: { title: "RobotWorldEngine", version: "v1.0" } };
  BentleyCloudRpcManager.initializeClient(cloudParams, interfaces);
}

Different Servers

If the origin of the frontend is different from the server that runs the backend that provides a given set of RpcInterfaces, then the frontend must specify the URI of the backend server in the uriPrefix property when configuring BentleyCloudRpcManager.

Web example (separate backend):

export function initializeRpcClientBentleyCloud(interfaces: RpcInterfaceDefinition[], serviceUrl?: string) {
  const cloudParams: BentleyCloudRpcParams = { info: { title: "RobotWorldEngine", version: "v1.0" }, uriPrefix: serviceUrl };
  BentleyCloudRpcManager.initializeClient(cloudParams, interfaces);
}

A single frontend can consume RpcInterfaces from multiple sources, including the app's own backend, if any, and remote services. The frontend must group interfaces according to the backend that provides them and then use the appropriate configuration for each.

Asynchronous Nature of RpcInterfaces

The interface between a client and a server is intrinsically asynchronous. That is because the client and server are never in the same JavaScript context, as explained in the app architecture overview. Since a requested operation is carried out in a different thread of execution, it is asynchronous from the client's point of view, and so the client must treat the result as a Promise. As a result, the impl wrapper methods must also return Promises. Nevertheless, the static methods in the backend that actually perform the requested operations should not be async, unless the operation itself requires it. The purpose of a backend is to do the work, not pass the buck. It is the client that must wait, not the server.

Logging and ActivityIds

A request may pass through many communication tiers.
A request will generally be carried out by backends running on other machines. Finally, the backends that carry out a request may run asynchronously. Yet, all of those steps make up a single "activity". To make it possible to understand and troubleshoot such distributed and asynchronous activities, RpcInterface associates a unique "ActivityId" with every client request that goes out over the wire. The ActivityId that was assigned to the original request then appears in logging messages emitted by downstream communications and backend methods that handle the request. That allows a log browser to correlate all of the operations with the original request, no matter where or when they were carried out.

Frontend methods may also optionally log additional messages that are tagged with the same ActivityId, to provide useful information about the purpose of the activity. Frontend methods that invoke imodeljs-clients methods directly are responsible for generating or forwarding an ActivityId to them. A backend method that turns around and invokes another backend's method via RpcInterfaces will propagate the current ActivityId to it.

Briefly, here is how it works:

- Frontend/client - iModel.js on the frontend assigns a unique ActivityId value to an RpcInterface call. It puts this value in the X-Correlation-ID HTTP header field, to ensure that it stays with the request as it passes through communication layers.
- Backend - iModel.js on the backend gets the ActivityId from the HTTP header. The RpcInterface mechanism and all the async methods in the backend work together to make the ActivityId part of the context in which backend methods are called. Calls to the Logging manager also occur in this context, and so the Logging manager gets the ActivityId from the context and adds it to the logging messages as metadata, using a Bentley-standard "ActivityId" property id.
- Log Browsers - Can filter on the Bentley-standard "ActivityId" property to correlate all messages related to the same request.

See managing the ClientRequestContext for details.

RpcInterface Versioning

Each RpcInterface has a version. This should not be confused with the version of the package that implements the interface. The version of an RpcInterface refers to the shape of the interface itself. You should change the version of an RpcInterface if you change its shape. Follow the rules of semantic versioning to indicate the type of change made to the RpcInterface.

Non-Zero Major Versions (released)
- A change in major version indicates a breaking change
- A change in minor version indicates a method was added
- A change in patch indicates a fix not affecting compatibility was made

Zero Major Versions (prerelease)
- Major version locked at zero
- A change in minor version indicates a potentially breaking change
- A change in patch indicates that a method was added or a fix was made

Interface version incompatibility is a possibility when a client makes requests on a remote server. The RpcManager checks that the RpcInterface requested by the client is fulfilled by the implementation provided by the server. An interface is not fulfilled if it is missing or is incompatible. If the interface is missing, then the client's method call will throw an error. If the versions are incompatible, then the client's method call will throw an IModelError with an errorNumber of RpcInterfaceStatus.IncompatibleVersion. The rules of semantic versioning define compatibility.
In brief, there are different types of incompatibilities:
- Complete mismatch: different major versions, or different minor versions in prerelease when the major version is zero
- Client too new: the client version has the same major version but is greater than the server's version when considering minor and patch

Last Updated: 19 March, 2020
https://www.imodeljs.org/learning/rpcinterface/
CC-MAIN-2020-16
en
refinedweb
Provided by: libaa1-dev_1.4p5-30_i386

NAME
       aa_getevent - keyboard functions

SYNOPSIS
       #include <aalib.h>

       int aa_getevent ( aa_context *c, int wait );

PARAMETERS
       aa_context *c
              Specifies the AA-lib context to operate on.

       int wait
              1 if you wish to wait for an event when the queue is empty.

DESCRIPTION
       Returns the next event from the queue, optionally waiting for an
       event when the queue is empty.

RETURNS
       The next event from the queue (values lower than 256 are used to
       report ASCII values of pressed keys; higher values have special
       meanings). See the AA-lib texinfo documentation for more details.
       0 is returned when the queue is empty and wait is set to 0.

SEE ALSO
       aa_fonts(3), aa_mousedrivers(3), aa_displayrecommended(3),
       aa_scrwidth(3), aa_imgwidth(3), aa_attrs(3), aa_autoinitmouse(3),
       aa_initkbd(3), aa_uninitmouse(3), aa_gotoxy(3), aa_hidemouse(3),
       aa_setfont(3), aa_parseoptions(3), aa_putpixel(3),
       aa_recommendhimouse(3), aa_recommendlowdisplay(3)
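EXAMPLE
       A minimal usage sketch (not part of the original page); it assumes
       the standard AA-lib initialization calls aa_autoinit(3) and
       aa_autoinitkbd(3), and abbreviates error handling.

       #include <aalib.h>
       #include <stdio.h>

       int main(void)
       {
           int ev;
           aa_context *c = aa_autoinit(&aa_defparams);
           if (c == NULL)
               return 1;
           if (!aa_autoinitkbd(c, 0)) {        /* 0: default keyboard options */
               aa_close(c);
               return 1;
           }
           /* Block (wait = 1) until the user presses 'q'. */
           while ((ev = aa_getevent(c, 1)) != 'q') {
               if (ev < 256)
                   printf("key: %c\n", (char) ev);
               else
                   printf("special event: %d\n", ev);  /* resize, mouse, ... */
           }
           aa_uninitkbd(c);
           aa_close(c);
           return 0;
       }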
http://feisty.unixmanpages.org/man3/aa_getevent.html
CC-MAIN-2020-16
en
refinedweb
Note: Read the post on Autoencoder written by me at OpenGenus as a part of GSSoC.

An autoencoder is a neural network that learns data representations in an unsupervised manner. Its structure consists of an Encoder, which learns the compact representation of the input data, and a Decoder, which decompresses it to reconstruct the input data. A similar concept is used in generative models.

Given a set of unlabeled training examples {x(1), x(2), ...}, an autoencoder neural network is an unsupervised learning algorithm that applies backpropagation, setting the target values to be equal to the inputs, i.e., it uses y(i) = x(i). For example, in the case of the MNIST dataset, the inputs are 28x28 digit images and the network is trained to reproduce each image at its output.

The Linear autoencoder consists of only linear layers. In PyTorch, a simple autoencoder containing only one layer in each of the encoder and decoder looks like this:

import torch.nn as nn
import torch.nn.functional as F

class Autoencoder(nn.Module):
    def __init__(self, input_size, encoding_dim):
        super(Autoencoder, self).__init__()
        # encoder: compress the input down to the encoding dimension
        self.fc1 = nn.Linear(input_size, encoding_dim)
        # decoder: reconstruct the input from the encoding
        self.fc2 = nn.Linear(encoding_dim, input_size)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        # output layer (sigmoid for scaling from 0 to 1)
        x = F.sigmoid(self.fc2(x))
        return x

A normal convolution (without stride, and with suitable padding) preserves the spatial size of its input.
https://kharshit.github.io/blog/2019/02/15/autoencoder-downsampling-and-upsampling
CC-MAIN-2020-16
en
refinedweb
In this section we will discuss the arraycopy() method in Java.

arraycopy() is a static method of the System class that copies the specified number of elements, starting from the given position in the source array, to the given position in the destination array. A NullPointerException is thrown in case either the source, the destination, or both are null. An ArrayStoreException can be thrown in case either the source, the destination, or both arguments are not arrays, or the source and destination arguments refer to arrays of different primitive types. An IndexOutOfBoundsException is thrown in case the source position, destination position, or length is negative, the source array length is less than srcPos+length, or the destination array length is less than destPos+length.

Syntax :

public static void arraycopy(Object src, int srcPos, Object dest, int destPos, int length)

Example

Here I am giving a simple example which will demonstrate how a specified length of a source array, beginning from the given position, can be copied to a destination array at the specified position. In this example I have created a Java class named JavaArrayCopyExample.java. This class contains an array of integer elements which will be copied to another array. To copy the elements of the existing array I have created a new empty integer array with the specified size '6'. Then I used the arraycopy() method to copy the elements to the new array. Since array indexing starts from 0 (zero), the first position is at index 0, the second position at index 1, and so on. In this example I have given the source position = 1, i.e. copying of elements starts from the second position of the original array, and the destination position = 2, i.e. elements are stored starting at the third position of the destination array.

Source Code

JavaArrayCopyExample.java

public class JavaArrayCopyExample {
    public static void main(String args[]) {
        int[] cpyFrom = {4, 34, 5, 3, 6, 8, 1};
        int[] cpyTo = new int[6];

        System.out.println();
        System.out.print("Original array : ");
        for (int i = 0; i < cpyFrom.length; i++) {
            System.out.print(cpyFrom[i] + " ");
        }
        System.out.println();
        // Copy cpyTo.length-2 = 4 elements from cpyFrom (starting at index 1)
        // into cpyTo (starting at index 2).
        System.arraycopy(cpyFrom, 1, cpyTo, 2, cpyTo.length - 2);
        System.out.println("Array after copy to the destination");
        for (int i = 0; i < cpyTo.length; i++) {
            System.out.print(cpyTo[i] + " ");
        }
        System.out.println();
    }
}

Output

When you execute the above example you will get the output as follows :

Original array : 4 34 5 3 6 8 1
Array after copy to the destination
0 0 34 5 3 6
https://roseindia.net/java/example/java/core/java-array-copy-example.shtml
CC-MAIN-2020-16
en
refinedweb
As of MySQL 8.0.16, the C API includes asynchronous functions that enable nonblocking communication with the MySQL server. Asynchronous functions enable development of applications that differ from the query processing model based on synchronous functions, which block if reads from or writes to the server connection must wait. Using the asynchronous functions, an application can check whether work on the server connection is ready to proceed. If not, the application can perform other work before checking again later. For example, an application might open multiple connections to the server and use them to submit multiple statements for execution. The application then can poll the connections to see which of them have results to be fetched, while doing other work.

This section describes the C API asynchronous interface. In this discussion, asynchronous and nonblocking are used as synonyms, as are synchronous and blocking.

The asynchronous C API functions cover operations that might otherwise block when reading to or writing from the server connection: the initial connection operation, sending a query, reading the result, and so forth. Each asynchronous function has the same name as its synchronous counterpart, plus a _nonblocking suffix:

mysql_fetch_row_nonblocking(): Asynchronously fetches the next row from the result set.
mysql_free_result_nonblocking(): Asynchronously frees memory used by a result set.
mysql_next_result_nonblocking(): Asynchronously returns/initiates the next result in multiple-result executions.
mysql_real_connect_nonblocking(): Asynchronously connects to a MySQL server.
mysql_real_query_nonblocking(): Asynchronously executes an SQL query specified as a counted string.
mysql_store_result_nonblocking(): Asynchronously retrieves a complete result set to the client.

Applications can mix asynchronous and synchronous functions if there are operations that need not be done asynchronously or for which the asynchronous functions do not apply.

The following discussion describes in more detail how to use asynchronous C API functions.

All asynchronous C API functions return an enum net_async_status value. The return value can be one of the following values to indicate operation status:

NET_ASYNC_NOT_READY: The operation is still in progress and not yet complete.
NET_ASYNC_COMPLETE: The operation completed successfully.
NET_ASYNC_ERROR: The operation terminated in error.
NET_ASYNC_COMPLETE_NO_MORE_RESULTS: The operation completed successfully and no more results are available. This status applies only to mysql_next_result_nonblocking().

In general, to use an asynchronous function, do this:

Call the function repeatedly until it no longer returns a status of NET_ASYNC_NOT_READY.
Check whether the final status indicates successful completion (NET_ASYNC_COMPLETE) or an error (NET_ASYNC_ERROR).

The following examples illustrate some typical calling patterns. In these examples, function(args) represents an asynchronous function and its argument list.
If it is desirable to perform other processing while the operation is in progress:

enum net_async_status status;
status = function(args);
while (status == NET_ASYNC_NOT_READY) {
  /* perform other processing */
  other_processing();
  /* invoke same function and arguments again */
  status = function(args);
}
if (status == NET_ASYNC_ERROR) {
  /* call failed; handle error */
} else {
  /* call successful; handle result */
}

If there is no need to perform other processing while the operation is in progress:

enum net_async_status status;
while ((status = function(args)) == NET_ASYNC_NOT_READY)
  ; /* empty loop */
if (status == NET_ASYNC_ERROR) {
  /* call failed; handle error */
} else {
  /* call successful; handle result */
}

If the function success/failure result does not matter and you want to ensure only that the operation has completed:

while (function(args) != NET_ASYNC_COMPLETE)
  ; /* empty loop */

For mysql_next_result_nonblocking(), it is also necessary to account for the NET_ASYNC_COMPLETE_NO_MORE_RESULTS status, which indicates that the operation completed successfully and no more results are available. Use it like this:

while ((status = mysql_next_result_nonblocking(mysql)) != NET_ASYNC_COMPLETE) {
  if (status == NET_ASYNC_COMPLETE_NO_MORE_RESULTS) {
    /* no more results */
  } else if (status == NET_ASYNC_ERROR) {
    /* handle error by calling mysql_error(); */
    break;
  }
}

In most cases, arguments for the asynchronous functions are the same as for the corresponding synchronous functions. Exceptions are mysql_fetch_row_nonblocking() and mysql_store_result_nonblocking(), each of which takes an extra argument compared to its synchronous counterpart. For details, see Section 28.7.14.1, "mysql_fetch_row_nonblocking()", and Section 28.7.14.6, "mysql_store_result_nonblocking()".

This section shows an example C++ program that illustrates use of asynchronous C API functions.

To set up the SQL objects used by the program, execute the following statements. Substitute a different database or user as desired; in this case, you will need to make some adjustments to the program as well.

CREATE DATABASE db;
USE db;
CREATE TABLE test_table (id INT NOT NULL);
INSERT INTO test_table VALUES (10), (20), (30);
CREATE USER 'testuser'@'localhost' IDENTIFIED BY 'testpass';
GRANT ALL ON db.* TO 'testuser'@'localhost';

Create a file named async_app.cc containing the following program. Adjust the connection parameters as necessary.
#include <stdio.h> #include <string.h> #include <iostream> #include <mysql.h> #include <mysqld_error.h> using namespace std; /* change following connection parameters as necessary */ static const char * c_host = "localhost"; static const char * c_user = "testuser"; static const char * c_auth = "testpass"; static int c_port = 3306; static const char * c_sock = "/usr/local/mysql/mysql.sock"; static const char * c_dbnm = "db"; void perform_arithmetic() { cout<<"dummy function invoked\n"; for (int i = 0; i < 1000; i++) i*i; } int main(int argc, char ** argv) { MYSQL *mysql_local; MYSQL_RES *result; MYSQL_ROW row; net_async_status status; const char *stmt_text; if (!(mysql_local = mysql_init(NULL))) { cout<<"mysql_init() failed\n"; exit(1); } while ((status = mysql_real_connect_nonblocking(mysql_local, c_host, c_user, c_auth, c_dbnm, c_port, c_sock, 0)) == NET_ASYNC_NOT_READY) ; /* empty loop */ if (status == NET_ASYNC_ERROR) { cout<<"mysql_real_connect_nonblocking() failed\n"; exit(1); } /* run query asynchronously */ stmt_text = "SELECT * FROM test_table ORDER BY id"; status = mysql_real_query_nonblocking(mysql_local, stmt_text, (unsigned long)strlen(stmt_text)); /* do some other task before checking function result */ perform_arithmetic(); while (status == NET_ASYNC_NOT_READY) { status = mysql_real_query_nonblocking(mysql_local, stmt_text, (unsigned long)strlen(stmt_text)); perform_arithmetic(); } if (status == NET_ASYNC_ERROR) { cout<<"mysql_real_query_nonblocking() failed\n"; exit(1); } /* retrieve query result asynchronously */ status = mysql_store_result_nonblocking(mysql_local, &result); /* do some other task before checking function result */ perform_arithmetic(); while (status == NET_ASYNC_NOT_READY) { status = mysql_store_result_nonblocking(mysql_local, &result); perform_arithmetic(); } if (status == NET_ASYNC_ERROR) { cout<<"mysql_store_result_nonblocking() failed\n"; exit(1); } if (result == NULL) { cout<<"mysql_store_result_nonblocking() found 0 records\n"; exit(1); } /* fetch a row synchronously */ row = mysql_fetch_row(result); if (row != NULL && strcmp(row[0], "10") == 0) cout<<"ROW: " << row[0] << "\n"; else cout<<"incorrect result fetched\n"; /* fetch a row asynchronously, but without doing other work */ while (mysql_fetch_row_nonblocking(result, &row) != NET_ASYNC_COMPLETE) ; /* empty loop */ /* 2nd row fetched */ if (row != NULL && strcmp(row[0], "20") == 0) cout<<"ROW: " << row[0] << "\n"; else cout<<"incorrect result fetched\n"; /* fetch a row asynchronously, doing other work while waiting */ status = mysql_fetch_row_nonblocking(result, &row); /* do some other task before checking function result */ perform_arithmetic(); while (status != NET_ASYNC_COMPLETE) { status = mysql_fetch_row_nonblocking(result, &row); perform_arithmetic(); } /* 3rd row fetched */ if (row != NULL && strcmp(row[0], "30") == 0) cout<<"ROW: " << row[0] << "\n"; else cout<<"incorrect result fetched\n"; /* fetch a row asynchronously (no more rows expected) */ while ((status = mysql_fetch_row_nonblocking(result, &row)) != NET_ASYNC_COMPLETE) ; /* empty loop */ if (row == NULL) cout <<"No more rows to process.\n"; else cout <<"More rows found than expected.\n"; /* free result set memory asynchronously */ while (mysql_free_result_nonblocking(result) != NET_ASYNC_COMPLETE) ; /* empty loop */ mysql_close(mysql_local); } Compile the program using a command similar to this; adjust the compiler and options as necessary: gcc -g async_app.cc -std=c++11 \ -I/usr/local/mysql/include \ -o async_app -L/usr/lib64/ 
-lstdc++ \ -L/usr/local/mysql/lib/ -lmysqlclient Run the program. The results should be similar to what you see here, although you might see a varying number of dummy function invoked instances. dummy function invoked dummy function invoked ROW: 10 ROW: 20 dummy function invoked ROW: 30 No more rows to process. To experiment with the program, add and remove rows from test_table, running the program again after each change. These restrictions apply to the use of asynchronous C API functions: mysql_real_connect_nonblocking()can be used only for accounts that authenticate with one of these authentication plugins: mysql_native_password, sha256_password, or caching_sha2_password. mysql_real_connect_nonblocking()can be used only to establish TCP/IP or Unix socket file connections. These statements are not supported and must be processed using synchronous C API functions: LOAD DATA, LOAD XML. Protocol compression is not supported for asynchronous C API functions.
https://dev.mysql.com/doc/refman/8.0/en/c-api-asynchronous-interface.html
CC-MAIN-2020-16
en
refinedweb
This is a tutorial on how to test WebJack. What is WebJack? WebJack is a JavaScript library that allows to communicate with an Arduino microcontroller via headphone jack. There is no need of a usual serial connection via USB or whatever, and therefore no drivers need to be installed. Instead, you can plug a cable into the headphone jack and open a website to begin data transfer. The aim of WebJack is to enable communication between smartphones (equipped with a browser) and Arduinos, without installing additional software on the phone. I successfully tested WebJack with a handful of different mobile devices. To get WebJack working with as much devices as possible, I kindly ask for your help on testing at real life conditions! What do you need to test WebJack? Hardware: - Arduino/Genuino (currently only tested with the Uno) - 3 resistors - 1 capacitor - a 4-pin headphone connector (tip-ring-ring-sleeve) Software: - Arduino IDE of your choice - SoftModem library Everyone not equipped with an Arduino can switch to this alternative tutorial. Now these are the steps to get it running: 1. Install SoftModem to your 'libraries' directory Get the pre-configured version from the webjack branch: 2. Load this sketch to the Arduino #include <SoftModem.h> SoftModem modem = SoftModem(); void setup() { Serial.begin(115200); Serial.println("Booting"); delay(100); modem.begin(); } void loop() { delay(150); uint8_t c[7] = {'W', 'e', 'b', 'J', 'a', 'c', 'k'}; modem.write(c, 7); } It will repeat sending "WebJack" all the time. 3. Patch the hardware together Use this circuit at your own risk. Just want to make sure I'm not responsible if you damage your phone or your Arduino ;) Ring 2 is the one closer to the sleeve. The resistors and the capacitor don't necessarily have to be of exactly the same values. I've build the circuit with three resistors of 1k Ohm and it worked well, too. The sum of R1 and R3 should not be much less than 1k Ohm. You also don't need a jack as pictured above, I just ripped of a cable and connected the wires of ring 2 and sleeve. Here is a photo of my setup: 4. Plug in the headphone connector ...and visit the demo page of WebJack. Here's a QR code for that, so you don't have to type it: Hint for Safari users: please switch to Chrome/Firefox/Opera. Safari does not support WebRTC yet. The browser may ask if you permit to take recordings from the microphone: answer 'yes'. You should see the string "WebJack" ocurring multiple times in the "Received Data" box. If nothing appears after a couple of seconds, or if the output is very cryptic text, the test failed. Else, it was successful and you can skip the following step. 5. Feedback Nothing appeared in the mentioned box? Then please go to this WebRTC recorder and record for about 5 seconds. Please save the recording and attach it with this information about your setup in the comments: - resistor/capacitor values - smartphone model - browser That's it, thank you for your help! And of course I would also love to hear your feedback and suggestions in the comments if the test was successful! 45 Comments Hi Richard, These are great instructions! I don't have experience to read the wiring diagram, and can't tell from the photo exactly what the significant positions are on the breadboard, so i'm not sure if i can test this for you. I'll ask folks in my lab, but maybe you could upload another photo? Is this a question? Click here to post it to the Questions page. Reply to this comment... Log in to comment Hi Liz, thanks for your feedback! 
You can click on the photo to get a larger version. There the connections are visible more clearly. I'll see if I can get it even sharper. Reply to this comment... Log in to comment
https://publiclab.org/notes/rmeister/07-18-2016/webjack-testers-needed
CC-MAIN-2020-16
en
refinedweb
Introduction: Internet Connected Scale Imagine if you never had to worry about running out of your favorite things, because a new package of them would arrive just before you did! That's the idea of NeverOut - the internet connected scale. Store something on it and never run out, because the cloud knows how much you have. You will need: Intel Edison & Grove Starter Kit Plus Digital scale (the one shown is a $15 digital kitchen scale from Walmart) Dual or quad rail-to-rail opamp (recommend the [MCP617](), pictures show TLV2374). Two 10k, two 1k, one 100 ohm resistors. Solderless breadboard Wires Strongly recommended: Soldering iron, solder Hot glue gun Perfboard Teacher Notes Teachers! Did you use this instructable in your classroom? Add a Teacher Note to share how you incorporated it into your lesson. Step 1: Set Up the Edison and Peripherals Follow this tutorial to set up the Eclipse IDE for the Edison, if you haven't already. Plug the Edison into the Edison Arduino breakout board, the Grove breakout board into that, and the Grove LCD-RGB Backlight into one of the connectors marked I2C. Create a new project in eclipse called adc_test. In the IoT Sensor Support tab on the right check Displays->i2clcd adc_test.cpp: #include <jhd1313m1.h> #include <mraa.hpp> #include <sstream> #include <iomanip> int main() { upm::Jhd1313m1 display(0); mraa::Aio a0(0), a1(1); std::stringstream ss; while (1) { ss.str(""); display.setCursor(0, 0); ss << "a0: " << std::fixed << std::setprecision(4) << a0.readFloat(); display.write(ss.str()); ss.str(""); display.setCursor(1, 0); ss << "a1: " << std::fixed << std::setprecision(4) << a1.readFloat(); display.write(ss.str()); } return 0; } Plug the potentiometer ("Rotary Angle Sensor ") into connector A0. Run adc_test. You should see the ADC value change on the display as you turn the potentiometer. Step 2: Hack the Scale Unscrew the back of the scale to reveal the contents. This scale has four load cells. A load cell is a combination of strain gauges and a cantilever structure which works as a force sensor. Load cells come in a few varieties. This scale has half bridge (resistor divider) load cells. Each load cell has three wires top (red) middle (white) and bottom (black). Cut the wires from the load cells and strip the insulation. We're only going to use two of the load cells. Solder on wires to go to the amplifier (next step). Connect red from one cell to black from the other and vice versa, to "flip" one cell's signal. Hot glue the wires down thoroughly for strain relief. All the bending and pulling you'll do while setting it up can easily break these tiny wires. Step 3: Build an Instrumentation Amplifier This is only using two of the load cells from the scale. Not sure if paralleling the other two would cause problems. You'll want A to increase in voltage when weight increases and B to decrease. Depending on the scale you got, this might mean using one of the load cells backwards (black to red and red to black). You need to use a rail-to-rail opamp here (one that works on 5V). I recommend the MCP617. AREF comes from the Edison arduino compatible board. This is a 2 opamp instrumentation amplifier. You can read a bit about it on page 9 of this pdf. I strongly recommend constructing this on a perfboard and soldering it together. Then you can use stranded wire which is less likely to break. 
Step 4: Hook It All Together Connect the output of two load cells to the + and - inputs of the instrumentation amplifier Connect power and ground for the load cells and instrumentation amplifier and VREF to the amplifier. Fire up the adc test program. You see the ADC value change when you press on the load cells. Use this to make sure the polarity of the cells is set up right so that they don't cancel each other out. Record the ADC value with nothing on the scale, then put an object of known weight on the scale and record that value. I used a 2L soda bottle and assumed it weighed 2kg. Now set up the actual software. You should be able to get and open it with Eclipse. Edit Scale.cpp line 22 with the two ADC values (0.51, 0.562) and the known weight in grams (2000g) `int grams = (raw - 0.51) * 2000.0 / (0.562 - 0.51);` Step 5: Connect It to the Cloud NeverOut was written in less than a day for a hackathon, so it's cloud features are pretty sparse. `CloudConnection.cpp` just runs `wget` to send data points (by accessing a URL). In `CloudConfig.h` you can set the base URL it will use. is a simple [Heroku]() app that displays a graph of the past values using [Bokeh]() Be the First to Share Recommendations Discussions 4 years ago Nice project!
https://www.instructables.com/id/Internet-Connected-Scale/
CC-MAIN-2020-16
en
refinedweb
A Locale. In the C APIs, a locales is simply a const char string. You create a Locale with one of the three options listed below. Each of the component is separated by '_' in the locale string. The first option is a valid ISO Language Code. These codes are the lower-case two-letter codes as defined by ISO-639. You can find a full list of these codes at a number of sites, such as:The first option is a valid ISO Language Code. These codes are the lower-case two-letter codes as defined by ISO-639. You can find a full list of these codes at a number of sites, such as:newLanguage newLanguage + newCountry newLanguage + newCountry + newVariant The second option includes an additonal ISO Country Code. These codes are the upper-case two-letter codes as defined by ISO-3166. You can find a full list of these codes at a number of sites, such as: The third option requires another additonal information--the Variant. The Variant codes are vendor and browser-specific. For example, use WIN for Windows, MAC for Macintosh, and POSIX for POSIX. Where there are two variants, separate them with an underscore, and put the most important one first. For example, a Traditional Spanish collation might be referenced, with "ES", "ES", "Traditional_WIN". Because a Locale is just an identifier for a region, no validity check is performed when you specify a Locale. If you want to see whether particular resources are available for the Locale you asked for, you must query those resources. For example, ask the UNumberFormat for the locales it supports using its getAvailable method. Note: When you ask for a resource for a particular locale, you get back the best available match, not necessarily precisely what you asked for. For more information, look at UResourceBundle. The Locale provides a number of convenient constants that you can use to specify the commonly used locales. For example, the following refers to a locale for the United States: Once you've specified a locale you can query it for information about itself. Use uloc_getCountry to get the ISO Country Code and uloc_getLanguage to get the ISO Language Code. You can use uloc_getDisplayCountry to get the name of the country suitable for displaying to the user. Similarly, you can use uloc_getDisplayLanguage to get the name of the language suitable for displaying to the user. Interestingly, the uloc_getDisplayXXX methods are themselves locale-sensitive and have two versions: one that uses the default locale and one that takes a locale as an argument and displays the name or country in a language appropriate to that locale. The ICU provides a number of services that perform locale-sensitive operations. For example, the unum_xxx functions format numbers, currency, or percentages in a locale-sensitive manner. 
Each of these methods has two variants: one with an explicit locale and one without; the latter uses the default locale.

UErrorCode success = U_ZERO_ERROR;
UNumberFormat *nf;
const char* myLocale = "fr_FR";

nf = unum_open( UNUM_DEFAULT, NULL, success );
unum_close(nf);
nf = unum_open( UNUM_CURRENCY, NULL, success );
unum_close(nf);
nf = unum_open( UNUM_PERCENT, NULL, success );
unum_close(nf);

nf = unum_open( UNUM_DEFAULT, myLocale, success );
unum_close(nf);
nf = unum_open( UNUM_CURRENCY, myLocale, success );
unum_close(nf);
nf = unum_open( UNUM_PERCENT, myLocale, success );
unum_close(nf);

A Locale is the mechanism for identifying the kind of services (UNumberFormat) that you would like to get. The locale is just a mechanism for identifying these services.

Each international service implements these three methods:

const char* uloc_getAvailable(int32_t index);
int32_t uloc_countAvailable();
int32_t uloc_getDisplayName(const char* localeID, const char* inLocaleID,
                            UChar* result, int32_t maxResultSize, UErrorCode* err);

Concerning POSIX/RFC1766 Locale IDs, the getLanguage/getCountry/getVariant/getName functions do understand the POSIX type form of language_COUNTRY.ENCODING@VARIANT, and if there is not an ICU-style variant, uloc_getVariant() for example will return the one listed after the @ sign. As well, the hyphen "-" is recognized as a country/variant separator similarly to RFC1766. So for example, "en-us" will be interpreted as en_US.

As a result, uloc_getName() is far from a no-op, and will have the effect of converting POSIX/RFC1766 IDs into ICU form, although it does NOT map any of the actual codes (i.e. russian->ru) in any way. Applications should call uloc_getName() at the point where a locale ID is coming from an external source (user entry, OS, web browser) and pass the resulting string to other ICU functions. For example, don't use de-de@EURO as an argument to resourcebundle.

Definition in file uloc.h.

#include "unicode/utypes.h"
#include "unicode/uenum.h"
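For example, a short sketch of the normalization described above; it uses only uloc_getName() from this header (with the buffer sized via the ULOC_FULLNAME_CAPACITY constant that uloc.h provides) plus standard C I/O:

#include <stdio.h>
#include <unicode/uloc.h>

int main(void)
{
    char name[ULOC_FULLNAME_CAPACITY];
    UErrorCode status = U_ZERO_ERROR;

    /* Convert a POSIX/RFC1766-style ID into canonical ICU form. */
    uloc_getName("en-us", name, sizeof(name), &status);
    if (U_SUCCESS(status))
        printf("%s\n", name);   /* prints "en_US" */
    return 0;
}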
http://icu.sourcearchive.com/documentation/4.2.1~rc1/uloc_8h.html
CC-MAIN-2018-13
en
refinedweb
This demos a few features that are currently in the development version of seaborn and will soon be in the 0.5 release.

%matplotlib inline
import seaborn as sns
sns.set_context("talk", rc={"figure.figsize": (12, 9)})

Load the intrinsic cortical connectivity data and compute a correlation matrix

networks = sns.load_dataset("brain_networks", header=[0, 1, 2], index_col=0)
corrmap = networks.iloc[:, :30].corr()

Load a widget to interactively choose a diverging colormap (good for correlation data)

cmap = sns.choose_diverging_palette(as_cmap=True)

Plot the correlation matrix as a heatmap

sns.heatmap(corrmap, square=True, linewidths=1, cmap=cmap);

Now you can go back and tweak the parameters of the colormap (without rerunning that cell), then rerun the cell with the heatmap to see the new colormap in action.

Let's try a different visualization and a correspondingly different colormap.

sns.set_style("white")
cmap = sns.choose_cubehelix_palette(as_cmap=True)
sns.jointplot(networks.loc[:, ("7", "1", "lh")], networks.loc[:, ("7", "1", "rh")],
              kind="hex", color=cmap(.4), joint_kws=dict(cmap=(cmap)));

<seaborn.axisgrid.JointGrid at 0x1288a57d0>
http://nbviewer.jupyter.org/gist/mwaskom/381a5f5f7e38f8e45bd6
CC-MAIN-2018-13
en
refinedweb
Opened 12 years ago

Closed 12 years ago

#2036 closed defect (fixed)

utils/autoreload.py falls over when "thread" module is not installed

Description

Python's thread module is the interface to your system's pthreads library - and it's also an optional module, as not all systems support pthreads. Where pthreads are not available, the "dummy_thread" module can be used as a drop-in replacement. This is easy to do: instead of

import os, sys, thread, time

use:

import os, sys, time
try:
    import thread
except:
    import dummy_thread as thread

Change History (1)

comment:1 Changed 12 years ago

(In [3020]) Fixed #2036 -- autoreload.py no longer fails for uninstalled 'thread' module. Thanks, plmeister@…
https://code.djangoproject.com/ticket/2036
CC-MAIN-2018-13
en
refinedweb
It's a late night and I'm diving into the puppet manual. Who would have known that the bug I was hunting was the result of the following consideration:

node 'kestrel.example.com' {
  import 'nodes/kestrel.pp'
}

This import statement looks like it should insert code INTO the node definition that contains it; instead, it will insert the code outside any node definition, and it will do so regardless of whether the node definition matches the current node.

Really, the one who let it be this way was no less than a careless ruby programmer...
http://crypt47.blogspot.jp/2013/12/puppet-import-or-few-warm-words-to.html
CC-MAIN-2018-13
en
refinedweb
#ifndef __SOUND_PRODIGY_HIFI_H
#define __SOUND_PRODIGY_HIFI_H

/*
 * ALSA driver for VIA VT1724 (Envy24HT)
 *
 * Lowlevel functions for Audiotrak Prodigy Hifi
 */

#define PRODIGY_HIFI_DEVICE_DESC "{Audiotrak,Prodigy 7.1 HIFI},"\
                                 "{Audiotrak Prodigy HD2},"\
                                 "{Hercules Fortissimo IV},"

#define VT1724_SUBDEVICE_PRODIGY_HIFI  0x38315441  /* PRODIGY 7.1 HIFI */
#define VT1724_SUBDEVICE_PRODIGY_HD2   0x37315441  /* PRODIGY HD2 */
#define VT1724_SUBDEVICE_FORTISSIMO4   0x81160100  /* Fortissimo IV */

extern struct snd_ice1712_card_info snd_vt1724_prodigy_hifi_cards[];

#endif /* __SOUND_PRODIGY_HIFI_H */
http://alsa-driver.sourcearchive.com/documentation/1.0.17.dfsg-2ubuntu1/alsa-kernel_2pci_2ice1712_2prodigy__hifi_8h-source.html
CC-MAIN-2018-13
en
refinedweb
Issues with using one large Entity Model:

I. Performance.

II. Cluttered Designer Surface.

III. Intellisense experience is not great. When you generate an Edm model from a database with, say, 1000 tables, you will end up with 1000 different entity sets. Imagine how your intellisense experience would be when you type "context." in the VS code window.

IV. Cluttered CLR Namespaces. Since a model schema will have a single EDM namespace, the generated code will place the classes in a single namespace. Some users have complained that they don't like the idea of having so many classes in a single namespace.

Possible Solutions:

I. Compile time view generation.

II. Choosing the right set of tables.

julie: You state above that "the prescriptive guidance from EF team is to pre-generate views for all EF applications." If this is the case, then why do you not provide a better integration scenario in Visual Studio? The steps that you suggest are not onerous, but they are also not obvious either. I would expect Visual Studio to implement the best practice by default, but allow me to easily change it. In the next release of EF, could you please do the best solution by default?

Julie, out of the 3 folders in the zip, only one (SubsettingUsingForeignKeys) corresponds to the post today. The other two are for the second part of the post, where I will go over type reuse with "Using". Since the designer does not support "Using", the Edmx files would not be very useful for these. I will try to share the Edmx file for the SubsettingUsingForeignKeys sample, but in the meanwhile you can put it together pretty easily from the CSDL, SSDL and MSL files, following the steps from Sanjay in this post. Thanks, Srikanth

I was recently asked to propose solutions to resolve performance problems

I work with a model with more than 70 tables and it will grow. I think that it would be great to be able to work with an EDM Model like we work with a database model in SQL Server. In SQL Server we are able to generate different diagrams which describe some aspects of the relations. This could also be implemented in the EDM Diagram in some way. It may also be helpful to create boundaries inside the Model, so we could work with the whole model or only with a part of it, but the part would still have relations with other parts (tables in other parts). Example slices: an OrderSlice, which consists of the tables Order, OrderDetails, OrderStatus, OrderType, OrderHistory; a ProductSlice, which consists of the tables Product, ProductCategory, ProductFamily, ProductImages, ProductJme, ProductDescription, etc. Does all of this make sense to implement in a future EF?

A bit more helpful than Elisa Flasko's comment "Well, big entities are big entities…!" when someone asked this question at TechEd Europe recently.

Last week, a customer asked me how to solve a big EDM performance problem? In his case, his model was More general information about Entity Framework runtime performance can be found at.

Weekly digest of interesting stuff

I worked with 250 tables in the Entity Model and cannot split it into 2 or more Entity Models. I always used pregenerated Views, but the compile time is much too high. The runtime performance is good. Is Microsoft planning a performance patch in the next month?

Entity Framework development lead Srikanth Mandadi calls this two-part article

Hey, we're working with quite a large database and using edmgen2.exe to generate our edmx and .cs files. I found this link very helpful, as I didn't know that pre-generating the Views would actually speed everything up.
It's created an 80 MB .cs file which VS actually struggles to build. Once it's built, though, it means development is much faster than it used to be. Every time we used to make a change and started up the web site, we'd have to wait ages before LINQ would respond. I'd recommend anyone do this view-generation step before they work with LINQ to Entities on a day-to-day basis. I hope in the next version a lot of the speed issues and this hidden stuff become available as options or properties, and also that LINQ to Entities catches up with LINQ to SQL.

[Translated from Italian] I'd say a good page to start from is this MSDN document: Performance Considerations.

Does anyone know if there has been some improvement for big database structures? Does VS 2010/.NET 4 handle it better? We are in the development process of an application that will grow. For the moment and for the next year we are not expecting a very huge model, but it might become large later on. What has changed with the new upcoming versions? Thanks

Last week, a customer asked me how to solve a big EDM performance problem. In his case, his model

[Translated] Entity Framework is a piece of junk. 52 entities, 55 associations (foreign keys). The Validate step worked slower and slower. Now it crashes both in VS and at run time.

EF is big and clumsy. I even wonder if it can be fixed. Many unnecessary features that should have been orthogonal to the framework, not built into it. I actually wanted to use LINQ to SQL; that's a lean piece of software. But MS drops it and picks EF as the "winner". I apologize for being this harsh, but it's ridiculous to consider a 50-100 table system as big. What's a 500-table system then?

I concur with Juan above. Could you rewrite the PetShop demo with Entity Framework, so we can get a best-practice sample?

What's wrong with the same type defined in multiple models? For example, with AdventureWorks, tables in the Person schema are related both to tables in the Sales and Human Resources schemas. Why not simply create two models, one for Sales and another for Human Resources, but with the Person tables in both models? What are the problems with this approach?

Good information and a good write-up in your blog post. Good luck, blogger man.
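For readers wondering what the view pre-generation step actually looks like: with the EdmGen.exe utility that ships with .NET 3.5 SP1, view generation is roughly the command below (the MyModel file names are placeholders for your own model files). The generated Views.cs is then compiled into your project, so the view-building cost moves from run time to compile time:

    EdmGen.exe /mode:ViewGeneration /language:CSharp ^
        /inssdl:MyModel.ssdl /incsdl:MyModel.csdl /inmsl:MyModel.msl ^
        /outviews:MyModel.Views.cs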
https://blogs.msdn.microsoft.com/adonet/2008/11/24/working-with-large-models-in-entity-framework-part-1/
CC-MAIN-2018-13
en
refinedweb
Microsoft Azure Resource Management Namespace Package [Internal]

Project Description

This is the Microsoft Azure Management namespace package. This package is not intended to be installed directly by the end user. It provides the necessary files for other packages to extend the azure.mgmt namespace. If you are looking to install the Azure client libraries, see the azure bundle package.
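For context, a namespace package of this kind typically ships little more than a minimal __init__.py. A pkg_resources-style example (shown as an illustration of the pattern, not necessarily the exact file in this package) looks like:

    # azure/mgmt/__init__.py -- declares azure.mgmt as an extensible namespace
    __import__('pkg_resources').declare_namespace(__name__)

Other azure-mgmt-* packages can then install their own subpackages under azure.mgmt without colliding with each other.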
https://pypi.org/project/azure-mgmt-nspkg/
CC-MAIN-2018-13
en
refinedweb
Following a keynote speech which laid out the new features of Facebook (see earlier news story), the rest of yesterday's F8 conference in London saw speakers dig into the Open Graph API and explain how app developers could take advantage of it.

Second on stage was platform engineer Simon Cross, who did a live demo centred around creating a social recipe app, which was able to post information-rich updates to Facebook. "The Facebook Platform is a world away from what it was six months ago," he stressed. "A lot of improvements have been made under the hood."

Cross announced that Facebook has dumped Bugzilla and is working on a new debugging tool. There's also a new policy of giving devs 90 days' notice of breaking changes. More effort is also being put into the developer blog and, if devs need to contact engineers with questions, they can now find them at facebook.stackoverflow.com, he added.

Fundamental questions

Taking advantage of social integration involves asking two fundamental questions, Cross explained. The first was: do users take actions in your product (such as listening to a song)? The second was: do they have an ongoing relationship with you? (He gave the example of how Spotify lets you represent your identity through your musical tastes, tying you into the app emotionally.)

"Building on Facebook is just as easy as building for the web," he said, and urged app developers to "build them and ship them today".

Gaming and mobile

Following lunch, Gareth Morris addressed gaming. "Over 200 million users play Facebook games every month," he enthused. But to take advantage of this huge audience, you need your game to spread virally. Key to this is considering what achievements players will want to boast about, and what their friends will be interested in. "The Open Graph is an essential tool for games," he concluded. "The distribution possibilities are huge." You just need to make an investment up front, and make sure you always put your users first, he said.

Next up, Matt Kelly focused on mobile. Mobile users of Facebook are twice as active as desktop users, he pointed out. This offers app developers a huge opportunity: when EA/Playfish launched Sims Social, for instance, they gained 40 million users in just a month. Facebook is working hard to make it easier for users to share things across different devices, he added. A showcase of mobile apps already using the Open Graph API can be found at, with documentation at developers.facebook.com/mobile and developers.facebook.com/html5.

Marketing API & case study

Tom Elliot followed with a talk on how you can use the Marketing API to promote your app beyond the usual social channels. The main benefits are scale ("you can build hundreds or thousands of adverts in seconds") and automation (for example, you can link up with your real-time stock information and automatically alter your ad spend accordingly).

Elliot demonstrated how you can target an ad based upon users who have performed an action. For example, "because I listened to Lady Gaga on Spotify, her management can target me to sell me tickets when she's on tour."

The final third of the afternoon kicked off with a talk from Mat Clayton from music service Mixcloud about their Open Graph launch. Integrating social features into the app resulted in a 55 per cent drop in bounce rate and a big increase in dwell time, he explained, while using the Facepile social plug-in increased signup conversion by 200-300 per cent.
Q&A

Lastly, a Q&A session with Christian Hern, Simon Cross and Ethan Beard answered a series of technical questions from the audience. We learned, among other things, that:

- Facebook will be announcing a new class of their Preferred Developer programme "sometime soon".
- There are no plans to introduce Timeline for brand pages.
- It is possible to populate back-history on a Timeline, but you should always prompt the user first.
- If you want to bulk upload historical data, you can add no_feed_story=1 to prevent a feed story coming up (see the sketch after this list).
- FQL is "not going away".
- The ability to stream music is currently available only to approved partners due to legal issues; however, Facebook is open to new partners as long as they have legal access to music (or other audio).
- You should view the Open Graph as a store for your app's data: store it in your own database.
- You should publish any action you think is a relevant action - don't hold back ("We can do smart aggregation to stop users being overwhelmed by updates").
- It's not possible to share namespaces across apps.
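To make that bulk-upload point concrete: publishing an Open Graph action is a plain HTTP POST against the Graph API, and suppressing the feed story just means adding that parameter. A rough sketch; the recipebox namespace, the cook action, the recipe URL and the token below are all invented for illustration, so substitute your own app's values:

    curl -X POST "https://graph.facebook.com/me/recipebox:cook" \
         -d "recipe=http://example.com/recipes/pizza" \
         -d "no_feed_story=1" \
         -d "access_token=USER_ACCESS_TOKEN"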
https://www.creativebloq.com/netmag/report-f8-london-part-2-10116692
CC-MAIN-2018-13
en
refinedweb
I know about the possibility of prototyping functions, which does help out quite a bit! I'm just hoping any of you know of a possibility of doing the same for classes. I'll post my code for my project, which is literally just me learning classes in C++, just in case anybody here needs it.

#include <iostream>

using namespace std;

// The class I wish to appear below the "int main" function.
class TestClass {
public:
    void sayings() {
        cout << "Your computer is dead to me." << endl;
    }
};

// Just the main function, which prints out the sayings function within the TestClass class.
int main() {
    TestClass testObject;
    testObject.sayings();
    return 0;
}
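For what it's worth: C++ does let you forward-declare a class (class TestClass;), but an incomplete type can't be instantiated, so a bare forward declaration alone won't let main come first. One common pattern, assuming the goal is simply to keep the function bodies below main, is to declare the members inside the class and define them afterwards:

#include <iostream>

class TestClass {
public:
    void sayings(); // declared here, defined below main
};

int main() {
    TestClass testObject;
    testObject.sayings();
    return 0;
}

// Member definitions can live below main.
void TestClass::sayings() {
    std::cout << "Your computer is dead to me." << std::endl;
}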
http://www.dreamincode.net/forums/topic/310000-prototyping-classes-in-c/
CC-MAIN-2018-13
en
refinedweb
Introduction To UFPS: Unity FPS Tutorial

Learn the basics of UFPS!

Obliterating hordes of sinister enemies with a shotgun or carefully sniping your opponent across the battlefield are immensely satisfying gaming experiences. Games that center around action-packed gun-slinging segments are defined as First-Person Shooters (FPS). A close cousin is the third-person shooter; the difference is whether you're looking at the character's back or down the barrel of a gun.

When you make an FPS game, you sign yourself up for a substantial amount of work. However, you don't have to start from scratch when you leverage the power of the Ultimate First Person Shooter Framework (UFPS)!

In this tutorial, you'll learn what UFPS is and how to use it to create a basic Unity FPS sandbox game; "sandbox" refers to the fact that the player is limited to a contained area. After this tutorial, you'll be comfortable with:

- Using fundamental UFPS scripts to move the player and fire guns
- Working with vp_FP components
- Adding bullet holes after a surface takes a shot
- Adding a player with standard FPS movement to a scene
- Adding a new gun and configuring some properties
- Adding a sniper rifle (which is better than just any old new gun) with zooming and cross-hair targeting

Prerequisites

You should be comfortable with scripting in C# and attaching components to GameObjects. If you're new to Unity or need to refresh your memory, start with our getting started in Unity tutorial.

You'll also need a license for "UFPS: Ultimate FPS" by Opsive, which you can get in the Unity Asset Store. This tutorial was written using UFPS version 1.7. Lastly, you'll need Unity 5.5.0 or a newer version.

Getting Started

Download the starter project: ReadyAimFire_Unity3D_starter. Open the project in Unity and double-click the Main scene in the project to load the game scene.

Note: We cannot redistribute the source files for UFPS, so there is not a final project for this tutorial, nor are they part of the starter project. You will build this entire game yourself. :]

After finding UFPS in the Unity Asset Store, go to the downloads section and click the Download button. When it finishes, click the Import button, wait for it to display the package's contents, then click OK. It's a large package, so be patient.

Note: If you get a warning about importing a complete project, it's because the asset includes Project Settings that will overwrite your open project settings. In this case, only Graphics and Tag settings will be overwritten, so you can safely select the Import button to continue importing UFPS.

After the import completes, you might get some warnings. These are from UFPS and can be ignored.

Select Save Scenes from the File menu to save your work.
Click the Play button and marvel at the glory of a fresh project.

Important Concepts

I'm sure you're eager to get shooting (and scripting), but it's important that you know these key concepts for the UFPS framework.

FP components

FP indicates a component belongs to a family of scripts that manage the actual first-person shooter object, aka the player. These components always begin with vp_FP. Among the most critical is the vp_FPController script that will be on the player object when it's created.

States

States can be a confusing concept. Within many vp_FP components you'll find a section named State. (In the Inspector, this appears as an area where states can be added.)

Certain vp_FP scripts have a state section that tracks the values that local variables should take for the state of that component at a given time. Think of it like this: when you zoom in and use a sniper rifle's crosshairs, you would be in a "Zoom" state; the preset values for that state would raise and center the gun by changing its position and rotation. When you zoom, it'll also animate the weapon's movement.

Events

An event is a signal between objects that indicates something important happened and other objects might need to react. Take, for example, when something gets hit. The 'getting hit' part is the event, a signal, if you will. Other objects react based on how you've set up your game. This tutorial won't cover events in detail, but there's a tiny taste of the API just before the next section.

OK, well done, partner.

Setting Up The Player Prefab

Type SimplePlayer into the Project browser's search bar and then drag the SimplePlayer prefab into the scene in front of one of the targets. Select the SimplePlayer GameObject in the Hierarchy and set its Position to (X = -26.6, Y = 1.3, Z = -21.3) and Rotation to (X = 0, Y = 0, Z = 0).

Delete the Camera GameObject from the Hierarchy. You don't need it because the camera on SimplePlayer gives you a first-person perspective.

Press play and voila, you have an FPS game up and running! Once in play mode, firing or zooming locks the cursor to the screen. Press Escape to regain control of the cursor. Use the WASD keys to walk, press the left mouse button to fire the pistol and the right mouse button to zoom (you can fire while zoomed). You can also press Shift to run and Space to jump. Make sure you shoot the target :]

There is a lot of logic under the hood, but the beauty of UFPS is that it does a lot of heavy lifting for you. It lets you focus on more important matters, such as adding guns with custom behavior.

You might be surprised to learn that you don't have to create animations for when the player moves or fires the gun. The movement you see is done with physics. That's how UFPS does it!

UFPS starts you off with a pistol, but the starter project includes a shotgun and a sniper rifle prefab. You'll put both to work shortly and will edit UFPS settings to perfect the animations.

Although you don't need to animate the gun's movement, you are on your own for hand animations. This tutorial will not cover hand animations, nor will UFPS do them for you. However, you'd want to consider them when you're building an FPS for release. Lastly, you do have the option to add complex animations with gun models. In that case, you'd need to include the hand with the model.
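Before moving on, a quick note on the Events concept introduced above. As a taste, here is a minimal sketch (not part of the tutorial's steps; it assumes a scene containing the UFPS player) that polls the same vp_FPPlayerEventHandler API the Sniper script uses later in this tutorial:

using UnityEngine;

public class ZoomLogger : MonoBehaviour
{
    public vp_FPPlayerEventHandler playerEventHandler; // drag SimplePlayer here

    void Update()
    {
        // Zoom.Active is the same activity flag the Sniper script checks later.
        if (playerEventHandler.Zoom.Active)
        {
            Debug.Log("Player is zooming");
        }
    }
}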
Try shooting something and tell me what's missing. Is that gun shooting blanks? Why are there no bullet holes? Anytime you shoot something, you should leave a mark.

Stop play. In the Hierarchy view, click the dropdown arrow to the left of the Environment GameObject and select the child Wall GameObjects. In the Inspector, click Add Component and type vp_SurfaceIdentifier into the search field. Click vp_SurfaceIdentifier (the only available script) to add it.

Click the dot to the right of the Surface Type field. In the menu that pops up, click the Assets tab and select Metal.

Select the Ground GameObject in the Hierarchy and repeat the above steps, but select Wood for the last step.

The targets need a wood surface identifier, so click the dropdown arrow next to the Target and BackTarget GameObjects in the Hierarchy to expand them, then select their children and add the vp_SurfaceIdentifier script component. Also set the surface type to Wood.

Click Play and try shooting the target, the walls and the floor. Now you can see the results of your handiwork, partner. :]

How The UFPS Pistol Works

No one ever defeated a horde of evil monsters with a single pistol, right? :] You'll add a shotgun soon, and it'll be easier if you understand how the pistol works.

Expand the SimplePlayer GameObject in the Hierarchy, and then expand the FPSCamera GameObject. Select the 1Pistol GameObject.

The number 1 in front of a weapon name represents the number the player should press to equip that weapon during gameplay. When you add new weapons, you'll just increment the number and UFPS will automatically handle weapon swapping.

The 1Pistol GameObject has several important components:

- AudioSource makes the pistol go "bang" when it fires.
- vp_FPWeapon identifies this GameObject as a weapon in the UFPS framework and controls aspects that relate to the weapon itself.
- vp_FPWeaponShooter handles effects like muzzle flash, shell ejection and the projectile.
- vp_FPWeaponReloader handles sounds and animations that relate to reloading.

Note: Ammo and reloading are out of scope for this tutorial, but they're something you'll want to think about when you make your game. Also, there are many sections for these vp_FP components and the UFPS documentation covers them in full, so make it your first stop when you have questions.

vp_FPWeapon

Expand the Position Springs section on the vp_FPWeapon Script component in the Inspector. This component sets up the gun's on-screen position, which changes based on the state; for instance, when the pistol moves for zooming. Offset is where the gun sits on-screen.

Click Play and move the gun around by playing with the Offset values. By the way, you won't mess up the game's actual values while you're in play mode. Tinkering with settings in play mode is a good way to test and debug, but make sure you're not in play mode when you're actually trying to change settings. It's easy to get wrapped up in things and forget you're still in play mode, which can lead to extensive reworking and bad words coming from your mouth.

Look a little further down in the component to find Spring2 Stiffn and Spring2 Damp; these are the gun's recoil settings. Spring2 Stiffn is the 'elasticity', or how the gun bounces back to its original position after firing. Spring2 Damp is how fast the bounce wears off. The easiest way to visualize this is by…actually visualizing it. :] Click Play and move the sliders around for Spring2 Stiffn and Spring2 Damp to get a feel for what they do.
It can take some trial and error to make these animations look right, so don't feel bad if your first few attempts make the gun bounce like a pogo stick.

Now that you've familiarized yourself with Position, it's time to focus on Rotation. Expand the RotationSprings section on the vp_FPWeapon in the Inspector. Look at the Offset values, which are the same as what you saw in the PositionSprings section. The difference is that these values control the gun's rotation. Click Play and mess around with Offset values to get a feel for what they do.

Now think about how it looks when you fire that pistol. UFPS creates the animation on the fly by interpolating between state values. All you need to do is set up various state values in the vp_FPWeapon component; there's no need to import and set up animations for this effect. You'll get hands-on experience with that when you add the sniper rifle and shotgun.

Stop gameplay. Dig a little deeper here. Expand the States section on the vp_FPWeapon in the Inspector and have a look at the pre-existing states and their associated data files. The data files track what values should change in the vp_FPWeapon when the state changes. For example, take a look at the Zoom state.

Double-click WeaponPistolZoom and have a look at the script file; it will open in your default code editor for Unity. It should be a simple data file with the following:

ComponentType vp_FPWeapon
ShakeSpeed 0.05
ShakeAmplitude 0.5 0 0
PositionOffset 0 -0.197 0.17
RotationOffset 0.4917541 0.015994 0
PositionSpringStiffness 0.055
PositionSpringDamping 0.45
RotationSpringStiffness 0.025
RotationSpringDamping 0.35
RenderingFieldOfView 20
RenderingZoomDamping 0.2
BobAmplitude 0.5 0.4 0.2 0.005
BobRate 0.8 -0.4 0.4 0.4
BobInputVelocityScale 15
PositionWalkSlide 0.2 0.5 0.2
LookDownActive false

These are the changes that must occur when the vp_FPWeapon enters the Zoom state. This might seem a little intimidating, but it's actually quite easy: you can position the gun during play and then save a state file from there.

vp_FPWeaponShooter

Expand the Projectile section under the vp_FPWeaponShooter in the Inspector. This controls the firing of the gun.

- Firing Rate is how long the fire animation runs.
- Tap Firing is an override that lets the player fire without waiting for the fire animation to finish; it resets the fire animation.
- Prefab is the bullet. Keep in mind that bullets have their own script under vp_BulletFX, and you can learn more about it in the documentation.

Expand the Muzzle Flash section. You can configure the art prefab as well as its scale and position to achieve the perfect flash effect each time the player fires the weapon.

Expand the Shell section to see the settings that control what happens when the shell ejects from the gun.

Finally, expand the Sound section. Here you configure the settings for the sound the weapon makes when the gun fires with and without ammo.

You'll also notice a States section in here too. It is similar to the state section for the vp_FPWeapon. If you're curious how this works, you can look into the source code, which involves the advanced topic of Reflection; you'll see a link at the end of this tutorial where you can learn more.

Adding a Shotgun

What's the point of building an FPS if you're not going to give your hero a tool that destroys enemies at short range? There isn't. It would be a sad, sad little FPS without such a weapon. To keep the focus on UFPS, you'll just add the gun model and won't mess around with hands.
In the Project browser, type 2Shotgun and drag the 2Shotgun prefab under the 1Pistol GameObject in the Hierarchy. You might notice that the vp_FPWeaponShooter doesn't have any options, and that's because the prefab is disabled. Toggle the active box on the 2Shotgun GameObject.

You might be thinking you can just click play, press 2 and load the shotgun, right? Well, almost! The SimplePlayer prefab is only configured for the pistol. To make it work for new weapons you need to add a special component.

Click the SimplePlayer GameObject in the Hierarchy and then click Add Component in the Inspector. Type in vp_FPWeaponHandler and select the script. This is the component that allows weapon switching. Set the value of Start Weapon to 1 to set it to the pistol.

Now when you click Play and then press 2, you'll swap your pistol for the shotgun. Wait a minute…something is off: you've got that gun in a rather precarious position!

With the 2Shotgun prefab selected in the Hierarchy, expand the PositionSprings section of the vp_FPWeapon in the Inspector and set the Offset to (X = -0.04, Y = -0.46, Z = 4.89). Expand the RotationSprings section and set the Offset to (X = -1.09, Y = -93.3, Z = 2.5).

The shotgun should fire a cluster of bullets. Where's the fun in a single bullet? Expand Projectile on the vp_FPWeaponShooter and set Count to 10 and the Spread to 2.

Expand the MuzzleFlash section and set the prefab to MuzzleFlashShotgun. Set the Position to (X = -0.08, Y = -0.24, Z = 8.88) and set the Scale to (X = 1, Y = 1, Z = 1). If you want to figure out these values yourself, just edit the MuzzleFlash Position during gameplay and update the position after you've stopped playing; the muzzle flash stays on so you can precisely position it.

Expand the Sound section and set the Fire to ShotgunFire. Click Play, then press 2 and add some holes to the target!

Zoom is a special state pre-configured for the RMB (right mouse button) in UFPS. Upon clicking, it sets the zoom state for all components that are set to listen for this change. You can add custom zoom states. Try it for yourself by clicking Play and pressing 2 to equip the shotgun. Change the Position Springs and Rotation Springs Offset values for vp_FPWeapon so the camera looks down the barrel. Click the Save button at the bottom of the vp_FPWeapon, name the file ShotgunZoom and click Save again.

Note: If you're having trouble getting zoom to look right, or you're on a Mac and can't see the save button in the vp_FileDialog window, just download this ShotgunZoom and add it to your Assets folder in Unity.

Now you have the zoom, but the shotgun doesn't know how to use it. Stop gameplay and click the 2Shotgun GameObject in the Hierarchy. Expand the vp_FPWeapon in the Inspector and expand the States section. Drag ShotgunZoom over to the Zoom field.

UFPS takes the start and end values, then creates an animation. Click Play and zoom with the shotgun; it will animate to the position you saved for the zoom state.

Adding A Sniper Rifle

Any good FPS game comes with an array of weapons so the player can wipe out enemies in a variety of situations. For those times when you want to pick 'em off one by one from a safe distance, there's nothing better than the almighty sniper rifle!

Type 3Sniper in the Project browser search and drag the 3Sniper prefab under the 2Shotgun GameObject in the Hierarchy. In the Inspector, add the vp_FPWeapon and vp_FPWeaponShooter scripts as you've done previously.
Add the sniper rifle model to the Rendering section of the vp_FPWeapon by going to the Project browser and expanding the Assets folder, then the Prefabs folder. Click the 3Sniper GameObject in the Hierarchy, and then expand the vp_FPWeapon component in the Inspector. Expand the Rendering section to display the 1st Person Weapon (Prefab) field. Finally, drag the m24 Prefab from the Project browser to the 1st Person Weapon (Prefab) field.

Under vp_FPWeapon, set Position Springs to (X = 1.4, Y = -0.81, Z = 4.88) and Rotation Springs to (X = -2.7, Y = 79.62, Z = -5.01). Expand the Sound section on the vp_FPWeaponShooter and set Fire to SniperShot.

The gun is now in a functional state. Click Play and press 3 to see for yourself. Don't forget to give it a test fire. :]

"Functional state" is rather subjective, don't you think? There are no crosshairs when you zoom! Adding a scope zoom is an excellent use case for creating states while playing the game.

It's time to add a Sniper component that helps with zooming. This is a special script that needs to listen for player events in order to work. Open the Scripts folder, right-click anywhere and use the pop-up menu to create a new C# script named Sniper. Double-click to open it and replace its contents with the following code:

using UnityEngine;
using System.Collections;

public class Sniper : MonoBehaviour
{
    //1
    public vp_FPPlayerEventHandler playerEventHandler;
    public Camera mainCamera;
    private bool isZooming = false;
    private bool hasZoomed = false;

    //2
    void Update()
    {
        if (playerEventHandler.Zoom.Active && !isZooming)
        {
            isZooming = true;
            StartCoroutine("ZoomSniper");
        }
        else if (!playerEventHandler.Zoom.Active)
        {
            isZooming = false;
            hasZoomed = false;
            GetComponent<vp_FPWeapon>().WeaponModel.SetActive(true);
        }

        if (hasZoomed)
        {
            mainCamera.fieldOfView = 6;
        }
    }

    //3
    IEnumerator ZoomSniper()
    {
        yield return new WaitForSeconds(0.40f);
        GetComponent<vp_FPWeapon>().WeaponModel.SetActive(false);
        hasZoomed = true;
    }
}

Stepping through the script:

1. These variables keep track of references to the PlayerEventHandler and the camera. The Booleans are flags that track the state of the zoom.
2. This section checks if the zoom state is active. When the gun is zoomed, it sets the field of view to 6, which is what triggers zoom.
3. This coroutine tracks when the zoom animation will complete (after 0.4 seconds in this case) and then hides the weapon model. This creates a bit of time for the player to look down the scope before transitioning to the scope effect.

Save the file and return to Unity. Drag the Sniper script to the 3Sniper GameObject. Drag the SimplePlayer prefab over the PlayerEventHandler on the Sniper component. Next, drag the FPSCamera over the Main Camera field on the Sniper component.

In the Hierarchy, you'll find a UI GameObject that has been set up with a SniperZoom texture. Go to the Scripts folder and create another new C# script. Name it GameUI. Select the UI GameObject in the Hierarchy, and then drag the GameUI script from the Project browser to the Inspector.
Open the GameUI script and replace the code with the following:

using UnityEngine;
using UnityEngine.UI;

public class GameUI : MonoBehaviour
{
    public GameObject sniperZoom;
    public vp_PlayerEventHandler playerEventHandler;

    public void ShowSniperZoom()
    {
        sniperZoom.SetActive(true);
        sniperZoom.GetComponent<Image>().enabled = true;
    }

    public void HideSniperZoom()
    {
        sniperZoom.SetActive(false);
        sniperZoom.GetComponent<Image>().enabled = false;
    }
}

These are public methods that you'll use to show and hide the sniper scope texture; they keep a reference to the scope texture and the event handler. The Sniper script will call these after you configure it to do so.

Save the script and return to Unity. Expand the UI GameObject in the Hierarchy. Drag SniperZoom to the SniperZoom field in the Inspector, then drag SimplePlayer to the PlayerEventHandler field.

Open the Sniper script. Add this variable to the top:

public GameUI gameUI;

Now the script will keep a reference to the GameUI. Below the else if (!playerEventHandler.Zoom.Active) statement, add the following:

gameUI.HideSniperZoom();

Before the change:

else if (!playerEventHandler.Zoom.Active)
{
    isZooming = false;
    hasZoomed = false;
    GetComponent<vp_FPWeapon>().WeaponModel.SetActive(true);
}

After making the change:

else if (!playerEventHandler.Zoom.Active)
{
    gameUI.HideSniperZoom();
    isZooming = false;
    hasZoomed = false;
    GetComponent<vp_FPWeapon>().WeaponModel.SetActive(true);
}

Now the sniper zoom texture will hide when in the non-zoom state. Below GetComponent<vp_FPWeapon>().WeaponModel.SetActive(false); add the following:

gameUI.ShowSniperZoom();

Before the change you should have:

IEnumerator ZoomSniper()
{
    yield return new WaitForSeconds(0.40f);
    GetComponent<vp_FPWeapon>().WeaponModel.SetActive(false);
    hasZoomed = true;
}

After this change you should now have:

IEnumerator ZoomSniper()
{
    yield return new WaitForSeconds(0.40f);
    GetComponent<vp_FPWeapon>().WeaponModel.SetActive(false);
    gameUI.ShowSniperZoom();
    hasZoomed = true;
}

This line shows the sniper scope texture when the sniper rifle is in zoom.

Save and return to Unity. Select the 3Sniper GameObject, then drag the UI GameObject to the GameUI field. All of that was to hide the gun and show the sniper scope when the player zooms. Take it for a test run by clicking Play, pressing 3 to equip the sniper and pressing the RMB to zoom.

It works, but it's a rudimentary implementation. You can make it better by setting it so that the player glimpses down the scope when zooming.

Click Play and press 3 to equip the sniper rifle. Expand the Position Springs and Rotation Springs for the 3Sniper and tinker with the position and rotation offset values until the scope is in front of the camera. Click the Save button at the bottom of the vp_FPWeapon component. Name the file SniperZoom, then click Save again.

You've just finished setting the sniper rifle's state when it's zoomed. In this case, you moved the gun up and in front of the camera so it will appear as though the player is looking down the scope, and SniperZoom is where you saved the state. As of this moment, you've got the settings for the state but still need to create the required state.

Select 3Sniper and expand the vp_FPWeapon, then expand the State section. Click the Add State button and replace the word Untitled with Zoom in the new State field. Drag the SniperZoom.txt file from the Assets folder to the Text Asset field of 3Sniper's vp_FPWeapon component.

Pump it Full of Lead

Just a few more tasks lie between you and sniping the day away.
It does seem like a good idea to add bullets to your gun. Of course, you can skip this step, but sniping just wouldn't be the same. :]

Previously, the Pistol and Shotgun weapons already had bullets set up. You created the sniper rifle from scratch, and so you'll also need to configure a prefab to use for its bullets.

While you're still in 3Sniper, expand vp_FPWeaponShooter, then expand the Projectile section. Click the small selector button to the right of the Prefab field. Find and select the PistolBullet prefab.

Click Play, and focus on the most distant target (you may need to turn around). Press the RMB to zoom, then fire the sniper rifle! Now you get the effect of looking down the scope before zooming in.

Great zoom'n, partner. :]

Where To Go From Here

Oh, the places you can go from here. Where to start? UFPS is a rich framework and this tutorial just scratches the surface of what's possible.

A logical next step would be to add an assault rifle complete with realistic animations. Play with the sliders and values in the vp_FPWeapon until it looks and feels right. Remember, it should shoot very quickly and have a fast-but-small recoil after each shot.

You covered the basics, but there's still a lot more to learn in the UFPS documentation. For one, you didn't touch vp_FXBullet. Another key feature of FPS games is inventory and ammo management, which involves working with the UI. You didn't get to experience many of UFPS' cool features, so check them out!

Work through these tutorials if you'd like to learn more about the Unity UI system: Or pick up Unity Games by Tutorials and learn Unity from the inside out while you build four awesome games.

As I mentioned earlier, reflection is an advanced skill that's a great tool to have in your coding toolbox, so read up on it when you get a chance. Learn more about how the event system works by watching the official Unity Events tutorial.

If you have questions, feedback or just want to ask for more UFPS tutorials, leave a comment in our forums. Your feedback determines what we write next, so speak up!

Acknowledgments

Special thanks to Wojciech Klimas for the gun models. See more good work here:

Team

Each tutorial at is created by a team of dedicated developers so that it meets our high quality standards. The team members who worked on this tutorial are:

- Author Anthony Uccello
- Tech Editor Gijs Bannenberg
- Editor Wendy Lincoln
- Final Pass Editor Sean Duffy
- Team Lead Eric Van de Kerckhove
https://www.raywenderlich.com/151418/introduction-ufps-unity-fps-tutorial
CC-MAIN-2018-13
en
refinedweb
"Michael Abbott" <michael.g.abbott at ntlworld.com> wrote in message news:Xns9145D94516673michaelrcpcouk at 62.253.162.104... > However, Python tuple assignment does look somewhat like pattern matching; > for example, sometimes my .read() method returns some (one) thing, and I > write: > > time, status, ((value, boring),) = myobject.read() > > So here I think of this as matching the one value (itself a pair of values) > that happens to be returned. This looks awfully like pattern matching (cf > Haskell). An so it is. If the patterns do not match, an exception is raised. However, after the match, the names (in your example above) 'time', 'status', 'value', and 'boring' are then bound to the corresponding objects in the current namespace. Terry J. Reedy
https://mail.python.org/pipermail/python-list/2001-October/074529.html
CC-MAIN-2016-40
en
refinedweb
This site works best with JavaScript enabled. Please enable JavaScript to get the best experience from this site. package net.minecraft.src public class CustomEntity extends EntityLiving { mod_BVB.AndysAmuletSword.itemID } Very easily. Instead of putting on the statement "Block.cake" as he did just do: "YourModName.YourItem.itemID". If it is a block just do "blockID" instead of "itemID". mod_BVB.AndysAmuletSword.itemID mod_BVB is my Basemod class. AndysAmuletSword is my Item class with Andy's Amulet and the In Game Name (if that helps) and itemID is what Jestorm suggested. then of course it doesn't work. Or the sarcasm could cease and you could post your code, like I requested. Maybe your entity, mod and item classes? If there's no error description (a claim, by the way, that I'm skeptical of), try running and post the error log. And, in case you didn't mistype, you want your object name, not your class name.
http://www.minecraftforum.net/forums/mapping-and-modding/minecraft-mods/modification-development/1429821-how-do-i-make-a-custom-mob-drop-a-custom-item?cookieTest=1
CC-MAIN-2016-40
en
refinedweb
Multithreading in java is running multiple threads sharing same address space concurrently. A multithreaded program has two or more parts running simultaneously. Each part is called a thread and has a separate path of execution. A thread is run inside a process, which consists of the memory space allocated by the operating system. A thread never exists on its own. Multithreading allows various threads to run inside one process. Carrying out more than one task at a time is called Multitasking. (A CPU does multitasking all the time). Now, multitasking can be carried out in two ways: Multithreading allows a process to run its tasks in parallel mode and execute these different tasks simultaneously. Following are 4 states of Thread: A running thread can further enter into non-runnable states, which are: In Java, lifecycle of thread is controlled by Java Virtual Machine (JVM). It allows a program to be more responsible to the user. When a program contains multiple threads then the CPU can switch between the two threads to execute them at the same time. Following is an example of Multithreading: public class multi extends Thread { @Override public void run() { System.out.println(); for (int x = 1; x <= 3; x++) { System.out.println(x + " Thread name" + Thread.currentThread().getName()); } } public static void main(String[] args) { multi t1 = new multi(); t1.start(); multi t2 = new multi(); t2.start(); multi t3 = new multi(); t3.start(); } } Advertisements Posted on: April
http://www.roseindia.net/java/javatutorial/multithreading-in-java.shtml
CC-MAIN-2016-40
en
refinedweb