url | tag | text | file_path | dump | file_size_in_byte | line_count
---|---|---|---|---|---|---
https://informer.richxswap.com/blockchain-in-7-minutes-what-is-blockchain-blockchain-explainedhow-blockchain-workssimplilearn/ | code | 🔥 Enroll for FREE Blockchain Course & Get your Completion Certificate: https://www.simplilearn.com/learn-blockchain-basics-skillup?utm_campaign=Skillup-Blockchain&utm_medium=DescriptionFirstFold&utm_source=youtube
This Blockchain video will help you understand what led to the creation of Blockchain, what Blockchain is, how a Bitcoin transaction works, how Blockchain plays an integral role in it with features like hash encryption, proof of work and mining, and how Blockchain technology is used in real-life scenarios. Now, let’s dive into this video and understand the basics of Blockchain and how Blockchain works
To learn more about Blockchain, subscribe to our YouTube channel: https://www.youtube.com/user/Simplilearn?sub_confirmation=1
Watch more videos on Blockchain: https://www.youtube.com/playlist?list=PLEiEAq2VkUUKmhU6SO2P73pTdMZnHOsDB
#Blockchain #Whatisblockchain #BlockChainExplained #Blockchaintutorial #Bitcoin #Blockchainonlinetraining #Blockchainforbeginners #BlockchainTechnology #Simplilearn
Simplilearn’s Blockchain Certification Training has been designed for developers who want to decipher the global craze surrounding Blockchain, Bitcoin and cryptocurrencies. You’ll learn the core structure and technical mechanisms of Bitcoin, Ethereum, Hyperledger and Multichain Blockchain platforms, use the latest tools to build Blockchain applications, set up your own private Blockchain, deploy smart contracts on Ethereum and gain practical experience with real-world projects.
Why learn Blockchain?
Blockchain technology is the brainchild of Satoshi Nakamoto and enables digital information to be distributed. A network of computing nodes makes up the Blockchain. Durability, robustness, success rate, transparency and incorruptibility are some of its enticing characteristics. By design, Blockchain is a decentralized technology used by a global network of computers to manage Bitcoin transactions easily. Many new business applications will result from the usage of Blockchain, such as crowdfunding, smart contracts, supply chain auditing, etc.
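To make the hash-linking and proof-of-work ideas mentioned above concrete, here is a small, self-contained Python sketch. It is illustrative only and not part of the course material; the block structure, field names and difficulty value are invented for the example.

```python
# Illustrative sketch: chaining blocks by hashes and "mining" with a toy
# proof-of-work, using only the Python standard library.
import hashlib
import json
import time

def hash_block(block: dict) -> str:
    # Serialize the block deterministically and hash it with SHA-256
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine_block(previous_hash: str, transactions: list, difficulty: int = 3) -> dict:
    # Proof of work: find a nonce so the block hash starts with `difficulty` zeros
    block = {"timestamp": time.time(), "transactions": transactions,
             "previous_hash": previous_hash, "nonce": 0}
    while not hash_block(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block

genesis = mine_block(previous_hash="0" * 64, transactions=["genesis"])
next_block = mine_block(previous_hash=hash_block(genesis),
                        transactions=["Alice pays Bob 1 BTC"])
print(hash_block(genesis), hash_block(next_block))
```

Because each block stores the hash of the previous one, changing any earlier transaction would invalidate every later hash, which is what makes the chain tamper-evident.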
This Blockchain Certification course offers a hands-on training covering relevant topics in cryptocurrency and the wider Blockchain space. From a technological standpoint, you will develop a strong grasp of core Blockchain platforms, understand what Bitcoin is and how it works, learn key vocabulary and concepts commonly used when discussing Blockchain and understand why engineers are motivated to create an app with Ethereum.
After completing this course, you will be able to:
1. Apply Bitcoin and Blockchain concepts in business situations
2. Build compelling Blockchain applications using the Ethereum Blockchain
3. Design, test and deploy secure Smart Contracts
4. Use the latest version of Ethereum development tools (Web3 v1.0)
5. Develop Hyperledger Blockchain applications using Composer Framework
6. Model the Blockchain applications using Composer modeling language
7. Develop front-end (client) applications using Composer API
8. Leverage Composer REST Server to design a web-based Blockchain solution
9. Design Hyperledger Fabric Composer Business Network
10. Understand the true purpose and capabilities of Ethereum and Solidity
The Blockchain Certification Training Course is recommended for:
1. Technologists interested in learning Ethereum, Hyperledger and Blockchain
2. Technology architects wanting to expand their skills to Blockchain technology
3. Professionals curious to learn how Blockchain technology can change the way we do business
4. Entrepreneurs with technology background interested in realizing their business ideas on the Blockchain
Learn more at: https://www.simplilearn.com/blockchain-certification-training?utm_campaign=Blockchain&utm_medium=Description&utm_source=youtube
For more updates on courses and tips follow us on:
– Facebook: https://www.facebook.com/Simplilearn
– Twitter: https://twitter.com/simplilearn
– LinkedIn: https://www.linkedin.com/company/simplilearn
– Website: https://www.simplilearn.com
Get the Android app: http://bit.ly/1WlVo4u
Get the iOS app: http://apple.co/1HIO5J0 | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817674.12/warc/CC-MAIN-20240420184033-20240420214033-00499.warc.gz | CC-MAIN-2024-18 | 4,146 | 33 |
https://aprime.newgrounds.com/news/post/977065 | code | Hey guys, I've created a new game!
For the first time, I've decided to work with others on the production of the game. Previously I didn't just do the code, but also the art and animation (or lack thereof).
I've been fortunate enough to work with really great people on this project.
Please check out the game here. Let me know what you think. I'll still be updating it. | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670987.78/warc/CC-MAIN-20191121204227-20191121232227-00505.warc.gz | CC-MAIN-2019-47 | 365 | 4 |
https://opencircuit.nl/Blog/Raspberry-Pi-Real-Time-Clock/Main-Code | code | So now the last thing to do is to put everything together in an everlasting loop (the exception being the keyboard interrupt "Ctrl + C", which will stop the code and clean everything up with "GPIO.cleanup()").
So first we have to get the time with our GetTimeToDigit function and safely store the returned variables.
Then we can Show the first digit of the hour with the DisplayDigit function and just repeat this with the other digits with the required amount of delay between the two.
What is going to be handy is to put the everlasting loop in a try block so that when you press "Ctrl + C" it will stop the program in a clean way.
So the main loop is going to look like this
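A hedged sketch of what that main loop could look like follows; the exact signatures of the GetTimeToDigit and DisplayDigit helpers and the delay value are assumptions based on the description above.

```python
# Sketch of the main loop, assuming GetTimeToDigit and DisplayDigit are the
# helpers defined in this tutorial's clock.py (signatures assumed).
import time
import RPi.GPIO as GPIO
from clock import GetTimeToDigit, DisplayDigit

try:
    while True:  # the everlasting loop
        # Get the current time split into its four digits and store them safely
        hour_tens, hour_ones, minute_tens, minute_ones = GetTimeToDigit()

        # Show each digit in turn with a short delay between them
        for position, digit in enumerate([hour_tens, hour_ones,
                                          minute_tens, minute_ones]):
            DisplayDigit(position, digit)
            time.sleep(0.005)
except KeyboardInterrupt:
    # Ctrl + C ends the loop and cleans up the GPIO pins
    GPIO.cleanup()
```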
Don't forget to put the library code in a clock.py file that is in the same folder as the main file, because otherwise it won't work. | s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704821253.82/warc/CC-MAIN-20210127055122-20210127085122-00569.warc.gz | CC-MAIN-2021-04 | 773 | 6 |
https://www.honeybadger.io/blog/authors/jonathanmiles/ | code | Jonathan began his career as a C/C++ developer but has since transitioned to web development with Ruby on Rails. 3D printing is his main hobby but lately all his spare time is taken up with being a first-time dad to a rambunctious toddler.
Rails' date and time helpers are great. They save us from duplicating simple add-duration-to-time logic across our applications and make the code more readable. However, complex date manipulations are dangerous places full of edge-cases. This article discusses some of them. 😅
PostgreSQL and MySQL are great for structuring data using relationships, but what if you don't always know the structure up-front? That's where ActiveRecord::Store really shines. It's like NoSQL, without changing databases.
You've probably used Rails.cache to read, write, and fetch cached data in Rails—but did you know you can also work with counters? In this series, Jonathan Miles introduces us to some of the lesser-known tools hidden in your Rails codebase.
The #descendants method is part of Rails. It returns all subclasses that inherit from a given class. In this article, Jonathan Miles shows us how to use this method and how it's implemented. It's a great lesson in the ins and outs of Ruby's object model.
Race conditions are arguably the most insidious kind of bug; they're intermittent, subtle, and most likely to occur in production. ActiveRecord's update_counter provides us with a convenient way to avoid race conditions when incrementing or decrementing values in the database. In this article, Jonathan Miles shows us how to use it, how it's implemented, and other approaches to avoiding race conditions.
If you've ever checked the environment in your Rails app with Rails.env.production? you've used a fascinating little utility class called StringInquirer. In this post, Jonathan Miles dives into the rails codebase to show us exactly how StringInquirer works and how we can bring a little of its magic to our own apps.
The fastest web page is one you've already loaded. Browsers love to avoid round-trips by caching assets. And HTTP provides ways for us to tell browsers what's changed and what hasn't - so they make the right decisions. In this article, Jonathan Miles introduces us to HTTP caching and shows us how to implement it in Rails.
ActiveRecord makes accessing your database easy, but it can also help make it faster by its intelligent use of caching. In this article, Jonathan Miles shows us the tricks that Rails uses to ensure that your database isn't doing more work than it needs to.
If you've ever built a UI in Rails, you've probably noticed that views tend to get slower over time. That's because adding features to a UI often means adding DB queries to the view. They add up. Fortunately, Rails provides us with an easy-to-apply band-aid in the form of view caching. In this article, Jonathan Miles introduces us to view caching, discusses when it's appropriate to use, and covers common pitfalls to watch out for.
Sometimes when your app is slow, it's not your fault. Your code might be optimized to the teeth, but it won't matter if it has to perform intrinsically slow tasks, like fetching data from an external API. In these situations, Rails' low-level caching can be a life-saver. But caching is infamously tricky. It's dangerous to go alone. In this article, Jonathan Miles guides us through the landscape of low-level caching. He covers the basics, but more importantly, digs into essential details of cache invalidation and points out common pitfalls.
Whoever first said that "the fastest code is no code" must have really liked memoization. After all, memoization speeds up your application by running less code. In this article, Jonathan Miles introduces us to memoization. We'll learn when to use it, how to implement it in Ruby, and how to avoid common pitfalls. Buckle up!
We've all been there. You're clicking around your Rails application, and it just isn't as snappy as it used to be. You start searching for a quick-fix and find a lot of talk about caching. Take your existing app, add some caching, and voila, a performance boost with minimal code changes. However, it's not this simple. Like most quick fixes, caching can have long-term costs. In this article, Jonathan Miles discusses what caching is and what can go wrong, as well as explains non-caching strategies you can use to speed up your Rails app.
We've all worked with tightly-coupled code. If a butterfly flaps its wings in China, the unit tests break. Maintaining a system like this is...unpleasant. In this article, Jonathan Miles dives into the origins of tight-coupling. He demonstrates how you can use dependency injection (DI) to decouple code. Then he introduces a novel decoupling technique based on delegation that can be useful when DI is not an option. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818464.67/warc/CC-MAIN-20240423033153-20240423063153-00332.warc.gz | CC-MAIN-2024-18 | 4,810 | 17 |
https://www.moonlitdreams.org/shop | code | All e-book sales are final!
Ebooks do not automatically download to the Kindle App. If you are unfamiliar with how to download ebooks, please see the FAQ Page for directions!
Options for book merch items can be found via our
Moonlit Dreams Designs store.
All items are drop-shipped directly from these sites; therefore, we cannot currently guarantee shipping times.
The currency converter can be used to see what the (current) rate of exchange is between the US Dollar (which we use) and the currency for your country.
If your country is not represented, but you wish it was there, please let us know and we'll add it. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506320.28/warc/CC-MAIN-20230922002008-20230922032008-00840.warc.gz | CC-MAIN-2023-40 | 615 | 7 |
https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.resmgmt.doc/GUID-FEAC3A43-C57E-49A2-8303-B06DBC9054C5.html | code | Many workloads present opportunities for sharing memory across virtual machines.
For example, several virtual machines might be running instances of the same guest operating system, have the same applications or components loaded, or contain common data. ESXi systems use a proprietary page-sharing technique to securely eliminate redundant copies of memory pages.
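As a rough mental model (purely illustrative, and not how ESXi actually implements transparent page sharing), the idea amounts to keying pages by a hash of their contents so that identical pages are backed by a single physical copy:

```python
# Conceptual illustration only: de-duplicating identical memory pages by
# content hash, the basic idea behind page sharing.
import hashlib

def share_pages(pages):
    store = {}        # content hash -> single shared copy
    page_table = []   # each logical (VM) page points at a shared copy
    for page in pages:
        digest = hashlib.sha256(page).hexdigest()
        store.setdefault(digest, page)
        page_table.append(digest)
    return store, page_table

# Three VMs booted from the same guest OS image share many identical pages
pages = [b"kernel page" * 512, b"kernel page" * 512, b"app data" * 512]
store, table = share_pages(pages)
print(f"{len(pages)} logical pages backed by {len(store)} physical copies")
```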
With memory sharing, a workload consisting of multiple virtual machines often consumes less memory than it would when running on physical machines. As a result, the system can efficiently support higher levels of overcommitment.
The amount of memory saved by memory sharing depends on workload characteristics. A workload of many nearly identical virtual machines might free up more than thirty percent of memory, while a more diverse workload might result in savings of less than five percent of memory. | s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823445.39/warc/CC-MAIN-20181210212544-20181210234044-00399.warc.gz | CC-MAIN-2018-51 | 869 | 4 |
https://forum.solidworks.com/thread/221685 | code | I have a message box that requires several long lines. Here is the example - it's all one line in the code though. As you can see, it's rather long for just one line.
MsgBox ("This drawing does not appear to have a revision letter. Make sure the Revision Block is properly updated and that the correct revision letter is in the Revision custom property field. After the revision letter has been corrected, please try again.") & vbNewLine & vbNewLine & ("Remember, sometimes the Revision custom property link is disturbed by manual input and must have the current revision letter manually entered before it automatically updates with the revision block again." & vbNewLine & vbNewLine & "Check the PDF folder for this file name without a revision letter.")
Is there an easy way in VBA to wrap a long text string into multiple lines in the code so I don't have to scroll so far to the right to modify it? | s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107907213.64/warc/CC-MAIN-20201030033658-20201030063658-00463.warc.gz | CC-MAIN-2020-45 | 902 | 3 |
https://docs.wandisco.com/live-data-platform/docs/faq/ | code | Find answers to the most common questions asked about LiveData Migrator.
What are the supported operating systems for LiveData Migrator?
LiveData Migrator supports the following Linux-based operating systems:
- Ubuntu 16 and 18
- CentOS 6 and 7
- Red Hat Enterprise Linux 6 and 7
Where should I install LiveData Migrator for production use?
LiveData Migrator needs to be installed on an edge node in your Hadoop cluster. The edge node should have Java 1.8 and Hadoop clients (for example: HDFS client, Hive client, Kerberos client) installed but without any co-located/competing services. We recommend that the node's resources are dedicated to the running of LiveData Migrator.
Does LiveData Migrator support Kerberos authentication?
If your Hadoop cluster has Kerberos enabled, ensure that the edge node has a valid keytab containing a suitable principal for the HDFS superuser.
If you are wanting to migrate Hive metadata from your Hadoop cluster, the edge node must also have a keytab containing a suitable principal for the Hive service.
How can I test LiveData Migrator?
If you want to test LiveData Migrator before you install it on a production environment, use our HDFS Sandbox solution as the source filesystem. Your ADLS Gen2 storage account and container will be the target filesystem.
The Sandbox option can be selected when you create the LiveData Migrator resource through the Azure Portal.
How do I control what is or isn't migrated?
When you migrate data, you select a path on your source filesystem (HDFS) to migrate data from. LiveData Migrator for Azure will only migrate the files and subdirectories contained in this path to your target filesystem (ADLS Gen2 container).
You can also exclude certain files and directories from being migrated within this path by creating exclusion templates. Exclusion templates are used to prevent files and directories from being migrated based on their size, last modified date, and name.
How do I move my LiveData Migrator resource from one region to another?
Currently, you can't move a LiveData Migrator resource from one region to another.
We recommend you delete the existing resource and recreate it in your desired region. Follow these steps:
What costs will LiveData Migrator incur on my Azure subscription during the trial?
The first 5TB of data migration is free. We'll bill you for anything over this allowance.
How can costs be minimized during/after the trial period? Is there any option to turn down compute when data is not being replicated from on-prem into Azure Data Lake Store?
Cost is calculated based on the number of transactions. You won't incur costs if there is little or no activity.
What network requirements do I need for LiveData Migrator for Azure?
You need to set up your network before you install LiveData Migrator for Azure. See the Network Requirements guide to learn how to set up your virtual network, see the port requirements, and more. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819273.90/warc/CC-MAIN-20240424112049-20240424142049-00530.warc.gz | CC-MAIN-2024-18 | 2,940 | 26 |
https://www.libhunt.com/l/java/topic/chat-sdk | code | Java chat-sdk Projects
Kommunicate Live Chat SDK
Kommunicate.io Android Chatbot SDK. Project mention: Conversational AI is Taking Over: We Called It | reddit.com/r/resultid | 2023-05-14
If you are someone, looking for an AI chatbot, check out Kommunicate - https://www.kommunicate.io/
NOTE: The open source projects on this list are ordered by number of GitHub stars. The number of mentions indicates repo mentions in the last 12 months or since we started tracking (Dec 2020). The latest post mention was on 2023-05-14.
Java chat-sdk related posts
Adventures in Tracking Upload Progress With OkHttp and Retrofit
2 projects | dev.to | 10 Dec 2021
Building iOS Chatbot with Dialogflow
4 projects | dev.to | 11 May 2021
| 1 | Kommunicate Live Chat SDK | 69 |
https://www.houstonisd.org/domain/51654 | code | Austin High School Bell Schedule
HOW DO STUDENTS ACCESS THEIR CLASSES?
Microsoft Teams Login:
Link to Microsoft Login - https://tinyurl.com/AHSTEAMS
Student Login - User: Student ID with an S in front of it ([email protected])
Password: The password you use to get into your laptops, students DOB (MMDDYYYY)
Example: If my student ID # is 123456 my login will be [email protected]
Example: If my birthday is August 10, 2008 my password will be 08102008
HUB Student Access
The HISD HUB can be accessed from any device or location. Please follow the steps below and if you are still unable to login, please clear the browser cache, restart the computer and then try to login again.
- Navigate to https://houston.itslearning.com/index.aspx
- You will be taken to the HISD Single Sign-On Service page
- The student’s username should be input as: STUDENT\S####### where the #’s are the student’s ID number.
- The student’s password should be their eight-digit date of birth (no slashes): MMDDYYYY
WHERE CAN THE PARENTS AND STUDENTS SEE THEIR SCHEDULE, ATTENDANCE, AND GRADES?
HISD CONNECT PARENT ACCESS:
The district has a new Student Information System (SIS) for the 2020-2021 school year. HISD Connect by PowerSchool includes student contact, enrollment, and demographic information, as well as grades and online resources.
Parents will be given a unique code, or access ID, for each of their students and will be able to use those codes to set up an account to access their students' profiles through a new parent portal. Parents will receive their student’s access ID from their school by September 14. Parents who haven't received their student's access ID by then should contact their school.
HISD CONNECT STUDENT ACCESS:
- Students can login now using their HISD credentials at the same portal as their parents: https://hisdconnect.houstonisd.org/public/
- Student Login credentials: Network username (your S#######) and password (Do not use @email address or place student\ in front of username)
HELPFUL RESOURCES FOR STUDENTS AND PARENTS:
Student’s login instructions LINK: https://tinyurl.com/AHSOnlineClass
Video #1 Teams and Daily Schedule: https://youtu.be/87EELSIfSEE
Video #2 AHS New Student Orientation: https://youtu.be/_emwp_bq_iw
HISD is working on getting hotspots for all the students who need internet. We will call the parents/students when their hotspot arrives. In the meantime, here are other options for students while waiting on the HISD hotspot:
Houston Libraries: You can check-out a hotspot, it must be an adult checking it out, you need to get a library card which is free.
Houston Churches providing wifi:
Students - Having Trouble with your Technology? Student Tech Ticket Support - CLICK HERE | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224645417.33/warc/CC-MAIN-20230530063958-20230530093958-00212.warc.gz | CC-MAIN-2023-23 | 2,751 | 29 |
https://us.jobs/jobs/the-bank-of-new-york-mellon/principal-java-developer/1565604186491042277 | code | Job Description - Principal Java Developer (Job Number: 1901695)
Description: The Clearance and Collateral Technology (CCT) group is responsible for building technology to support three of the most critical services of BNY Mellon - US Government Securities Clearance, US Triparty Repo and Global Collateral Management.
Role: We are looking for strong developers who are excited about being part of the FED Clearing platform. They will have the unique opportunity to work on, optimize and build a sophisticated high-performance Clearance system using the latest technology stack and best software engineering practices. They will become members of the core team driving it forward and will have a significant impact on how it's implemented.
Principal Developer: Consults with internal business groups to provide high-level application software development services or technical support. Provides comprehensive senior-level technical consulting to IT management and senior technical staffs. Evaluates compliance with the organization's technology standards. Works with internal business groups on implementation opportunities, challenges, and requirements of various applications. Analyzes information and provides recommendations to address and resolve business issues for a specific business group. Guides and consults with IT management and technical staffs regarding use of emerging technologies and associated services. Participates in defining corporate implementation and integration strategies of new technologies. Advocates for innovative, creative technology solutions. Contributes to the achievement of area objectives. Bachelor's degree in computer science, engineering or a related discipline, or equivalent work experience required; 10-12 years of experience in software development required; experience in the securities or financial services industry is a plus.
Qualifications:
- BS or MS (Computer Science, Math, Physics, Engineering)
- 7 years overall experience
- Strong CS fundamentals, core Java, IO, multithreading, collections and problem-solving skills are required
- Strong skills in design & development using the Spring framework, Hibernate, Angular UI and REST APIs
- Experience in writing Oracle SQL and performance tuning in a distributed environment
- Exposure to agile development, continuous integration/deployment, Selenium or Karma/Protractor tools
- Experience in the financial industry
Primary Location: United States-USA-NJ-Jersey City
Other Locations: United States-USA-NY-New York
Internal Jobcode: 45198
Job: Information Technology
Experience Level: Experienced
Organization: Clearing Markets ISS Svcs Tech-HR16624
Associated topics: algorithm, backend, c c++, c++, java, sdet, software developer, software development engineer, software engineer, software programmer
* The salary listed in the header is an estimate based on salary data for similar jobs in the same area. Salary or compensation data found in the job description is accurate. | s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316021.66/warc/CC-MAIN-20190821131745-20190821153745-00015.warc.gz | CC-MAIN-2019-35 | 2,977 | 2 |
http://cutephp.com/forum/index.php?showtopic=44352 | code | Topic: Activating License
I just bought a license for my website.
I logged in to the customer area and I correctly activated.
The problem is that I can't see the license in the list and I can't download the reg file.
Thank you in advance | s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743184.39/warc/CC-MAIN-20181116194306-20181116220306-00188.warc.gz | CC-MAIN-2018-47 | 237 | 5 |
https://nanonets.com/blog/semantic-image-segmentation-2020/ | code | A 2021 guide to Semantic Segmentation
Deep learning has been very successful when working with images as data and is currently at a stage where it works better than humans on multiple use-cases. The most important problems that humans have been interested in solving with computer vision are image classification, object detection and segmentation in the increasing order of their difficulty.
In the plain old task of image classification we are just interested in getting the labels of all the objects that are present in an image. In object detection we go a step further and try to find, along with the objects present in an image, the locations at which they are present with the help of bounding boxes. Image segmentation takes it to a new level by trying to accurately find the exact boundary of the objects in the image.
In this article we will go through this concept of image segmentation, discuss the relevant use-cases, different neural network architectures involved in achieving the results, metrics and datasets to explore.
What is image segmentation
We know an image is nothing but a collection of pixels. Image segmentation is the process of classifying each pixel in an image as belonging to a certain class, and hence can be thought of as a per-pixel classification problem. There are two types of segmentation techniques:
- Semantic segmentation :- Semantic segmentation is the process of classifying each pixel as belonging to a particular label. It doesn't differentiate between different instances of the same object. For example, if there are 2 cats in an image, semantic segmentation gives the same label to all the pixels of both cats
- Instance segmentation :- Instance segmentation differs from semantic segmentation in the sense that it gives a unique label to every instance of a particular object in the image. As can be seen in the image above all 3 dogs are assigned different colours i.e different labels. With semantic segmentation all of them would have been assigned the same colour.
So now we come to the question of where we would need this kind of algorithm.
Use-cases of image segmentation
Handwriting Recognition :- Junjo et al. demonstrated in their 2019 research paper how semantic segmentation is used to extract words and lines from handwritten documents in order to recognise handwritten characters
Google portrait mode :- There are many use-cases where it is absolutely essential to separate foreground from background. For example in Google's portrait mode we can see the background blurred out while the foreground remains unchanged to give a cool effect
YouTube stories :- Google recently released a feature YouTube stories for content creators to show different backgrounds while creating stories.
Virtual make-up :- Applying virtual lip-stick is possible now with the help of image segmentation
Virtual try-on :- Virtual try-on of clothes is an interesting feature which was available in stores using specialized hardware that creates a 3d model. But with deep learning and image segmentation the same can be obtained using just a 2d image
Visual Image Search :- The idea of segmenting out clothes is also used in image retrieval algorithms in eCommerce. For example Pinterest/Amazon allows you to upload any picture and get related similar looking products by doing an image search based on segmenting out the cloth portion
Self-driving cars :- Self driving cars need a complete understanding of their surroundings to a pixel perfect level. Hence image segmentation is used to identify lanes and other necessary information
Nanonets helps fortune 500 companies enable better customer experiences at scale using Semantic Segmentation.
Methods and Techniques
Before the advent of deep learning, classical machine learning techniques like SVM, Random Forest, K-means Clustering were used to solve the problem of image segmentation. But as with most of the image related problem statements deep learning has worked comprehensively better than the existing techniques and has become a norm now when dealing with Semantic Segmentation. Let's review the techniques which are being used to solve the problem
Fully Convolutional Network
The general architecture of a CNN consists of a few convolutional and pooling layers followed by a few fully connected layers at the end. The Fully Convolutional Network paper released in 2014 argues that the final fully connected layer can be thought of as doing a 1x1 convolution that covers the entire region.
Hence the final dense layers can be replaced by a convolution layer achieving the same result. But now the advantage of doing this is the size of input need not be fixed anymore. When involving dense layers the size of input is constrained and hence when a different sized input has to be provided it has to be resized. But by replacing a dense layer with convolution, this constraint doesn't exist.
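A minimal sketch of that idea, assuming PyTorch and arbitrary layer sizes (this is not the exact FCN architecture): the dense head fixes the input size, while an equivalent 1x1 convolutional head works for any input size and produces a spatial map of class scores.

```python
# Replacing a fully connected classifier head with an equivalent 1x1 convolution.
import torch
import torch.nn as nn

features = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
)

# Dense head: only works for one fixed input size (here 224x224 -> 64 x 112 x 112)
dense_head = nn.Linear(64 * 112 * 112, 21)

# Convolutional head: same role, but produces a spatial map of class scores
conv_head = nn.Conv2d(64, 21, kernel_size=1)

x = torch.randn(1, 3, 384, 384)        # any input size is now acceptable
heatmaps = conv_head(features(x))       # shape: (1, 21, 192, 192)
print(heatmaps.shape)
```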
Also when a bigger size of image is provided as input the output produced will be a feature map and not just a class output like for a normal input sized image. Also the observed behavior of the final feature map represents the heatmap of the required class i.e the position of the object is highlighted in the feature map. Since the output of the feature map is a heatmap of the required object it is valid information for our use-case of segmentation.
Since the feature map obtained at the output layer is down sampled due to the set of convolutions performed, we would want to up-sample it using an interpolation technique. Bilinear up sampling works, but the paper proposes using learned up sampling with deconvolution, which can even learn a non-linear up sampling.
The down sampling part of the network is called an encoder and the up sampling part is called a decoder. This is a pattern we will see in many architectures i.e reducing the size with encoder and then up sampling with decoder. In an ideal world we would not want to down sample using pooling and keep the same size throughout but that would lead to a huge amount of parameters and would be computationally infeasible.
Although the output results obtained have been decent the output observed is rough and not smooth. The reason for this is loss of information at the final feature layer due to downsampling by 32 times using convolution layers. Now it becomes very difficult for the network to do 32x upsampling by using this little information. This architecture is called FCN-32
To address this issue, the paper proposed 2 other architectures FCN-16, FCN-8. In FCN-16 information from the previous pooling layer is used along with the final feature map and hence now the task of the network is to learn 16x up sampling which is better compared to FCN-32. FCN-8 tries to make it even better by including information from one more previous pooling layer.
U-net builds on top of the fully convolutional network from above. It was built for medical purposes to find tumours in lungs or the brain. It also consists of an encoder which down-samples the input image to a feature map and the decoder which up samples the feature map to input image size using learned deconvolution layers.
The main contribution of the U-Net architecture is the shortcut connections. We saw above in FCN that since we down-sample an image as part of the encoder we lost a lot of information which can't be easily recovered in the encoder part. FCN tries to address this by taking information from pooling layers before the final feature layer.
U-Net proposes a new approach to solve this information loss problem. It proposes to send information to every up sampling layer in decoder from the corresponding down sampling layer in the encoder as can be seen in the figure above thus capturing finer information whilst also keeping the computation low. Since the layers at the beginning of the encoder would have more information they would bolster the up sampling operation of decoder by providing fine details corresponding to the input images thus improving the results a lot. The paper also suggested use of a novel loss function which we will discuss below.
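A minimal sketch of one such shortcut connection, with assumed channel sizes (not the exact U-Net configuration): the decoder up-samples the deeper feature map and concatenates the matching encoder feature map before convolving.

```python
# U-Net style skip connection: up-sample, concatenate encoder features, convolve.
import torch
import torch.nn as nn

encoder_features = torch.randn(1, 64, 128, 128)   # early, high-resolution features
bottleneck = torch.randn(1, 128, 64, 64)          # deeper, lower-resolution features

up = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)  # learned 2x up-sampling
decode = nn.Sequential(nn.Conv2d(128, 64, kernel_size=3, padding=1), nn.ReLU())

upsampled = up(bottleneck)                                  # (1, 64, 128, 128)
merged = torch.cat([encoder_features, upsampled], dim=1)    # (1, 128, 128, 128)
out = decode(merged)
print(out.shape)
```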
Deeplab, from a group of researchers at Google, proposes a multitude of techniques to improve the existing results and get finer output at lower computational costs. The 3 main improvements suggested as part of the research are
1) Atrous convolutions
2) Atrous Spatial Pyramidal Pooling
3) Conditional Random Fields usage for improving final output
Let's discuss each of these.
One of the major problems with the FCN approach is the excessive downsizing due to consecutive pooling operations. Due to the series of pooling operations the input image is down sampled by 32x, which is again up sampled to get the segmentation result. Downsampling by 32x results in a loss of information which is very crucial for getting fine output in a segmentation task. Also, deconvolution to up sample by 32x is a computationally and memory expensive operation since there are additional parameters involved in forming a learned up sampling.
The paper proposes the usage of Atrous convolution or the hole convolution or dilated convolution which helps in getting an understanding of large context using the same number of parameters.
Dilated convolution works by increasing the size of the filter by appending zeros (called holes) to fill the gap between parameters. The number of holes/zeros filled in between the filter parameters is called the dilation rate. When the rate is equal to 1 it is nothing but a normal convolution. When the rate is equal to 2, one zero is inserted between every other parameter, making the filter look like a 5x5 convolution. Now it has the capacity to capture the context of a 5x5 convolution while having only 3x3 convolution parameters. Similarly, for rate 3 the receptive field goes to 7x7.
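A short PyTorch sketch of this, with arbitrary channel sizes: a 3x3 kernel with dilation rate 2 or 3 covers a 5x5 or 7x7 receptive field while keeping the same nine weights per channel pair.

```python
# Dilated (atrous) convolutions: larger receptive field, same parameter count.
import torch
import torch.nn as nn

x = torch.randn(1, 64, 56, 56)

normal = nn.Conv2d(64, 64, kernel_size=3, padding=1)                    # rate 1
dilated = nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2)       # rate 2, 5x5 field
very_dilated = nn.Conv2d(64, 64, kernel_size=3, padding=3, dilation=3)  # rate 3, 7x7 field

# All three keep the spatial size, and all three have the same number of weights
print(normal(x).shape, dilated(x).shape, very_dilated(x).shape)
print(sum(p.numel() for p in normal.parameters()),
      sum(p.numel() for p in dilated.parameters()))
```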
In Deeplab last pooling layers are replaced to have stride 1 instead of 2 thereby keeping the down sampling rate to only 8x. Then a series of atrous convolutions are applied to capture the larger context. For training the output labelled mask is down sampled by 8x to compare each pixel. For inference, bilinear up sampling is used to produce output of the same size which gives decent enough results at lower computational/memory costs since bilinear up sampling doesn't need any parameters as opposed to deconvolution for up sampling.
Spatial Pyramidal Pooling is a concept introduced in SPPNet to capture multi-scale information from a feature map. Before the introduction of SPP input images at different resolutions are supplied and the computed feature maps are used together to get the multi-scale information but this takes more computation and time. With Spatial Pyramidal Pooling multi-scale information can be captured with a single input image.
With the SPP module the network produces 3 outputs of dimensions 1x1(i.e GAP), 2x2 and 4x4. These values are concatenated by converting to a 1d vector thus capturing information at multiple scales. Another advantage of using SPP is input images of any size can be provided.
ASPP takes the concept of fusing information from different scales and applies it to Atrous convolutions. The input is convolved with different dilation rates and the outputs of these are fused together.
As can be seen the input is convolved with 3x3 filters of dilation rates 6, 12, 18 and 24 and the outputs are concatenated together since they are of same size. A 1x1 convolution output is also added to the fused output. To also provide the global information, the GAP output is also added to above after up sampling. The fused output of 3x3 varied dilated outputs, 1x1 and GAP output is passed through 1x1 convolution to get to the required number of channels.
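A simplified ASPP-style module might look like the following sketch (assuming PyTorch; the real Deeplab version also adds batch normalization and the image-level pooling branch described above):

```python
# Simplified ASPP: parallel atrous branches fused by concatenation and a 1x1 conv.
import torch
import torch.nn as nn

class SimpleASPP(nn.Module):
    def __init__(self, in_ch=256, out_ch=256, rates=(6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, kernel_size=1)] +
            [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
             for r in rates])
        self.project = nn.Conv2d(out_ch * (len(rates) + 1), out_ch, kernel_size=1)

    def forward(self, x):
        # Every branch keeps the spatial size, so the outputs can be concatenated
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

print(SimpleASPP()(torch.randn(1, 256, 33, 33)).shape)  # (1, 256, 33, 33)
```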
Since the required image to be segmented can be of any size in the input the multi-scale information from ASPP helps in improving the results.
Improving output with CRF
Pooling is an operation which helps in reducing the number of parameters in a neural network but it also brings a property of invariance along with it. Invariance is the quality of a neural network being unaffected by slight translations in input. Due to this property obtained with pooling the segmentation output obtained by a neural network is coarse and the boundaries are not concretely defined.
To deal with this the paper proposes the use of the graphical model CRF. Conditional Random Fields operate as a post-processing step and try to improve the results produced to define sharper boundaries. It works by classifying a pixel based not only on its label but also on other pixel labels. As can be seen from the above figure, the coarse boundary produced by the neural network gets more refined after passing through CRF.
Deeplab-v3 introduced batch normalization and suggested dilation rate multiplied by (1,2,4) inside each layer in a Resnet block. Also adding image level features to ASPP module which was discussed in the above discussion on ASPP was proposed as part of this paper
Deeplab-v3+ suggested to have a decoder instead of plain bilinear up sampling 16x. The decoder takes a hint from the decoder used by architectures like U-Net which take information from encoder layers to improve the results. The encoder output is up sampled 4x using bilinear up sampling and concatenated with the features from encoder which is again up sampled 4x after performing a 3x3 convolution. This approach yields better results than a direct 16x up sampling. Also modified Xception architecture is proposed to be used instead of Resnet as part of encoder and depthwise separable convolutions are now used on top of Atrous convolutions to reduce the number of computations.
Global Convolution Network
Semantic segmentation involves performing two tasks concurrently: classification and localization.
The classification networks are created to be invariant to translation and rotation thus giving no importance to location information whereas the localization involves getting accurate details w.r.t the location. Thus inherently these two tasks are contradictory. Most segmentation algorithms give more importance to localization i.e the second in the above figure and thus lose sight of global context. In this work the author proposes a way to give importance to classification task too while at the same time not losing the localization information
The author proposes to achieve this by using large kernels as part of the network thus enabling dense connections and hence more information. This is achieved with the help of a GCN block as can be seen in the above figure. GCN block can be thought of as a k x k convolution filter where k can be a number bigger than 3. To reduce the number of parameters a k x k filter is further split into 1 x k and k x 1, kx1 and 1xk blocks which are then summed up. Thus by increasing value k, larger context is captured.
In addition, the author proposes a Boundary Refinement block which is similar to a residual block seen in Resnet consisting of a shortcut connection and a residual connection which are summed up to get the result. It is observed that having a Boundary Refinement block resulted in improving the results at the boundary of segmentation.
Results showed that GCN block improved the classification accuracy of pixels closer to the center of object indicating the improvement caused due to capturing long range context whereas Boundary Refinement block helped in improving accuracy of pixels closer to boundary.
See More Than Once – KSAC for Semantic Segmentation
Deeplab family uses ASPP to have multiple receptive fields capture information using different atrous convolution rates. Although ASPP has been significantly useful in improving the segmentation of results there are some inherent problems caused due to the architecture. There is no information shared across the different parallel layers in ASPP thus affecting the generalization power of the kernels in each layer. Also, since each layer caters to different sets of training samples (smaller objects to smaller atrous rates and bigger objects to bigger atrous rates), the amount of data for each parallel layer would be less, thus affecting the overall generalizability. Also, the number of parameters in the network increases linearly with the number of parallel branches (i.e. the number of dilation rates used) and thus can lead to overfitting.
To handle all these issues the author proposes a novel network structure called Kernel-Sharing Atrous Convolution (KSAC). As can be seen in the above figure, instead of having a different kernel for each parallel layer as in ASPP, a single kernel is shared across them, thus improving the generalization capability of the network. By using KSAC instead of ASPP, 62% of the parameters are saved when dilation rates of 6, 12 and 18 are used.
Another advantage of using a KSAC structure is the number of parameters are independent of the number of dilation rates used. Thus we can add as many rates as possible without increasing the model size. ASPP gives best results with rates 6,12,18 but accuracy decreases with 6,12,18,24 indicating possible overfitting. But KSAC accuracy still improves considerably indicating the enhanced generalization capability.
This kernel sharing technique can also be seen as an augmentation in the feature space since the same kernel is applied over multiple rates. Similar to how input augmentation gives better results, feature augmentation performed in the network should help improve the representation capability of the network.
For use cases like self-driving cars, robotics etc. there is a need for real-time segmentation on the observed video. The architectures discussed so far are pretty much designed for accuracy and not for speed. So if they are applied on a per-frame basis on a video the result would come at very low speed.
Also generally in a video there is a lot of overlap in scenes across consecutive frames which could be used for improving the results and speed which won't come into picture if analysis is done on a per-frame basis. Using these cues let's discuss architectures which are specifically designed for videos
Spatio-Temporal FCN proposes to use FCN along with LSTM to do video segmentation. We are already aware of how FCN can be used to extract features for segmenting an image. LSTMs are a kind of neural network which can capture sequential information over time. STFCN combines the power of FCN with LSTM to capture both the spatial information and the temporal information
As can be seen from the above figure, STFCN consists of an FCN and a spatio-temporal module followed by deconvolution. The feature map produced by the FCN is sent to the spatio-temporal module, which also has an input from the previous frame's module. Based on both these inputs, the module captures the temporal information in addition to the spatial information and sends it on; this is up sampled to the original size of the image using deconvolution, similar to how it's done in FCN
Since both FCN and LSTM are working together as part of STFCN the network is end to end trainable and outperforms single frame segmentation approaches. There are similar approaches where LSTM is replaced by GRU but the concept is same of capturing both the spatial and temporal information
Semantic Video CNNs through Representation Warping
This paper proposes the use of optical flow across adjacent frames as an extra input to improve the segmentation results
The approach suggested can be roped in to any standard architecture as a plug-in. The key ingredient that is at play is the NetWarp module. To compute the segmentation map the optical flow between the current frame and previous frame is calculated i.e Ft and is passed through a FlowCNN to get Λ(Ft) . This process is called Flow Transformation. This value is passed through a warp module which also takes as input the feature map of an intermediate layer calculated by passing through the network. This gives a warped feature map which is then combined with the intermediate feature map of the current layer and the entire network is end to end trained. This architecture achieved SOTA results on CamVid and Cityscapes video benchmark datasets.
Clockwork Convnets for Video Semantic Segmentation
This paper proposes to improve the speed of execution of a neural network for segmentation tasks on videos by taking advantage of the fact that semantic information in a video changes slowly compared to pixel level information. So the information in the final layers changes at a much slower pace compared to the beginning layers. The paper suggests updating different sets of layers at different rates (clocks).
The above figure represents the rate of change comparison for a mid level layer pool4 and a deep layer fc7. On the left we see that since there is a lot of change across the frames both the layers show a change but the change for pool4 is higher. In the right we see that there is not a lot of change across the frames. Hence pool4 shows marginal change whereas fc7 shows almost nil change.
The research utilizes this concept and suggests that in cases where there is not much of a change across the frames there is no need of computing the features/outputs again and the cached values from the previous frame can be used. Since the rate of change varies with layers different clocks can be set for different sets of layers. When the clock ticks the new outputs are calculated, otherwise the cached results are used. The rate of clock ticks can be statically fixed or can be dynamically learnt
Low-Latency Video Semantic Segmentation
This paper improves on top of the above discussion by adaptively selecting the frames to compute the segmentation map or to use the cached result instead of using a fixed timer or a heuristic.
The paper proposes to divide the network into 2 parts, low level features and high level features. The cost of computing low level features in a network is much less compared to the higher features. The research suggests using the low level network features as an indicator of change in the segmentation map. In their observations they found a strong correlation between change in the low level features and change in the segmentation map. So, to decide whether the higher level features need to be recalculated, the difference in the lower level features across 2 frames is computed and compared against a particular threshold. This entire process is automated by a small neural network whose task is to take the lower features of two frames and predict whether the higher features should be computed or not. Since the network decision is based on the input frames, the decision taken is dynamic compared to the above approach.
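A conceptual sketch of that scheduling logic (not the paper's code; the threshold and the feature-difference measure are placeholders, and the two networks are passed in as generic callables):

```python
# Reuse the cached segmentation when the cheap low-level features barely change.
import numpy as np

def segment_video(frames, low_level_net, high_level_net, threshold=0.05):
    cached_low, cached_mask = None, None
    for frame in frames:
        low = low_level_net(frame)                   # cheap, computed every frame
        if cached_low is not None:
            change = np.abs(low - cached_low).mean() # proxy for scene change
            if change < threshold:
                yield cached_mask                    # reuse the expensive result
                continue
        cached_low = low
        cached_mask = high_level_net(low)            # expensive, run only when needed
        yield cached_mask
```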
Segmentation for point clouds
Data coming from a sensor such as lidar is stored in a format called Point Cloud. Point cloud is nothing but a collection of unordered set of 3d data points(or any dimension). It is a sparse representation of the scene in 3d and CNN can't be directly applied in such a case. Also any architecture designed to deal with point clouds should take into consideration that it is an unordered set and hence can have a lot of possible permutations. So the network should be permutation invariant. Also the points defined in the point cloud can be described by the distance between them. So closer points in general carry useful information which is useful for segmentation tasks
PointNet is an important paper in the history of research on point clouds using deep learning to solve the tasks of classification and segmentation. Let's study the architecture of Pointnet
Input of the network for n points is an n x 3 matrix. n x 3 matrix is mapped to n x 64 using a shared multi-perceptron layer(fully connected network) which is then mapped to n x 64 and then to n x 128 and n x 1024. Max pooling is applied to get a 1024 vector which is converted to k outputs by passing through MLP's with sizes 512, 256 and k. Finally k class outputs are produced similar to any classification network.
Classification deals only with the global features but segmentation needs local features as well. So the local features from the intermediate layer at n x 64 are concatenated with the global features to get an n x 1088 matrix, which is sent through MLPs of 512 and 256 to get to n x 256 and then through MLPs of 128 and m to give m output classes for every point in the point cloud.
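A simplified sketch of this flow, with the shared per-point MLPs implemented as 1x1 convolutions and the input/feature transforms omitted (channel sizes follow the description above, but this is not the reference PointNet code):

```python
# PointNet-style sketch: shared MLPs, global max pool, local+global concatenation.
import torch
import torch.nn as nn

n, k = 1024, 13                                    # points, segmentation classes
points = torch.randn(1, 3, n)                      # (batch, xyz, points)

local_mlp = nn.Sequential(nn.Conv1d(3, 64, 1), nn.ReLU())          # shared per point
global_mlp = nn.Sequential(nn.Conv1d(64, 128, 1), nn.ReLU(),
                           nn.Conv1d(128, 1024, 1), nn.ReLU())
seg_head = nn.Sequential(nn.Conv1d(64 + 1024, 512, 1), nn.ReLU(),
                         nn.Conv1d(512, 256, 1), nn.ReLU(),
                         nn.Conv1d(256, k, 1))

local = local_mlp(points)                           # (1, 64, n) per-point features
global_vec = global_mlp(local).max(dim=2).values    # (1, 1024) global feature
fused = torch.cat([local, global_vec.unsqueeze(2).expand(-1, -1, n)], dim=1)
print(seg_head(fused).shape)                        # (1, k, n) class scores per point
```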
Also the network involves an input transform and feature transform as part of the network whose task is to not change the shape of input but add invariance to affine transformations i.e translation, rotation etc.
A-CNN proposes the usage of Annular convolutions to capture spatial information. We know from CNN that convolution operations capture the local information which is essential to get an understanding of the image. A-CNN devised a new convolution called Annular convolution which is applied to neighbourhood points in a point-cloud.
The architecture takes as input n x 3 points and finds normals for them which is used for ordering of points. A subsample of points is taken using the FPS algorithm resulting in ni x 3 points. On these annular convolution is applied to increase to 128 dimensions. Annular convolution is performed on the neighbourhood points which are determined using a KNN algorithm.
Another set of the above operations are performed to increase the dimensions to 256. Then an mlp is applied to change the dimensions to 1024 and pooling is applied to get a 1024 global vector similar to point-cloud. This entire part is considered the encoder. For classification the encoder global output is passed through mlp to get c class outputs. For segmentation task both the global and local features are considered similar to PointCNN and is then passed through an MLP to get m class outputs for each point.
Let's discuss the metrics which are generally used to understand and evaluate the results of a model.
Pixel accuracy is the most basic metric which can be used to validate the results. Accuracy is obtained by taking the ratio of correctly classified pixels w.r.t total pixels
Accuracy = (TP+TN)/(TP+TN+FP+FN)
The main disadvantage of using such a technique is the result might look good if one class overpowers the other. Say for example the background class covers 90% of the input image we can get an accuracy of 90% by just classifying every pixel as background
Intersection Over Union
IOU is defined as the ratio of intersection of ground truth and predicted segmentation outputs over their union. If we are calculating for multiple classes, IOU of each class is calculated and their mean is taken. It is a better metric compared to pixel accuracy as if every pixel is given as background in a 2 class input the IOU value is (90/100+0/100)/2 i.e 45% IOU which gives a better representation as compared to 90% accuracy.
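A small NumPy sketch of both metrics, reproducing the 2-class example above:

```python
# Pixel accuracy and mean IoU computed from flat label arrays.
import numpy as np

def pixel_accuracy(pred, target):
    return (pred == target).mean()

def mean_iou(pred, target, num_classes):
    ious = []
    for c in range(num_classes):
        intersection = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                        # ignore classes absent from both
            ious.append(intersection / union)
    return np.mean(ious)

# 2-class example: predicting everything as background (class 0)
target = np.array([0] * 90 + [1] * 10)
pred = np.zeros_like(target)
print(pixel_accuracy(pred, target))           # 0.9
print(mean_iou(pred, target, num_classes=2))  # 0.45
```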
Frequency weighted IOU
This is an extension over mean IOU which we discussed and is used to combat class imbalance. If one class dominates most part of the images in a dataset like for example background, it needs to be weighed down compared to other classes. Thus instead of taking the mean of all the class results, a weighted mean is taken based on the frequency of the class region in the dataset.
The metric popularly used in classification F1 Score can be used for segmentation task as well to deal with class imbalance.
Area under the Precision - Recall curve for a chosen threshold IOU average over different classes is used for validating the results.
Loss function is used to guide the neural network towards optimization. Let's discuss a few popular loss functions for semantic segmentation task.
Cross Entropy Loss
Simple average of cross-entropy classification loss for every pixel in the image can be used as an overall function. But this again suffers due to class imbalance which FCN proposes to rectify using class weights
UNet tries to improve on this by giving more weight-age to the pixels near the border which are part of the boundary as compared to inner pixels as this makes the network focus more on identifying borders and not give a coarse output.
Focal loss was designed to make the network focus on hard examples by giving more weight-age and also to deal with extreme class imbalance observed in single-stage object detectors. The same can be applied in semantic segmentation tasks as well
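A sketch of a per-pixel binary focal loss (the gamma and alpha values here are common defaults, not values prescribed by this article):

```python
# Binary focal loss: down-weights easy, well-classified pixels.
import torch

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    probs = torch.sigmoid(logits)
    pt = probs * targets + (1 - probs) * (1 - targets)       # prob of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)  # class balancing term
    return (-alpha_t * (1 - pt) ** gamma * torch.log(pt.clamp(min=1e-6))).mean()

logits = torch.randn(2, 1, 64, 64)
targets = torch.randint(0, 2, (2, 1, 64, 64)).float()
print(focal_loss(logits, targets))
```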
Dice function is nothing but F1 score. This loss function directly tries to optimize F1 score. Similarly direct IOU score can be used to run optimization as well
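A sketch of a soft Dice loss for binary masks (the smoothing constant is a common convention to avoid division by zero, not something specified here):

```python
# Soft Dice loss: 1 minus the Dice coefficient computed on probabilities.
import torch

def dice_loss(probs, targets, eps=1.0):
    intersection = (probs * targets).sum()
    union = probs.sum() + targets.sum()
    dice = (2 * intersection + eps) / (union + eps)
    return 1 - dice

probs = torch.sigmoid(torch.randn(2, 1, 64, 64))
targets = torch.randint(0, 2, (2, 1, 64, 64)).float()
print(dice_loss(probs, targets))
```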
It is a variant of Dice loss which gives different weight-age to FN and FP
It is a technique used to measure similarity between boundaries of ground truth and predicted. It is calculated by finding out the max distance from any point in one boundary to the closest point in the other. Reducing directly the boundary loss function is a recent trend and has been shown to give better results especially in use-cases like medical image segmentation where identifying the exact boundary plays a key role.
The advantage of using a boundary loss as compared to a region based loss like IOU or Dice Loss is it is unaffected by class imbalance since the entire region is not considered for optimization, only the boundary is considered.
The two terms considered here are for two boundaries i.e the ground truth and the output prediction.
Image annotation tool written in python.
Supports polygon annotation.
Open Source and free.
Runs on Windows, Mac, Ubuntu or via Anaconda, Docker
Link :- https://github.com/wkentaro/labelme
Computer Vision Annotation Tool :-
Video and image annotation tool developed by Intel
Free and available online
Runs on Windows, Mac and Ubuntu
Link :- https://github.com/opencv/cvat
Vgg image annotator :-
Free open source image annotation tool
Simple html page < 200kb and can run offline
Supports polygon annotation and points.
Link :- https://github.com/ox-vgg/via
Paid annotation tool for Mac
Can use core ML models to pre-annotate the images
Supports polygons, cubic-bezier, lines, and points
Link :- https://github.com/ryouchinsa/Rectlabel-support
Paid annotation tool
Supports pen tool for faster and accurate annotation
Link :- https://labelbox.com/product/image-segmentation
As part of this section let's discuss various popular and diverse datasets available in the public which one can use to get started with training.
This dataset is an extension of Pascal VOC 2010 dataset and goes beyond the original dataset by providing annotations for the whole scene and has 400+ classes of real-world data.
The COCO stuff dataset has 164k images of the original COCO dataset with pixel level annotations and is a common benchmark dataset. It covers 172 classes: 80 thing classes, 91 stuff classes and 1 class 'unlabeled'
Link :- http://cocodataset.org/
This dataset consists of segmentation ground truths for roads, lanes, vehicles and objects on road. The dataset contains 30 classes and of 50 cities collected over different environmental and weather conditions. Has also a video dataset of finely annotated images which can be used for video segmentation. KITTI and CamVid are similar kinds of datasets which can be used for training self-driving cars.
The dataset was created as part of a challenge to identify tumor lesions from liver CT scans. The dataset contains 130 CT scans of training data and 70 CT scans of testing data.
Cloth Co-Parsing is a dataset created as part of the research paper Clothing Co-Parsing by Joint Image Segmentation and Labeling. The dataset contains 1000+ images with pixel-level annotations for a total of 59 tags.
A dataset created for the task of skin segmentation, based on images from Google, containing 32 face photos and 46 family photos.
Inria Aerial Image Labeling
A dataset of aerial segmentation maps created from public domain images. Has a coverage of 810 sq km and has 2 classes building and not-building.
This dataset contains the point clouds of six large-scale indoor areas across 3 buildings, with over 70,000 images.
We have discussed a taxonomy of different algorithms which can be used for solving the use-case of semantic segmentation, be it on images, videos or point clouds, along with their contributions and limitations. We also looked through the ways to evaluate the results and the datasets to get started on. This should give a comprehensive understanding of semantic segmentation as a topic in general.
To get a list of more resources for semantic segmentation, get started with https://github.com/mrgloom/awesome-semantic-segmentation.
- An overview of semantic image segmentation
- Semantic segmentation - Popular architectures
- A Beginner's guide to Deep Learning based Semantic Segmentation using Keras
Added further reading material. | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100056.38/warc/CC-MAIN-20231129041834-20231129071834-00862.warc.gz | CC-MAIN-2023-50 | 32,916 | 160 |
https://blog.mbedded.ninja/programming/languages/c-sharp/binding/ | code | The best tutorial of Binding in WPF that I’ve found is here. It includes a project download which is great for binding that works out-of-the-box, which you can then hack/adjust to your own needs.
The ObservableCollection<T> class is used a lot when it comes to binding.
Searching For And Selecting A Particular Element
The following code searches through an observable collection and finds items based on a string match against one of the elements' properties.
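A minimal sketch of such a search, assuming an ObservableCollection<Person> called people where Person has a Name property (both names are placeholders):

// Requires: using System.Linq; using System.Collections.Generic; using System.Collections.ObjectModel;
private List<Person> FindMatches(ObservableCollection<Person> people, string searchText)
{
    // Return every item whose Name contains the search text
    return people.Where(p => p.Name != null && p.Name.Contains(searchText)).ToList();
}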
Obtaining The Current DataContext For A UI Element
Obtaining the current DataContext for a particular UI element is useful when you want to set up binding. The following code shows how to get the data context, obtaining the data context for the entire window (because it uses this, but you could replace this with any particular UI element if you wish).
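A minimal sketch, assuming the window's DataContext is a view model class called MyViewModel (the class name is a placeholder):

// Inside the window's code-behind; 'this' is the Window itself
var viewModel = this.DataContext as MyViewModel;
if (viewModel != null)
{
    // The view model is now available for wiring up bindings in code
}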
This work is licensed under a Creative Commons Attribution 4.0 International License . | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100264.9/warc/CC-MAIN-20231201021234-20231201051234-00535.warc.gz | CC-MAIN-2023-50 | 892 | 8 |
https://knowledge.broadcom.com/external/article/218857/during-installation-the-scheduler-starts.html | code | When installing an AutoSys Scheduler, the installer gives you a choice of whether to start the Scheduler after installation or not:
Start the scheduler following installation: Y/N
A case was reported where the Scheduler started and ran for approx 20 seconds before stopping again, despite the following install selections:
Set scheduler to start at system startup: Y
Start the scheduler following installation: N
The database was not empty, as it had been copied over from another instance. This caused jobs to run for a period of about 20 seconds.
Release : 11.3.6
Component : CA Workload Automation AE (AutoSys)
If you select EEM Security as part of the installation, the Scheduler does get started and runs briefly in order to register AutoSys to EEM. If there are jobs defined in the AutoSys database, some jobs may be triggered.
The following is an example where this occurred.
If you select EEM activation during installation, there is no way to avoid the Scheduler being activated temporarily to activate EEM.
The best way to avoid unwanted jobs being triggered by the Scheduler in the above scenario would be to follow one of these two install paths:
1) Point your new AutoSys installation to an empty database. The installation will activate the Scheduler for a few seconds to register to EEM, but no jobs will run as the database will not contain any. After the installation, you can copy over the data to the new AutoSys database.
2) Select Native Security during the installation and activate EEM post-install using the as_safetool method as documented here: | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510368.33/warc/CC-MAIN-20230928063033-20230928093033-00329.warc.gz | CC-MAIN-2023-40 | 1,567 | 14 |
http://joeadok.deviantart.com/art/Eggman-s-Lackeys-271481779 | code | Here's Snively and Lien-da as they appear in the Archie Sonic comics. Sort of, considering how the art styles shift between issues and artists
Making these was quite a few lessons in Blender and my own ignorance, but I'm reasonably happy with how they came out.
Can anyone recognise their poses from somewhere?
[Modelled and rendered in Blender 2.48] | s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463122.1/warc/CC-MAIN-20150226074103-00191-ip-10-28-5-156.ec2.internal.warc.gz | CC-MAIN-2015-11 | 349 | 4 |
https://archived.forum.manjaro.org/t/tens-trusted-end-node-security/120655 | code | Despite the fact this is waving a cape at a bull - the product's very well made.
So please - no heated arguments for or against government institutions.
The stand-alone encryption wizard is state of the art - giving military grade encryption a brand new face.
The encryption wizard is a Java program and can be run on any device with a Java environment. If you raise a concerned voice - yes, Java in the browser can be insecure - but Java on the computer is not, and the encryption wizard does not make use of anything browser related - I have tested it.
Don't you turn into ...
How do I know your software isn't full of backdoors?
Because doing so would violate principles of enlightened self-interest in exchange for no benefit. In other words, "we don't do that because that would be dumb".
The AES algorithms and their underlying Rijndael ciphers are well known, publically available, and extensively analyzed. No feasible attacks against AES have yet been demonstrated. The attacks which have been published to date fall into two broad categories. The first are academic/theoretical (in which the actual attack would take millennia, require calculating power that makes a Star Trek computer look like a microwave oven, or both). Technically this is faster than brute-forcing the keys, but still not practical. (my emphasis)
Some concluding observations from a pragmatic point of view:
- Deliberate backdoors are a violation of our own tenets of cybersecurity.
- If we were willing to hide backdoors in public software, we'd be willing to lie about it on a public webpage. Sending us an email to ask if we have backdoors is not a useful thing for you to do with your time.
- A backdoor to a system needs a key. If the key to a backdoor were to get out (whether by accident, malfeasance, or disgruntled employees is irrelevant), then whatever is protected by that system becomes vulnerable. Given that the primary use of Encryption Wizard is to protect sensitive information relevant to the DoD, inserting a master backdoor would be dangerously risky and profoundly shortsighted.
https://mirsk.zendesk.com/hc/en-us/articles/212625763-What-is-the-Note-field-how-long-can-it-be- | code | The note field may be used by both dictating and transcribing users. For dictating users it is a way to send supplementary information or instructions to the transcriber. The transcribing user can add to existing notes or make new notes on a dictation. The note should be no more than 200 characters long.
Articles in this section
- Controlling the audio player
- Remote Desktop Foot Switch Support
- 'No audio device' error message
- Hey! where's my signup mail ?!
- I get the message: "Can not upload recording. You need to fill out one or more fields."
- Which network ports need to be open?
- Cannot log in from Internet Explorer 10 (IE10)
- Priority does not show in dictation client
- Dictation Type does not show in dictation client
- What Browsers are supported? | s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027323067.50/warc/CC-MAIN-20190825042326-20190825064326-00257.warc.gz | CC-MAIN-2019-35 | 765 | 12 |
http://serverfault.com/questions/419539/accessing-windows-server-2008-r2-virtual-servers-using-hyper-v-manager-for-win | code | I have several Windows Server 2008 R2 servers which I previously accessed using Hyper-V Manager from Windows 7. After upgrading to Windows 8 and enabling the Hyper-V feature, this is no longer possible. I get an error message stating:
“This version of Hyper-V manager cannot be used to manage servers running Hyper-V in Windows Server 2008 or Windows Server 2008 R2.”
How do I solve this problem and access my virtual servers using Hyper-V manager for Windows 8? | s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257827080.38/warc/CC-MAIN-20160723071027-00289-ip-10-185-27-174.ec2.internal.warc.gz | CC-MAIN-2016-30 | 462 | 3 |
https://www.fi.freelancer.com/projects/android/build-android-app-with-admin-34344858 | code | I want an Android app like oraan, mostly the same but with some features changed; check the app link
[login to view URL]
If any person interested than bid my project
I do not have any advance money. Those who want an advance, please do not bid!
13 freelancers have bid an average of ₹9038 for this job
Hello I'm interested in doing your project As a mobile developer and firebase e-commerce specialist I know the things you mention. Please share details for further processing. | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00453.warc.gz | CC-MAIN-2022-40 | 460 | 6 |
https://www.confreaks.tv/videos/railsconf2013-tdding-ios-apps-for-fun-and-profit-with-rubymotion | code | As Ruby Developer I've had a pretty involved relationship with my Mac. I own iPads and iPhones since Apple started to make them. A few years back I told myself I was going to build apps for the Mac/iPhone/iPad but then reality sunk in when I started learning Objective-C and using XCode. The environment (and the language) felt like a trip back to 1995.
If you are a Web developer used to working with dynamically-typed, lightweight languages, following agile practices like Test-Driven Development, and comfortable with a Unix Shell, then jumping into a development world with an ugly cousin of C++ and an IDE that looks like an F16 cockpit just doesn’t seem appealing.
Luckily for us there is an alternative in RubyMotion, a Ruby-based toolchain for iOS that brings a Ruby on Rails style of development to the world of iOS application development.
In this talk I will show you how you can use well engrained Ruby practices like TDD to build iOS Apps with RubyMotion. | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587719.64/warc/CC-MAIN-20211025154225-20211025184225-00391.warc.gz | CC-MAIN-2021-43 | 970 | 4 |
https://libguides.lib.msu.edu/raspberry_pi/take_photo_w_python | code | First open the python editor, Thonny.
This is what the editor looks like.
Let's start coding! First, import the packages we will use.
Next, set up the camera. This is also called instantiating the camera.
Now, write the code for taking the photo.
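A minimal sketch of that code, assuming the standard picamera library (the output path is just an example):

from picamera import PiCamera
from time import sleep

camera = PiCamera()          # instantiate the camera

camera.start_preview()
sleep(2)                     # give the sensor a moment to adjust to the light
camera.capture('/home/pi/Desktop/image.jpg')   # take the photo and save it
camera.stop_preview()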
Before testing the code, it needs to be saved. Save the file in the folder where you want your image to be saved.
Click the run button to start the program.
Once your code is finished running, the new image will be saved in your folder!
Now, let's add a little more code to create a time lapse video.
Here we are adding a for loop which repeats the code indented underneath it as many times as you tell it. The range (0, 15) will run this code 15 times. However, we also need to change the file name of the image or else the image will be overwritten 15 times! Here I am adding the current value of i to the end of the filename. The value of i will move through the range during the for loop.
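A sketch of that change (the file name pattern and the delay between shots are assumptions):

from picamera import PiCamera
from time import sleep

camera = PiCamera()

for i in range(0, 15):
    # append the current value of i so each shot gets its own file name
    camera.capture('image' + str(i) + '.jpg')
    sleep(5)                 # pause between shots -- adjust to suit your time lapse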
After running the new code (saved as a new python file in a different folder) you can see the Pi has taken a series of images all named in sequential order. The numbers start with 0 and end with 14 which is a total of 15 images.
The images are great, but it would be even better if python stitched them all together into a gif!
To do this, we will use the ImageMagick package. However, there are many options out there.
You may need to install ImageMagick; simply type "sudo apt-get install imagemagick -y" in the Pi terminal.
Once you have installed the package, write the code to call it from your Python program.
After the end of the for loop, add the code to stitch together the images. This can take a little while so I am adding a print statement below so that I know when it is finished.
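A sketch of that step, calling ImageMagick's convert command through the os module (the delay value and file names are assumptions):

import os

# Stitch the numbered images into an animated gif (this runs after the for loop finishes)
os.system('convert -delay 10 -loop 0 image*.jpg timelapse.gif')
print('Time lapse gif created')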
After running the code, you can see the print statement in the shell at the bottom of Thonny.
Go to your folder and view your timelapse!
Next, you can modify the code by setting the time lapse to start after pressing a button, to run only during specific hours, or to tweet you the pictures! The options are really endless. I'd recommend picking up The Official Raspberry Pi Camera book. You can download it for free and see tons of great projects along with the full code and an explanation of what the code does. You can make an underwater camera, an infrared bird box camera, a wildlife camera, and so much more!
You can also use computer vision on the Pi for more advanced image detection. A wonderful computer vision library is OpenCV. It can be a bit of a trick to install so follow the next page of the libguide for instructions. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817153.39/warc/CC-MAIN-20240417110701-20240417140701-00579.warc.gz | CC-MAIN-2024-18 | 2,572 | 20 |
http://www.cinemacenter.org/movies/movies/now-showing.php | code | Life Itself - One Week Only!
This is a documentary film that recounts the inspiring and entertaining life of world-renowned film critic and social commentator Roger Ebert - a story that is by turns personal, funny, painful, and transcendent. Based on his bestselling memoir of the same name, LIFE ITSELF, explores the legacy of Roger Ebert's life, from his Pulitzer Prize-winning film criticism at the Chicago Sun-Times to becoming one of the most influential cultural voices in America.
"3.5/4 Stars. Far more than just a tribute to the career of the world's most famous and influential film critic, the often revelatory 'Life Itself' is also a remarkably intimate portrait of a life well lived - right up to the very last moment." - Chicago Sun-Times.
"It's another mark of the director's skill that he took me deeper into aspects of that life that I thought I knew the most about." - Los Angeles Times.
"A fulsome appreciation of the life and work of the world's most famous film critic." - Hollywood Reporter.
112 min., Rated R.
Monty Python Live (Mostly)
August 6th at 7:30pm & August 10th at 4pm
No Cinema Center passes, Patron Member discount or Lincoln discount can be applied to the purchase of this event's ticket.
For the first time in more than three decades, comedy legends Monty Python will perform live on stage together this year. Broadcast from London’s O2 Arena, Monty Python Live (mostly) will play in cinemas around the globe.
At a combined age of just 358, John Cleese, Terry Gilliam, Eric Idle, Terry Jones and Michael Palin will once again perform some of their greatest hits, with modern, topical, Pythonesque twists.
Monty Python are rightfully regarded as among the world’s finest-ever comedians. They influenced a generation and revolutionized comedy. Their eagerly awaited reunion promises to be among the biggest live events of 2014.
Coherence - Last Shows!
On the night of an astronomical anomaly, eight friends at a dinner party experience a troubling chain of reality bending events. Part cerebral sci-fi and part relationship drama, "Coherence" is a tightly focused, intimately shot film whose tension intensely ratchets up as its numerous complex mysteries unfold.
"The conceit is alluringly mind-bending without ever seeming off-puttingly brainy." - Los Angeles Times.
"An ingenious micro-budget science-fiction nerve-jangler which takes place entirely at a suburban dinner party, Coherence is a testament to the power of smart ideas and strong ensemble acting over expensive visual pyrotechnics." - Hollywood Reporter.
"An uncommonly clever genre movie, reliant not on special effects-of which there are basically none-but on heavy doses of paranoia." - AV Club.
89 min., Unrated.
Belle - Last Shows!
The true story of Dido Elizabeth Belle (Gugu Mbatha-Raw), the illegitimate mixed race daughter of Admiral Sir John Lindsay (Matthew Goode). Raised by her aristocratic great-uncle Lord Mansfield (Tom Wilkinson) and his wife (Emily Watson), Belle's lineage affords her certain privileges, yet her status prevents her from the traditions of noble social standing.
"The weave of the personal and the political finally proves as irresistible as it is moving, partly because it has been drawn from extraordinary life." - New York Times.
"The performances, from a top cast including Matthew Goode, Miranda Richardson, Tom Felton and Emily Watson, are predictably flawless. The luminous Mbatha-Raw more than holds her own." - Minneapolis Star Tribune.
"A lavish 18th-century historical piece that blends a Jane Austen-like romance with a political drama that explores slavery from a unique perspective." - Toronto Star.
105 min., Rated PG. | s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274866.27/warc/CC-MAIN-20140728011754-00225-ip-10-146-231-18.ec2.internal.warc.gz | CC-MAIN-2014-23 | 3,672 | 24 |
https://psystenance.com/2010/09/ | code | Open Data Waterloo Region
There is now a website for Open Data Waterloo Region as well as a mailing list.
Open Data movements are about getting governments to open up their data sets in accessible electronic formats for citizens to use as they see fit. This allows people to render the data more widely understandable and readable, and to combine data in fruitful ways. I’ll leave further explanation as links: Three Laws of Open Data, 8 Principles of Open Government Data, and Creating Effective Open Government Portals.
In Ontario, Ottawa and London have strong Open Data movements. Here in Waterloo Region, despite the high-profile technology focus and the proliferation of Blackberries, we don’t yet have one. I’m hoping to fix that, perhaps in time to have some impact on the upcoming municipal elections.
Anyone who is interested in helping to build Open Data Waterloo Region — to advocate for and to use local government data to improve our community — is invited to an informal organizational meeting this Thursday in Waterloo. (See the Facebook event listing if you like.) Whether or not you can make it, you are welcome to join the Facebook group to show your support and stay updated on progress. Please direct people who may be interested to this post.
Thursday, September 9, 6pm – 8pm.
Huether Hotel – the BarleyWorks operations room (2nd floor of BarleyWorks)
59 King St N (at Princess St), Waterloo, ON
(Google Maps link. Note that getting there requires traversing several flights of stairs.) | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945433.92/warc/CC-MAIN-20230326044821-20230326074821-00433.warc.gz | CC-MAIN-2023-14 | 1,522 | 9 |
http://superuser.com/questions/306244/adding-root-to-a-group | code | I am not new to Linux, but there's this strange behaviour I'm seeing on my Fedora 15 box. I want to add the superuser to a group called, say,
# usermod -a -G thisgroup root
# groups
root bin daemon sys adm disk wheel
thisgroup is absent. Surprisingly, when I thought of editing /etc/group, root was present there!
Can anyone explain why the output of
groups didn't show my new addition? | s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257826916.34/warc/CC-MAIN-20160723071026-00323-ip-10-185-27-174.ec2.internal.warc.gz | CC-MAIN-2016-30 | 355 | 6 |
https://www.techshadows.com/specifying-proper-case/ | code | If you receive information from others as an odd assortment of upper- and lowercase characters, you may want to put the PROPER worksheet function to work for you. This function converts text so that the first letters of any words are uppercase and everything else is lowercase. Actually, what it does is make everything lowercase except any letters that do not follow another letter. Thus, any letters following spaces, punctuation, or numbers would be converted to uppercase.
As an example, if cell D4 contains “THIS IS MY TEXT”, you could use the following formula in cell E4:
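=PROPER(D4)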
The result is that cell E4 will contain “This Is My Text”. | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711162.52/warc/CC-MAIN-20221207121241-20221207151241-00536.warc.gz | CC-MAIN-2022-49 | 645 | 3 |
https://vtsoftware.co.uk/transplushelp/vat-moss.html | code | The information on this page is for general guidance only and should not be taken as definitive VAT advice, since individual circumstances may vary. You should refer to European Commission guidance for details of your obligations for EU VAT.
If you are a UK (Great Britain and Northern Ireland) business that sells goods and certain services through e-commerce to consumers in EU countries, subject to certain conditions, you may have the option to register for VAT in a single EU country and pay the VAT due to each country through the One-Stop Shop (OSS) Scheme, rather than register for VAT in each country.
VAT OSS returns cannot be produced in VT Transaction+, however sales under VAT OSS can be entered as described below. You can then run a report of VAT on these sales to help you determine the figures for your VAT OSS return.
For more information on the OSS please refer to EC website.
To enter VAT OSS sales in VT Transaction+:
(Note: Screenshots below refer to 'MOSS', which was the former name of the scheme prior to 1 July 2021)
The steps below can also be used if you are not in the OSS scheme but pay VAT to an individual EU country(s), or if you pay VAT or sales tax to a non-EU country(s). You just need to name the account in Step 2. as the VAT/sales tax liability of the particular country, e.g. VAT liability - Republic of Ireland.
1.Create a new income account (Set Up>Accounts>All>New) for these type of sales, where Entries analysed to this account are normally within the scope of VAT is unticked. This is so that these sales do not get picked up on your UK VAT return in VT Transaction+. For example:
2.Create a new creditors account (Set Up>Accounts>All>New) for the VAT OSS liability. Entries analysed to this account are normally within the scope of VAT should be unticked:
3.Enter the sale using the SIN function (or REC function for non-invoiced sales) e.g. for a sale to a customer in the Republic of Ireland for 100.00 net and 23% VAT:
•enter the total amount of the sale including the EU VAT in the Total column, e.g. 123.00 (if you are entering in Euros or another foreign currency, you need to set up multi-currency as explained in Multi-currency accounting)
•leave Output VAT field blank
•in the Analysis of net amount section, enter the net value of the sale on the 1st line, e.g. 100.00; the Analysis Account for this line should be the income account you created in Step 1
•on the 2nd line, enter the EU VAT charged to the customer e.g. 23.00; the Analysis Account for this line should be the VAT OSS creditor account you created in Step 2
•Select Normal for the Type of Sale (for VAT purposes)
4.Repeat step 3. for each sale to non-business customers in other EU countries
5.At the end of each VAT quarter for your OSS return, change the current period caption to the relevant quarter:
6.Run the report of transactions in the VAT OSS account for the quarter by selecting Display>All Accounts>VAT MOSS:
7.Copy the report by clicking on the copy icon in the top toolbar, and paste it to a spreadsheet. You can then use this information to determine the total VAT and total net amounts by country. This is made easier if you can identify the country of each transaction, by including an identifier of the country when creating the customer, e.g. naming an Irish customer account something like 'IRL: A Customer' or 'A Customer: IRL'.
8.You can enter the payment of VAT under OSS in the usual way for entering a payment, i.e. by entering a PAY transaction, making sure the analysis account for the transaction is the VAT OSS liability account you created in Step 2. | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570827.41/warc/CC-MAIN-20220808122331-20220808152331-00439.warc.gz | CC-MAIN-2022-33 | 3,612 | 20 |
https://www.cre8asiteforums.com/forums/topic/36994-kim-krause-berg-on-blog-usability/?pid=182153 | code | Kim Krause Berg on Blog Usability
Posted 18 May 2006 - 03:47 PM
Good reading. Check it out.
Posted 18 May 2006 - 05:17 PM
I have some tweaking to do on my own blog now after reading it.
Posted 18 May 2006 - 05:24 PM
Posted 18 May 2006 - 06:51 PM
With the amount of blogs out there offering a Blog Usability service is a great idea Kim. And having Aaron's as the first is just great. His re-design speaks volumes. Well done!
Posted 18 May 2006 - 09:14 PM
As I did the research for the blog testing, I came away with the thought that we've only touched the surface with blogging - though my sense is it won't be called that for long. It's morphing and evolving into something else.
Posted 05 July 2006 - 05:07 AM
0 user(s) are reading this topic
0 members, 0 guests, 0 anonymous users | s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823153.58/warc/CC-MAIN-20171018214541-20171018234541-00504.warc.gz | CC-MAIN-2017-43 | 782 | 13 |
https://libraries.io/pypi/pyxpdf | code | pyxpdf is a fast and memory efficient python module for parsing PDF documents based on xpdf reader sources.
- Almost 20x faster than pure Python based PDF parsers (see Speed Comparison)
- Extract text while maintaining original document layout (best possible)
- Support almost all PDF encodings, CMaps and predefined CMaps.
- Extract LZW, RLE, CCITTFax, DCT, JBIG2 and JPX compressed images and image masks along with their BBox.
- Render PDF Pages as image with support of '1', 'L', 'LA', 'RGB', 'RGBA' and 'CMYK' color modes.
- No explicit dependencies (except optional ones, see Installation)
- Thread Safe
- Speed Comparison
pyxpdf is licensed under the GNU General Public License (GPL), version 3. See the LICENSE | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00044.warc.gz | CC-MAIN-2022-40 | 723 | 10 |
http://www.steveschofield.tech/techbiz/2012/12/29/christmas-prezzies-from-the-tech-world | code | I wrote about my impending Christmas of heads down grafting on a new, improved product for managing your online information. Well, Christmas has come and gone, leaving only an empty box of Lindt Lindor chocolates, 1/2 stone in excess body fat and a little 21 month old boy even more addicted to trains than he was before.
And true to my intentions, we've been busy.
I'll be giving you regular updates on that from now on, as and when there's something significant to show and tell. But in the meantime, I want to say that having delved into the world of real-time, complex web-app development from scratch, how impressed I am with the open source eco-system.
The technologies, tools, libraries and frameworks at our disposal are numerous, with more being created all the time. This post is a brief overview of those which we have decided to implement as core, or test as necessary. Some you will certainly know, some you may not... all worth noting and adding to your list of tech-to-try.
Heroku - Although it is my first real experience building on Heroku, the choice was an easy one. Free setup, easy to scale, support for and easy installation of 3rd party apps like SendGrid, New Relic and others, the list goes on. I've had to become a little more familiar with GIT (I use SVN normally) but deploying to our repo at Beanstalkapp.com has been pretty easy.
MongoDB - Another first has been a foray into the world of noSQL databases. Not because it's trendy, hip or fashionable to do so, but because we felt this would be the best fit for database needs - primarily due to our perceived "need for speed". We took a lot of guidance from the team at Trello, who have been kind enough to detail lots of thoughts on their choice of stack. I'm not qualified to debate on SQL over noSQL, but experience with query speeds using mySQL previously (on just about every other project I've built) urged me to try something new, and for this product, definitely seems to be a good fit.
Ruby on Rails - At MySpareBrain, the app was built on Google App Engine and thus, any server side scripting was written in Python. For this new product, we discounted Python due to new team skills and experience. We toyed with using Node.js which has been growing in popularity, but in the end opted to progress with Ruby on Rails providing the server side MVC we need, which also works very well with MongoDB.
Devise for RoR - Devise is a modular authentication solution for RoR. We've used Devise in conjunction with Facebook and Twitter social login API's for what we think is a nicely rounded model for creating and managing user accounts with different roles & capabilities. Check out the docs on Github
Angular.js - This is a little misleading, as we're not currently using Angular. Consider it a bonus. But I have been playing with it a bit (I have the makings of a nifty little to-do list app). It's interesting because it enables you to write functional code into the html which makes it quick, relatively easy and clear to read when building web applications.
Animate.css - A sweet little CSS3 library for CSS animations that's very easy to use, cross-browser compatible. Use with class and subtlety. Overuse will kill you.
Kinetic.js - Since we made the decision to use SVG for objects in the new product, Kinetic wasn't really going to be all that much help to us. But, I found a little time to use it to fake the drag and drop of objects that we create in the app using Raphael, in a little section of my attempt to take the online Pitch Deck to the next level. I'll be posting about that soon... Kinetic is good for lots of reasons; just a quick look at some of the demos / examples shows just how complex graphical animations can be produced.
SASS - Less or SASS, Less or SASS? OK, SASS. Development team choice, I think related to personal preference of indentations and workflow with HAML, which SASS takes its own inspiration from. See this to help you decide which to use.
HAML - Our pursuit for beautiful code, something which we decided we wanted to get right from the off, led us to use HAML, for simplified template creation - particularly useful for RoR apps. Our development principles align with HAMLs stated objectives:
- Markup should be beautiful
- Markup should be DRY
- Markup should be well indented
- HTML structure should be clear | s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524879.8/warc/CC-MAIN-20190716201412-20190716223412-00209.warc.gz | CC-MAIN-2019-30 | 4,339 | 17 |
https://embed.planetcalc.com/8008/ | code | Let's suppose you analyze some random data by nature, and you count the number of times a particular value appeared in your data. Or, in terms of probability theory, a number of times a particular event has happened.
A good example of such a task is the analysis of letter frequencies in the text. You have the text, and then you count how much each letter of the alphabet appeared in your text. After that, you probably want to compare your results with theoretical letter (or bigrams, or whatever you count) frequencies, which are often given by probabilities. So, you need to convert from counts to probabilities. It is actually easy - you need to sum all counts and then divide each letter's value to the total number of letters in the text. But, to do it by hand can be boring and tedious - say, you need to import your data to a spreadsheet program, sum the column, fill another column with results of division, etc.
That's why I've created the calculator below. It takes a list of events and the number of times the particular event occurred and calculates the probability of each event by dividing the event count by the total number of events. Also, if there are many events, sometimes you need logarithms of probabilities instead of probabilities - and I've included this option as well. However, note that you can't take the log of zero, so if any event has a count of zero, a log is computed for some small value, in this case, 0.01 divided by total count.
Paste your data, tweak regular expressions used for parsing if needed, then choose the result columns' separator and what values you want to see in the results.
As for regular expression, the only requirement is to produce two capture groups - first for event name and second for event count; by default, it assumes that you have the event name and count separated by a semicolon.
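For reference, a rough Python equivalent of what the calculator does, assuming semicolon-separated "event;count" lines, looks like this:

import math
import re

data = """e;12
t;9
a;8
q;0"""

pairs = [re.match(r"(.+?);(\d+)", line.strip()).groups() for line in data.splitlines()]
counts = {name: int(count) for name, count in pairs}
total = sum(counts.values())

for name, count in counts.items():
    probability = count / total
    # the log of zero is undefined, so a small stand-in value is used instead
    log_probability = math.log(probability) if count > 0 else math.log(0.01 / total)
    print(name, probability, log_probability)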
I hope it can save some time for somebody. Enjoy.
- • Probability of given number success events in several Bernoulli trials
- • Bernoulli trials table
- • Binomial distribution, probability density function, cumulative distribution function, mean and variance
- • Urn probability simulator
- • Poisson Distribution. Probability density function, cumulative distribution function, mean and variance
- • Statistics section ( 36 calculators ) | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473472.21/warc/CC-MAIN-20240221102433-20240221132433-00585.warc.gz | CC-MAIN-2024-10 | 2,302 | 12 |
http://www.indeed.com/cmp/Covidien/reviews | code | Covidien no place for Business Intelligence
Senior Data System Architect and Engineer (Former Employee) – Boulder, CO – November 10, 2017
This company dipped its toe into business intelligence, and didn't have a clue what it was doing. It eliminated its R&D efforts in BI when a new CEO came in, and despite our projects success, the company terminated the whole shebang.
Management change doomed BI efforts | s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805809.59/warc/CC-MAIN-20171119210640-20171119230640-00689.warc.gz | CC-MAIN-2017-47 | 411 | 4 |
https://www.simplilearn.com/tutorials/sql-tutorial/what-is-sql?source=frs_author_page | code | Most industries in today’s world, from banks to software companies, deal with a vast amount of data. Therefore, it is essential for us to know how to make sense of this massive amount of data. SQL is the most commonly used language for managing data in relational databases. By looking at the job postings on the website LinkedIn.com, we can see that in just India alone, more than 50,000 job listings mention SQL as one of their top required skills.
What Is SQL?
SQL, or Structured Query Language, is a data management language used to handle data in relational databases. With the help of SQL, you can create and modify the structure of databases and tables. You can also store, manipulate, and retrieve data from databases and tables using SQL. It is a non-procedural or declarative query language, which means that the user specifies which data is required without specifying how to retrieve it.
SQL is a standard language of the International Organization of Standardization (ISO) and one of the most sought after skills in the industry.
Now that we have the answer to the “what is SQL” question, let’s see why it is important in the modern world.
Importance of SQL
The Most Popular and Universal Database Language
SQL is the language most professionals turn to when it comes to handling data. The most popular open-source databases support SQL, making it the most commonly used relational database language.
Easy to Learn
Because SQL is a query language and not a programming language, it is comparatively easier to grasp than other languages, with a syntax similar to logical English sentences.
Standard Relational Database Language
Both ANSI and ISO standardized SQL. It hasn’t changed much over the years, and once you learn SQL, you won’t have to worry about keeping up with too many changes in the years to come.
Helps You Understand Your Dataset
You can use SQL commands to obtain a detailed understanding of your dataset, which is crucial in order to retrieve any useful information from it.
Handle Massive Datasets
SQL can handle large datasets that Excel and regular spreadsheets cannot.
If you’re aspiring to work in data analytics or data science, SQL is one of the fundamental skills you’ll need to have.
The Need for SQL
In the early 1970s, IBM had developed two non-relational methods of storing and retrieving data called ISAM and VSAM. Using these methods, you could perform elementary operations, such as storing, deleting, and retrieving one record at a time.
In 1973, IBM started working on a relational database language based on Edgar F. Codd’s relational model paper published in 1970. They named this language SEQUEL, which was later changed to SQL. In 1979, Oracle released the first commercial version of SQL, which was based on the IBM version.
SQL has an edge over previous data handling methods for the following reasons:
- Users can access multiple records of data with a single line of command.
- Users have a clear understanding of the internal structure of data, enabling them to perform complicated queries based on it.
- It provides data security and integrity.
- There is no need to specify how to get the data.
- The syntax is easily understandable and similar to the English language.
The conception of SQL revolutionized the world of data, and that is why it remains at the number one position, even after 40 years of its existence.
Let’s move on to how you can start learning and practicing SQL yourself to get an even clearer picture of what SQL is.
How to Start Practicing SQL
To start practicing SQL, you need to have a Relational Database Management System (RDMS) on your device. You can choose from a wide variety of popular open-source databases available online, the most popular of which is MySQL. It is free and compatible with a variety of operating systems, such as Windows and Linux, and it is simple to install.
The steps to download MySQL on your device are as follows:
- Visit the following URL: https://dev.mysql.com/downloads/mysql/
- Select your device’s operating system from the drop-down list.
- From the list of available versions that appear, download the most suitable version for your device.
- Once the installation of the version of MySQL you selected is complete, you’re all set to start practicing SQL.
To properly understand what is SQL, you need to know about the SQL commands.
Syntax and Basic SQL Commands
To perform all the required commands, SQL consists of the following sub-languages:
Data Definition Language (DDL)
This includes commands like CREATE and DROP, which enable the creation and modification of database objects, such as tables.
Data Manipulation Language (DML)
DML is used to store and modify the data in the database. Commands like INSERT and SELECT belong to this sub-language.
Data Control Language (DCL)
This sub-language comes into play when you want to control the access to your database using the commands GRANT and REVOKE.
Transaction Control Language (TCL)
TCL is used to handle the modifications that DML commands made to the data by using the COMMIT and ROLLBACK commands.
Let’s see how some of these commands work. The first thing we need to do is create a database, which is a collection of structured data stored in any electronic device that a relational database management system oversees.
To create a database, we’ll use the CREATE command. CREATE is a DDL command, and it is used for the creation of both databases and tables. The syntax of the CREATE command is as follows:
CREATE DATABASE database_name;
The name of the database you create should start with a letter and contain only alphanumeric characters and underscores. We'll create a database called our_first_database and then begin adding objects to that database.
CREATE DATABASE our_first_database;
To check whether our database was successfully created or not, we can use the
SHOW DATABASES command.
As you can see, our database has been successfully created.
- To use this database, we’ll use the following command:
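USE our_first_database;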
- Now that we have our database, our next step would be to create a table and populate it.
A table is a collection of rows and columns in which every column represents an attribute and has a data type, and every row is an instance in the table.
We must use the following command to create a table:
CREATE TABLE table_name (column datatype);
SQL has the following six categories of pre-defined data types that you can use:
- Numeric data types, like int and float.
- Date/Time data types, such as Timestamp and Date.
- Character/String data types, such as Char and Varchar.
- Binary data types, including Binary and Varbinary.
- Unicode Character data types, such as NChar and NVarchar.
- Other miscellaneous data types, like Clob and XML.
We’ll create the following table in our database:
The data type of the attribute “Name” is varchar(250), so this attribute will contain a maximum of 250 characters. Attributes “ID” and “Salary” will only contain integers.
- To populate the table we just created, we’ll use the INSERT command, which is a part of the DML sub-language.
The syntax of the INSERT command is as follows:
INSERT INTO table_name (column1, column2…)
VALUES (value1, value2…);
Now, we’ll populate our table “Person” using the following command:
This will populate the table with those records.
Now that we’ve populated our table, we’ll use the SELECT command to retrieve information. The following is the syntax of the SELECT command:
SELECT column FROM table_name;
If you want to retrieve all the records from a table, you can use the following command:
SELECT * FROM table_name;
Using the example from our “Person” table, we’ll retrieve the attributes “ID” and “Name” using the following command:
SELECT ID, Name
FROM Person;
The above command will display only the ID and Name columns for every record in the table.
- Occasionally, you need to delete records from the table you created.
The DELETE command from the DML sub-language can be used to delete a row or a set of rows from a table. The syntax of the delete command is as follows:
DELETE FROM table_name
WHERE condition;
The DELETE command contains a WHERE clause, enabling users to specify the condition according to the rows they want to delete. If there aren't any specified conditions, all the rows from the table will be deleted.
For example, if you want to delete all the records that contain salaries less than or equal to Rs. 20000 from our “Person” table, you can use the following query:
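DELETE FROM Person
WHERE Salary <= 20000;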
Running the above query removes all records with a salary of 20000 or less from the table.
- There is another command in the DDL sub-language that deletes the table or the database from the memory permanently.
The DROP command carries out this function. The syntax is as follows:
DROP DATABASE database_name;
DROP TABLE table_name;
The following command will erase our database, our_first_database, permanently:
DROP DATABASE our_first_database;
Now, we have deleted the database we created, including all of its contents. We can use the SHOW DATABASES command to confirm this.
With this, we reach the end of this “what is SQL?” article.
Learning SQL can potentially open up doors to lucrative careers and trending positions, such as business analyst, data analyst, .NET developer, data scientist, and many more. All of these exciting opportunities can either benefit from, or require, learning SQL skills. It is also the standard database language utilized in the most popular database systems, such as MySQL, Oracle, and MS Access.
Now that you know the basics of SQL, you’ve taken your first step towards becoming an expert in SQL. If you liked this article and want to get certified, you can check out Simplilearn's Business Analyst Master’s Program, which also covers SQL in great depth.
Do you have any questions for us? Be sure to leave them in our comments section, and we’ll have our experts in the field answer them for you. | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358480.10/warc/CC-MAIN-20211128073830-20211128103830-00535.warc.gz | CC-MAIN-2021-49 | 9,873 | 98 |
https://lists.debian.org/debian-devel/1999/07/msg01999.html | code | Whose user/group to monitor log files?
I have a package that needs to read (just read) log files. I don't want to
make it run as root (for security reasons, the less privileges, the better).
What user or group can I use?
The logs files are readable by the 'adm' group. But there is no user in that
group by default. I would like to create one just for this purpose (packages
are not supposed to create users lightly, Policy 3.2). What about a 'monitor'
user, member of the 'adm' group, which could be used by all the packages? | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539101.40/warc/CC-MAIN-20220521112022-20220521142022-00401.warc.gz | CC-MAIN-2022-21 | 526 | 8 |
https://community.juniper.net/answers/communities/community-home/digestviewer/viewthread?GroupId=109&MID=71103&CommunityKey=c1a2ae9d-fa3e-41f5-82dc-a447b7b0da24&tab=digestviewer | code | Is it possible to use the supplied USB cable to give console access to an SRX?
I have downloaded the drivers to enable this but despite playing with the settings in SecureCRT, I am unable to get this functioning.
Does anyone use this method, and if so, what else will I need to do?
If you have installed the drivers and plugged the usb cable from your PC to the mini-USB port, does the serial port show up in you device manager? It should register itself as a COM port with next available number assigned (Eg. COM3).
Can you see it in the device manager with the right driver/function? (not showing up as an unknown device).
Sure, I can see it. I have played with the settings in SecureCRT and can't get a live connection to it. These are my settings;
STOP BITS: 1
Is that correct?
The default settings are 9600 8N1.. So try changing your baud rate and revert with the result.
Success, thanks. Not working on a colleagues machine but working on mine 🙂 | s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178360293.33/warc/CC-MAIN-20210228054509-20210228084509-00196.warc.gz | CC-MAIN-2021-10 | 954 | 10 |
https://gitlab.com/gitlab-org/gitlab/-/issues/14061 | code | Add license "Policy" tab to 'License Compliance' page so that users can easily see existing license policies when viewing licenses
Title was: Add classifications selection and policies to license compliance
Problem to solve
This issue is based on discovery work done in https://gitlab.com/gitlab-org/gitlab-ee/issues/12941. We now have a dedicated license compliance section, that shows licenses detected in a project per the license scan. Currently, adding a license and classification policy is done in Project>Settings>CI/CD>License Compliance. This means the licenses detected are visible to all users, but the policies are not (unless a newly detected license appears in an MR).
Additionally, in order to mark a license as denied or allowed (binary, one or the other currently no neutral option), the admin user has to manually add the license and classifications to the “License Compliance” settings area. This is a manual process and a significant burden on the user to set up. Also, consider projects that already have licenses in them, in which case the users would have no awareness of these already committed licenses that may be out of compliance.
License compliance classification names have changed, per this issue: #12937 (closed). In %12.5 we are updating/adding the license management/policy section #14061 (closed), which will also update the new classification names in that section. However, the classification names need to be updated in the UI seen in the MR widget.
This is MVC following ~"product discovery" #12941 (closed)
When complete we should be able to click to view policies from the license list in a new tab so I don't have to go to the settings area.
- Compliance Role wants to see that they are following policies that have been set, edit policies as needed, and set policies for unclassified licenses.
- Delaney (Development Team Lead)
- Sasha (Software Developer)
- Sam (Security Analyst)
- Legal and/or person responsible for orgs compliance
- This MVC lays the foundation for the following next steps: (policies shown against licenses currently detected in the project)
- Updates classification names and adds uncategorized option #12937 (closed)
Job's to be done
- User that is responsible for compliance: When my organization has policies with licenses, I want to be aware of my companies policies, so I can make sure my project licenses are in compliance with my orgs compliance.
- User that is accountable for compliance: When I need to enforce our organization's licenses restrictions, I want to be able to view them and define policies, so that I can ensure a project's compliance.
- add policy tab and count
- on policy tab display columns license and policy
- display comments icon next to policies in the column if present, don't if not
- mousing over comment icon gets you tooltip with comments
Improve the information architecture by unifying licenses detected in a project, with policies designated and created by the admin. This way policies set by the admin will be visible to all project participants.
UI, seen by both developer/maintainer (follow-up issue: #34698 (closed) to add edit/add policy)
Update license names that are seen in the MR. This issue is closely related to #12530 (closed)
Classification names that require change:
- Uncategorized, newly detected or admin has not selected classification
- Approve => Allowed, admin has classified license as acceptable
- Approve => Allow, used in the call-to-action seen in the MR (admin view) to classify license as Allowed
- Blacklist => Denied, project participant views this classification when admin classified license as not allowed
- Blacklist => Deny, used in the call-to-action seen in the MR (admin view) to classify license as unacceptable
These changes would be reflected in: merge request (license modals), Settings > CI (adding new and existing license dropdown), and then in the new policies tab.
Permissions and Security
- Developer view may view policies, but can't adjust them
- Maintainer may view/add/edit/delete policies
- Public projects policy section is not visible to non-project participants (#33659 (closed))
- not logged in - no tab and no count
- License compliance foundations document
- Updated classification names issue #12937 (closed)
- Update docs https://docs.gitlab.com/ee/user/application_security/license_compliance/#project-policies-for-license-compliance with additional way to see policies
- unit test on NOT seeing as non developer (not logged in, logged in but not dev)
- unit test can't see
- unit test can see as maintainer, and can edit
What does success look like, and how can we measure that?
- User navigates to license compliance section then policies tab, when tasked with adding a license classification policy
- User understands the difference between "detected in project" and "Policies" section
- User is able to add a license and a classification to the policies list
- (We can measure these items in an upcoming user test - ToDo create solution validation issue)
- Usage ping for policies added?
We are striving to make the person in charge of compliances job direct and with the least amount of manual work or busy work (copy paste). This should make it simpler to interact with all licenses in the project to be able to see their state, and quickly update as needed.
What is the type of buyer?
Links / references
- Discovery issue: #12941 (closed)
Subissue - implement feature flag in UI to toggle tabs and additional "Policies" tab.
- Implement feature flag to toggle on and off the displaying of the additional two new tabs "Detected in project" and "Policies"
- Implement tabs. This covers rendering the existing licenses table in "Detected In Project"
- Show counts in the tabs. Note: this will set us up to start merging this work in pieces without exposing it in production
Subissue - Display Add license header in "Policies tab" and table with dropdown and modal
Refactor the existing add-license UI that we use in the license management page so we can use it in two places, in particular the add licenses modal and table. (Now covered in Issue 2 since it's a good chunk of work.)
- Need to decide if we will use client side search or not. In the license management page, search is done client side. License management uses client side pagination with the Paginated-List component from gitlab-ui. We don't have a re-usable server side pagination table as far as I know. I'm working on one for License List that we should be able to use.
After refactor, implement table and modal.
Note: We may have to create an entirely separate issue to refactor the license management views/store.(Issue 2 created below)
- Provide the API to display the policies for a project
- Provide the API to create a policy.
- Provide the API to update a policy.
- Implement feature flag and policy tab w/ table
- Update everywhere in the UI where we renamed approval status -> classifications
Documentation update - Who is responsible for this?
- Release Post ready just needs images | s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703550617.50/warc/CC-MAIN-20210124173052-20210124203052-00093.warc.gz | CC-MAIN-2021-04 | 7,018 | 69 |
https://forums.adobe.com/thread/1252632 | code | In bootstrap.css have a look at Line 6111
See what happens when you set the values to zero (0)
In bootstrap.css change to the following:
padding: 60px 0;
margin-bottom: 30px;
background-color: #999; /**adjust value as desired**/
Thanks for the response guys, the problem is that I don't have the bootstrap.css. I used the bootstrap sample page templates and it did not give me that particular file.
it only produced the bootstrap-responsive.css which has no hero tags.
I apologize, I did not notice the folder that contained the bootstrap.css. Okay, the new values that you guys provided me did some improvement, but the banner shifted to the left. It can be a benefit so I can add the information about the product in my slide show, but how can I shift it to the center or the right? Much appreciated.
You'll need to tweak the code. Try opening the page in Firefox with Web Developer Toolbar add-on. This will allow you to edit the CSS code on screen until you get what you want. When you're happy with it, copy & paste the new CSS code into your current style sheet in DW. | s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257644701.7/warc/CC-MAIN-20180317055142-20180317075142-00267.warc.gz | CC-MAIN-2018-13 | 1,072 | 10 |
https://inspiredideasblogging.wordpress.com/2014/04/28/crit-pecha-kucha-presentation-feedback/ | code | So today I did a Pecha Kucha Presentation. Here is my website design so far. Here I will record the feedback I received.
Feedback for the welcome page animation
People said that I should make the logo change colour according to the background so that it would stand out. Some people thought there should be some music playing during this animation.
Feedback for the descriptive words animation
No one really liked this unfortunately so I’ll bin it. One tutor thought it was okay. The other tutor said that it was as if the viewer was being told what to think about the music.
I will change the colour of the logo on the welcome page. The animation of the descriptive words was just a random idea I had so I don’t mind replacing it with something more traditional and recognisable/understandable. There is still lots to do. | s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647584.56/warc/CC-MAIN-20180321063114-20180321083114-00593.warc.gz | CC-MAIN-2018-13 | 826 | 6 |
https://about.me/bharekar | code | Web Developer, Software Engineer, and Designer in Pune, India
Hi, I’m shubham. I’m a web developer living in Pune, India. I am a fan of web development, design, and technology. I’m also interested in music and photography. You can visit my website with a click on the button above. | s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864919.43/warc/CC-MAIN-20180623015758-20180623035758-00308.warc.gz | CC-MAIN-2018-26 | 287 | 2 |
https://support.gainsight.com/PX/Integrations/01Technology_Partner_Integrations/Salesforce_(SFDC)_Integration | code | This article explains how to integrate Salesforce with Gainsight PX.
The Gainsight PX Salesforce integration allows you to copy data from Salesforce to Gainsight PX on the Account and User records. If a matching SFDC record is found for a given Gainsight PX record, then the chosen fields on the field mapping screen are copied from the Salesforce object to the matched Gainsight PX record.
Note: For those that do not use the standard SFDC Account and/or standard SFDC Contact objects and instead have your Accounts and Contacts defined as an SFDC Custom Object, this integration also allows you to specify any SFDC Custom Object as the SFDC Source object.
Settings Screen to Control Matching Logic
Click the Settings icon on the Salesforce card in the Integrations screen to enable data retrieval settings for Accounts and Contacts on the Matching Logic dialog.
Account Match Scenario
For Gainsight PX account records: if the account was matched before and the account.sfdcId field matches the ID that was matched previously, Gainsight PX retains the previous match and skips the remaining matching logic. Otherwise, the integration finds the SFDC account that matches "best" by applying a weighted value to how well it matches the criteria selected in the SFDC integration screen in the application.
If there are multiple SFDC accounts that match, the account with highest cumulative score is considered a match.
| Match criterion | Matching logic | Weight | Notes |
| sfdcId to Salesforce ID | account.sfdcId equal to sfdcAccount.id | 2.0 | |
| Custom Field Matching | Matching value in the two given fields | 2.0 | Fields must be of the same type; only strings and integral (whole number) numerics are supported. |
| Website Domain | account.website domain equal to sfdcAccount.website domain | 1.0 | |
| Name | account.name equal to sfdcAccount.name | 1.0 | |
| Recent user domain | user.email domain equal to sfdcAccount.website domain | 1.0 | Retrieves the 100 most recently seen users and extracts the domains from their email addresses. If there is a mixture of email domains, the score is weighted by the portion of the users that have the same domain. |
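To make the weighted scoring concrete, here is a small illustrative sketch (in Python) of how a cumulative score over the criteria above could be computed. This is not Gainsight's actual code, and the "tier" / "Tier__c" pair is a hypothetical stand-in for whatever custom field pair is selected on the integration screen:

def _domain(value):
    # crude domain extraction for the sketch: handles emails and simple URLs
    if not value:
        return None
    value = value.split("@")[-1]
    for prefix in ("https://", "http://", "www."):
        value = value.replace(prefix, "")
    return value.split("/")[0].lower()

def score_account(px_account, sfdc_account, recent_user_emails):
    score = 0.0
    if px_account.get("sfdcId") and px_account["sfdcId"] == sfdc_account.get("Id"):
        score += 2.0
    if px_account.get("tier") and px_account["tier"] == sfdc_account.get("Tier__c"):
        score += 2.0  # hypothetical custom field pair
    if _domain(px_account.get("website")) and _domain(px_account.get("website")) == _domain(sfdc_account.get("Website")):
        score += 1.0
    if px_account.get("name") and px_account["name"] == sfdc_account.get("Name"):
        score += 1.0
    if recent_user_emails:
        matching = sum(1 for e in recent_user_emails if _domain(e) == _domain(sfdc_account.get("Website")))
        score += 1.0 * matching / len(recent_user_emails)  # weighted by the portion of matching user domains
    return score

# The SFDC account with the highest cumulative score is taken as the match, e.g.:
# best = max(sfdc_candidates, key=lambda a: score_account(px_account, a, recent_user_emails))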
Contact Match Scenario
For Gainsight PX user records: if the contact was matched before and the user.sfdcContactId matches the ID that was matched previously, Gainsight PX retains the previous match and skips the remaining matching logic. Otherwise, it finds the first matching contact by iterating through the matching criteria selected in the SFDC integration screen. The first contact found that matches is selected.
The matching is done in the following order:
| Match criterion | Matching logic | Notes |
| sfdcContactId to Salesforce Contact ID | user.sfdcContactID equal to contact.id | |
| Custom Field Matching | Matching value in the two given fields | Fields must be of the same type; only strings and integral (whole number) numerics are supported. |
| Email | user.email equal to contact.email | If more than one matching email on SFDC, not considered a match |
| Phone | user.phone equal to contact.phone | If more than one matching phone on SFDC, not considered a match |
Field mapping allows you to map the field(s) in SFDC that you want to push to Gainsight PX. Authorize your access to Salesforce using the Authorize button on the Salesforce card. Click on the Map Fields icon on the Salesforce card. The Mapping fields window is displayed.
Following are the supported field types:
- DATE_TIME types in PX map to the following field types in SFDC
DATE, DATETIME, TIME
- STRING types in PX map to the following field types in SFDC
STRING, TEXTAREA, PICKLIST, MULTIPICKLIST, COMBOBOX, EMAIL, URL, ID, PHONE
- NUMBER types in PX map to the following field types in SFDC
INT, DOUBLE, PERCENT, CURRENCY
- BOOLEAN types in PX map to the following field types in SFDC
For more information on how to configure field mapping, refer to the Data Pulled from Salesforce section in the Salesforce Integration in Gainsight PX (Bi-Directional) article from the Additional Resources section.
|Salesforce Integration in Gainsight PX (Bi-Directional)| | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00039.warc.gz | CC-MAIN-2022-40 | 3,965 | 31 |
https://forum.arduino.cc/t/library-import-not-working/49286 | code | I recently downloaded the SoftwareServo library and installed it in the library folder. I imported the library using the Sketch -> Import Library path, but when it puts the include in the file the SoftwareServo.h file doesn't change into a red color like all the other standard libraries. I ignored that difference and tried to run a sketch, but it wouldn't run. I'm assuming because the IDE for some reason doesn't see the library. What do I do to fix this?
but it wouldn't run
Are you saying the IDE crashed?
Coloring is based on a file supplied with the library. That file may or may not be complete. It is the compiler's ability to process the sketch that is important, not the color of the text on the screen.
If the compiler is able to compile, link, and upload a sketch, the IDE was able to find the library.
I ignored that difference and tried to run a sketch, but it wouldn't run.
Ignoring the lack of syntax coloring is fine. Failing to explain the "it wouldn't run" statement is not.
Was the compiler able to compile, link, and upload the sketch? What did you add to the sketch that involved the servo? What symptoms did the Arduino exhibit that caused you conclude that "it wouldn't run"?
By it wouldn't run I meant that the same code that used the Servo.h library wouldn't run with the SoftwareServo.h file even though it compiled without error. I did make the necessary Servo versus SoftwareServo instantiation changes. I will take a look at it again and ignore the color. Thanks. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510498.88/warc/CC-MAIN-20230929054611-20230929084611-00232.warc.gz | CC-MAIN-2023-40 | 1,494 | 9 |
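One possible explanation, which the thread does not confirm: unlike the standard Servo library, the SoftwareServo library only drives the servo when its static refresh() method is called regularly (at least every 50 ms or so), so a sketch ported straight from Servo.h can compile cleanly yet never move the servo. A minimal sketch under that assumption (the signal pin 9 is arbitrary):

#include <SoftwareServo.h>

SoftwareServo myServo;

void setup() {
  myServo.attach(9);          // servo signal wire on pin 9 (example)
}

void loop() {
  myServo.write(90);          // target angle in degrees
  delay(20);
  SoftwareServo::refresh();   // must be called frequently, or the servo is never updated
}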
https://melissacrmexpert.wordpress.com/ | code | Salesforce.com (SFDC) dropped the potential sale to Microsoft (MSFT) for $55M last year…and now, MSFT just checkmated. Now what? What is going to happen to the well-loved LinkedIn Navigator integration with SFDC?
If you look at the landscape of what LinkedIn can do for business applications like SFDC and Dynamics, it is clear that this acquisition was a power play to bolster Dynamics as a contender in this space. Dynamics has struggled for some time trying to gain traction as the go-to tool versus SFDC. With LinkedIn, Dynamics gains a lot of attention…you cannot build a business without having a strong pool of potential buyers to market and sell to. Buyers will need to weigh what it means to now purchase and/or expand SFDC against the perceived risk of not having integration with LinkedIn and/or LinkedIn Navigator. There is not an easy, cost-effective way, particularly for SMB and MidMarket businesses, to get the quality and amount of data for business use elsewhere, nor by downloading directly from LinkedIn. (Do you hear that? That is the sound of list purchase vendors celebrating!)
As a heavy SFDC user, there are obvious reasons why Dynamics is overlooked consistently. SFDC has the Apple-esque model, using developers to build out applications. SFDC's user community blows away Dynamics like an exploding volcano. DreamForce is THE event of the year for business applications users AND developers, and has been for some time- think 150K people attending and shutting down SF, big time, if you have not been. Dynamics gets zero points for being closed off to outside developers. Dynamics is not user-friendly either in comparison.
So now what? Here are my predictions:
- SFDC will continue to talk about selling to Oracle, which is the latest rumor about SFDC selling, given that Oracle has just raised $10M. It's a bad idea in my opinion, even though SFDC is built on the Oracle platform. Historically, Oracle ruins business products for business users- take Eloqua and Siebel as prime examples. Oracle, small business, and end users being able to use their tools do not all belong in the same sentence. Unless they have a major strategy and new team for SMB and MidMarket, the idea frightens me!
- SFDC will continue to solidify both data and eCommerce platform partnerships- Google, Amazon.com. Rumors have been swirling about SFDC being purchased by either of these as well.
- LinkedIn Navigator and LinkedIn integrations with SFDC will continue without much additional R&D, and will eventually be phased out over the next 1-2 years, if not sooner. Why would you give a competitor access to your purchase for very long after you figure out how it has leveraged your asset? I am already hearing very loud, worried cries from SFDC users, and potential purchasers of LinkedIn tools with SFDC are reconsidering moving forward.
- We will see LinkedIn competitors start gaining major traction, winning significant investments… MSFT haters will work their very best to make sure that happens!
Let’s see if my crystal ball is accurate! Until then, I am downloading my contacts from LinkedIn, thank you very much! | s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809160.77/warc/CC-MAIN-20171124234011-20171125014011-00389.warc.gz | CC-MAIN-2017-47 | 3,122 | 9 |
http://forum.archosfans.com/viewtopic.php?f=50&t=37317&p=251912 | code | Baasje thank you for answering. Unfortunately adjusting the resolution didnt help either.
Ive searched the net and Ive found the (almost) perfect solution!
Screenmouserotate is a tool that rotates your screen and automatically your mouse pointer also. However, there is one little problem. I have no idea how to create a shortcut for this (dont know to much about computers). A keyboard shortcut or a (permanent) icon in the systemtray would be perfect. Any idea how to do this ?
Screenmouserotate can be downloaded here :http://www.math.uaa.alaska.edu/~afkjm/t ... ?q=node/70 | s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347434137.87/warc/CC-MAIN-20200603112831-20200603142831-00080.warc.gz | CC-MAIN-2020-24 | 576 | 4 |
https://www.multiplemedia.com/en/blog/crashlytics-mobile-applications | code | Crashlytics, Analysis of Applications Stability in a Decentralized Production Environment
Unlike a website, it is rather difficult to test the stability of a mobile application. With a website, we have server logs to know the errors that occur in the backend, and front-end errors are easily tested with a few browsers on a computer running Windows or macOS. When it comes to a mobile application, the process is a bit more complicated.
The testing and debugging of the API is quite easy since this data source is centralized on a server and the logs are easily accessible. It is way more complex with the application because it is installed on phones. It is not possible to fetch the logs from thousands of devices every time there is a crash, or even to know that there is a crash without having manually tested it before the deployment. | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583083.92/warc/CC-MAIN-20211015192439-20211015222439-00692.warc.gz | CC-MAIN-2021-43 | 837 | 3 |
https://www.vault17.com/harryvangestel | code | Harry van Gestel
1953 - Reusel, The Netherlands
Van Gestel's work has been exhibited all over the world, from China to the US and Brazil. In 2006, Sotheby's Amsterdam organised an oeuvre exhibition of Harry van Gestel. Now van Gestel resides back home in Amsterdam and can often be found creating new work in the Harry van Gestel gallery. | s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948516843.8/warc/CC-MAIN-20171212114902-20171212134902-00500.warc.gz | CC-MAIN-2017-51 | 335 | 3 |
https://www.biztalkgurus.com/tag/azure-advisor/ | code | Do you feel difficult to keep up to date on all the frequent updates and announcements in the Microsoft Integration platform?
Integration weekly update can be your solution. It’s a weekly update on the topics related to Integration – enterprise integration, robust & scalable messaging capabilities and Citizen Integration capabilities empowered by Microsoft platform to deliver value to the business.
If you want to receive these updates weekly, then don’t forget to Subscribe!
Cloud and Hybrid Integration:
- C#/BizTalk Developer Novo Technologies Modesto, CA, US
- Sr Biztalk Developer Stafflabs Inc Princeton, NJ, US
- BizTalk Developer First Tech Federal Credit Union Rocklin, CA, US
- Biztalk Developer Jobspring Partners Los Angeles, CA, US
- EDI Integration Developer Seaboard Foods Shawnee, KS, US
Hope this would be helpful. Please feel free to let me know your feedback on the Integration weekly series. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506028.36/warc/CC-MAIN-20230921141907-20230921171907-00292.warc.gz | CC-MAIN-2023-40 | 920 | 10 |
https://docs.raisenow.com/concepts/payments/ | code | What is a Payment?
A payment represents an attempted or succeeded transfer of money (for a donation, a purchase, a subscription fee...) from a payer to an organisation's account.
One-off payments can be created through the payments API, usually using a payment widget, or by charging an existing payment source. Recurring payments are initiated by a subscription, which automatically charges a payment source. The structure of a payment object is very similar in both cases.
Payments are rather complex objects. They contain some core information (like date, amount and currency, status, the payment method that was used), a supporter snapshot if provided, custom parameters, possibly references to the payment source and subscription that were charged or that were created together with the payment, besides technical data from the payment provider.
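Before going field by field, a rough illustrative sketch of the overall shape may help. This is not the authoritative schema: it only uses fields described on this page, status codes are left as placeholders, and every value is invented:

{
  "uuid": "7d5f0a2e-1111-4222-8333-444455556666",
  "created": 1650000000,
  "amount": 2500,
  "currency_identifier": "CHF",
  "test_mode": false,
  "payment_method": "card",
  "provider_payment_id": "prov-12345",
  "method_profile_uuid": "...",
  "provider_profile_uuid": "...",
  "status_history": [
    { "timestamp": 1650000000, "status": "<initial status>" },
    { "timestamp": 1650000002, "status": "<final status>" }
  ],
  "last_status_timestamp": 1650000002,
  "custom_parameters": { "campaign": "spring_appeal" },
  "supporter_uuid": "...",
  "supporter_snapshot": { "...": "copy of the supporter's data at payment time" }
}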
amount holds the paid amount in minor units - basically, in cents or thousandths for currencies supporting them, in units otherwise. It is responsibility of the client to know what are the minor units of each currency.
currency_identifier contains the currency ISO 4217, in capital letters.
The main identifier of a payment, like for most other RaiseNow objects, is the uuid.
Payment providers usually assign an id to every payment, and return it in the payment response: This is stored as
provider_payment_id. It's available only after the payment request has been sent to the provider, and may be missing if the payment failed for technical reasons. It can usually be used to look up a payment in the provider dashboard or API.
Some providers must receive an identifier and may not support uuids. In this case, RaiseNow generates the
For advanced integrations, key-value pairs are stored in the object
additional_identifiers. These are handled by RaiseNow and cannot be specified by the customer.
Payment method-related fields
method_profile_uuid specifies the method profile used in the payment request or attached to the payment source, and
provider_profile_uuid the provider profile to which it belongs. Similarly,
organisation_uuid indicate the account and organisation of the provider profile.
test_mode is a boolean reporting whether the method profile is in test mode or production - in normal cases, it should always be
payment_method contain the identifiers of the payment provider and method that were used to make the payment.
Payment providers as of 2022:
| Payment provider code | Name |
| twint | TWINT (Switzerland only) |
| raisenow_reconciliation | RaiseNow direct integrations with banking systems |
Payment methods as of 2022:
| Payment method code | Name |
| twint | TWINT (Switzerland only) |
| card | Credit or debit card |
| bank_transfer | Bank wire transfers |
| sepa_dd | SEPA Direct Debit |
| wallet | Wallets (Google Pay, Apple Pay) |
Credit or debit card payments can have a field
brand_code with the card brand (e.g. Visa, Mastercard, etc.). The codes may change depending on payment provider and flow.
Wallet payments can have a field
wallet_type - as of 2022,
Bank transfers can have a field
bank_transfer_type - as of 2022, one of
dd, esr, es, wire.
status_history contains objects reporting a timestamp (UNIX timestamp in seconds), a coarse status, and a more granular status reason.
|The payment has been created but not yet sent to the payment provider|
|The request has been forwarded to the payment provider, we are waiting for an answer|
|The request has been forwarded to the payment provider but there's been an unexpected error, we don't know if the payment succeeded or not|
|The payment was successful|
|There was a clear error, the payment didn't go through|
|User interaction was needed to proceed, but it didn't come in a timely manner|
|The user explicitly aborted the payment|
|The payment request was technically correct, but the payment provider declined it (e.g. insufficient funds, fraud detection)|
|The payment has been completely refunded|
|The payment has been partially refunded|
|The payment has been reversed|
created holds the creation timestamp, which should be the same as the
last_status_timestamp can be used to see more conveniently the most current status.
Refunds and reversals
Refunds and reversals, if any, are saved in the objects refunds and
reversals. It is possible to have multiple refunds, for instance if some of them failed, or in case of partial reversals. The total sum of successful refunds cannot exceed the payment's original amount. There should never be more than one reversal per payment - however, due to the way they are imported such a case cannot be excluded. If there are refunds or reversals, flags
has_reversals are set to
true, to simplify search and consumption.
Each refund exposes fields
uuid, created, amount, last_status, last_status_reason, last_status_timestamp. Their meaning is the same as for payments.
Reversals have the same fields, plus
method_profile_uuid, needed because a reversal may be imported from external sources without a matching payment. Reversals are created during the reconciliation process based on information from banks, enabled only upon request and for specific setups.
If a payment source or a subscription were created together with the payment, fields
created_subscription_uuid will be filled. In the dual case, if the payment was created by charging a payment source or subscription, they will be reported in
charged_by indicates whether the charge was triggered through RaiseNow or a payment provider.
If the payment is linked to a stored supporter, its UUID is saved as
supporter_uuid. A copy of the supporter's data at the time of the payment is saved as a supporter snapshot in the object
supporter_snapshot. If the payment was done charging an existing subscription or payment source, the current supporter's data is also copied into the supporter snapshot.
Custom and RaiseNow parameters
custom_parameters holds key-value pairs that can be freely set by the customer. Values are limited to strings. They can be set when the payment is made or changed with the replace or patch custom parameters endpoints.
raisenow_parameters holds an arbitrary structure used by RaiseNow for internal purposes.
Custom parameters for payments done through the payments API are taken from the request. For payment source charges, they are taken first from the payment source's custom parameters, and can be overridden in the request. For subscription charges, they are taken first from the subscription's custom parameters, then from the payment source's.
Pricing data is saved in the
pricing object. This will report only the fees RaiseNow computes directly. For instance, Stripe fees would not be included, as they are computed and applied by Stripe.
metadata contains technical information about the parameters used to make the payment, responses from the payment provider, etc. It holds a sequence of objects with a compound key formed by
type, group, name, a string
value and a | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506028.36/warc/CC-MAIN-20230921141907-20230921171907-00561.warc.gz | CC-MAIN-2023-40 | 6,885 | 76 |
https://www.101phonerepairs.com/forum/welcome-to-the-forum/igor-pro-mac-download-crack-12-top | code | You don't have the account credential that is used on the Samsung phone and you want to bypass Samsung FRP. Go for the best Samsung FRP tool which is the Samsung FRP tool download.
We hope that this blog proved helpful and gave you the step by step instructions to bypass Samsung FRP lock with the best Samsung FRP bypass tool download. Let us know if this worked for you.
Best Samsung FRP solution is the Samsung FRP bypass tool download. It is the best option to restore your Samsung phone to its original factory condition. For any problem related to Samsung FRP and other stuff, you can email [email protected]
GigaOM Pro is a subscription-based service, but you can still enjoy it for free with a limited number of articles each month. We hope you enjoy the coverage. If you do, consider upgrading to the full version of the service for more in-depth and timely stories about the future of technology. If you're already a subscriber, we'd appreciate your support in continuing to let us publish.
The Licensee is not purchasing the source code or object code of the Software Product. The Licensee is purchasing a license to download, use, copy, or change the Software Product as set forth in this Agreement.
The Licensee acknowledges that the object code of the Software Product is a product that is protected by copyright laws and treaties, as well as laws and treaties related to other forms of intellectual property. The Licensee is not purchasing the right to use, copy, or change the Software Product, but is purchasing a license to download, use, copy, and change the Software Product as set forth in this Agreement.
The Sumita Arora python class 12 book is an ideal study guide for students who are preparing for the competitive exams. It includes concepts that are designed to help students build a strong foundation for the competitive exams and prepare them to build their careers as professionals. You can also use the Sumita Arora python class 12 book to prepare for competitive exams like Olympiad, GATE, and IIT-JEE. 827ec27edc | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511364.23/warc/CC-MAIN-20231004084230-20231004114230-00525.warc.gz | CC-MAIN-2023-40 | 2,045 | 7 |
https://backerjack.dreamhosters.com/tag/refold/ | code | The unnatural requirement for most office-bound workers to sit for eight hours a day is not only an exercise in boredom but unhealthy to boot. Not being active enough throughout the day because reports need to be typed up and emails need to be replied to is harmful to one’s health in a myriad of ways, something that has prompted many inventors and small companies to remedy the situation.
From wearables to standing desk, every team has a solution. The GAZELAB team’s? Its Gaze Desk, a connected standing desk with a few tricks up its sleeve. For one, both of its sections — front and back — can be independently controlled so that the monitor and keyboard are at different heights, something that helps avoid awkward and sometimes painful positioning of the elbow and neck. This all happens automatically after inputting height and weight into the Gaze Desk’s companion app so that it calculates the optimal height for itself. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816587.89/warc/CC-MAIN-20240413083102-20240413113102-00012.warc.gz | CC-MAIN-2024-18 | 939 | 2 |
https://forums.asp.net/t/969684.aspx?HELP+Please+ | code | Last post Mar 07, 2006 07:27 PM by ScottGu
Mar 07, 2006 09:38 AM|RDF|LINK
Ok, I've converted a project from 2003 to 2005 and I get 5 errors, all being the same but with a different name in the property area, ex:
ERROR: The member declaration for 'lblTitleName' was removed and its accessibility has been changed from 'public' to 'protected'. To access this member from another page you should create a public accessor property for it. controls\AM_Header.ascx.cs
So no big deal here, all I have to do is make public properties for this and change the code in the appropriate areas and I'll be fine. But my question is, how do I view the base class which 2005 created for all my pages? This article
http://webproject.scottgu.com/CSharp/UnderstandingCodeBehind/UnderstandingCodeBehind.aspx shows that there should be a 'PageName.aspx.designer.cs' file which is where that code is kept, but my VS 2005 doesn't show that. I just see a .aspx page and a .aspx.resx
Can anyone tell me how to get these files to appear on the Solution Explorer? I also don't have a 'Show All Files' on my Solution Explorer like it used to in 2003, any ideas?
Mar 07, 2006 07:27 PM|ScottGu|LINK
It sounds like above you are using a VS 2005 Web Site Project -- which is different from the VS 2005 Web Application Project option that I described in that link above. If you want to migrate using the VS 2005 Web Application Project option, I'd recommend
following this link:
http://webproject.scottgu.com/CSharp/Migration/Migration.aspx This will allow you access to the field members, and allow you to change their access from protected to public.
If you use a web-site project, the .designer.cs file is generated for you automatically. In this case you'll want to add a property accessor to your code-behind file to expose controls (note: I'd recommend this pattern anyway since it is cleaner -- but with
the web application project type you can still get away with just marking the field public).
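As an illustration of that accessor pattern (a sketch only; it assumes lblTitleName is a Label inside the AM_Header user control, so adjust the names and types to your own controls):

public string TitleName
{
    get { return lblTitleName.Text; }
    set { lblTitleName.Text = value; }
}

A page that contains the user control can then read or write Header1.TitleName instead of reaching into the field that is now protected.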
Hope this helps, | s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202711.3/warc/CC-MAIN-20190323020538-20190323042538-00493.warc.gz | CC-MAIN-2019-13 | 1,984 | 14 |
https://docs.wclovers.com/geolocation-frontend-search/ | code | As discussed, WCFM will allow the users to search nearby product and vendors, in this section, let’s see how the search filter works from an end user’s ( or customer’s) perspective.
VENDOR SEARCH :
The users can locate their nearby vendors using the search filter in the store-list page as shown below:
You can enter your address (or enable location tracking) to filter the stores available nearest to you. As an example, you can see the search result with the location feature enabled, as shown below:
Note: For further information on Store-list page setup, click here.
Users can also search the available products near their location on the shop page. Similar to searching for a vendor, the user can enable their location tracker to filter the products within the provided radius. Here again is a screenshot showing the product search.
Pic showing all 29 products with Geo-location disabled.
Turning ON the Geo-location will show the search result to 9 products as shown below: | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473360.9/warc/CC-MAIN-20240221002544-20240221032544-00849.warc.gz | CC-MAIN-2024-10 | 1,041 | 8 |
http://tunicatemeeting.info/22906-julien-mairal-thesis.html | code | Julien, mairal - Publications by Topic
A major research effort is currently taking place in order to develop methods that are simpler to use and which are capable of exploiting a large quantity of unlabelled data. Proximal Methods for Hierarchical Sparse Coding. In International Conference on Machine Learning, 2009. Journal of Machine Learning Research (JMLR). Natural Language Processing. Thesis page generated 22:09:55 CEST by jemdoc. How are you going to make use of this grant? Complexity Analysis of the Lasso Regularization Path.
Julien mairal thesis | s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314696.33/warc/CC-MAIN-20190819073232-20190819095232-00484.warc.gz | CC-MAIN-2019-35 | 559 | 3 |
https://vaadin.com/forum/thread/1476729/gwt-graphics-pie-chart | code |
Hello everyone! I have decided to use gwt-graphics to show a pie chart in my project. But I do not understand: is there a class or function in gwt-graphics.jar to draw a pie chart? Is it true that gwt-graphics allows painting only simple figures (circles, bars, gradients, etc.)? | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500042.8/warc/CC-MAIN-20230203024018-20230203054018-00229.warc.gz | CC-MAIN-2023-06 | 502 | 2 |
http://forums.slimdevices.com/showthread.php?15076-Playlist-Help&p=44265&mode=threaded | code | I am trying to find an easy way for my wife to configure playlists for our Squeezebox. She currently uses the browser interface and has all kind of issues with it. Ideally I would love to use iTunes, but a similar editor would work fine too. Here are our setup details:
Music library resides on my Macintosh. My wife accesses it through a laptop over a network. The Mac can create Squeezebox native playlists with the browser, or iTunes playlists that can also be accessed from the Squeezebox. The PC, however, cannot create iTunes playlists. I tried making my music library on the Mac a shared iTunes library. While this allows the PC iTunes to see all the music on the Mac server, the PC is not permitted to create a playlist with songs from the shared library. In a perfect world, it would be great if I could do something to get iTunes to permit playlist creation from a shared library, but I am assuming this will not be possible.
I suppose the next best thing would be to find an editor with the simplicity of itunes that works on a networked library (and ideally stores files in Squeezebox readable format). Any ideas? Even better would be if there was a way to download the song list to the laptop, and then create and modify a playlist off line from the server (and without all the songs filling the harddrive), and finally upload the list back to the server at a later date. Thoughts on that?
Thread: Playlist Help | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702525329/warc/CC-MAIN-20130516110845-00070-ip-10-60-113-184.ec2.internal.warc.gz | CC-MAIN-2013-20 | 1,465 | 8 |
https://code.videolan.org/videolan/libdvdnav/-/tags | code | libdvdnav 6.1.1 Very small release fixing build and CI issues
libdvdnav 6.1.0 This is a major release fixing bugs for some specific DVDs, some crashes and introducing a new API to improve the logging of the library.
libdvdnav 6.0.1 This release improves random playback mode to work around broken discs, fixes a divide-by-zero issue in tmap search, and fixes a compilation issue on OS/2.
libdvdnav 6.0.0 This release fixes long-standing bugs in the VM and a few crashes and compilation issues.
libdvdnav 5.0.3 This minor release fixes a regression when reading a DVD label, and adds dvdnav_open_stream to read from virtual devices with read/seek callbacks
libbdvdnav 5.0.2 This release is another minor release of libdvdnav, fixing important bugs present in 5.0.1, and reported often. The important fixes include 2 crashes and wrong asserts, notably around dvdnav_get_position()
libdvdnav 5.0.1 This release is a minor release of libdvdnav, fixing important bugs present in 5.0.0. The important fixes include double-free in dvdnav_free_dup, integer overflow, data race condition and improved compatibility with some DVDs.
libdvdnav 5.0.0 This release is a major version of libdvdnav. This is a very important release focused on cleaning the code, the buildsystem, and fixing potential crashes and security issues. It also adds support for Android and OS/2. It should stay API and ABI compatible with older releases. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510358.68/warc/CC-MAIN-20230928031105-20230928061105-00751.warc.gz | CC-MAIN-2023-40 | 1,408 | 8 |
https://plugins.unforget.rs/how-to-galdget/how-to-set-up-multiple-galleries-in-one-post-or-page/ | code | If you want to display several different galleries on one post or page of your blog, or even in a number of separate widgets, it is easily achieved with [Galdget] (or [Galdget plus+]).
Since Galdget comes in two flavors, you have two options:
- Option 1: Fancy, with nice frame, premium (need to buy a plugin)
- Option 2: Free but cool, without a frame but with lots of other features
Option 1: Fancy, with nice frames, premium (need to buy a plugin)
Something like the Smartberry in the left sidebar on this page, maybe with some other “frame”. Frames, of course, can be changed:
You go and buy [Galdget plus+] plugin, and install it as described in the 5 minute setup. You may be happy with only steps 1 and 2 of 5 minute setup, since the rest is covered in more detail down this page.
Option 2: Free but cool, without frames but with lots of other features
Something like the above, but without frame.
Install free [Galdget] plugin, it also has its 5 minute setup. You may be happy with only step 1 of 5 minute setup, since the rest is covered in more detail down this page.
The rest of the setup (for both fancy and free)
In the text above there are three Galdget plus+ and three Galdget galleries which, since this is a post, are all on the same post. In the widget area to the left of this text are three widgets which, again, are on the same post. Now let’s see how this is done. We will have to set up:
1. Multiple groups in the URL list
Basic URL list setup is described in another how-to, here. The only addition here is that we have to use groups. If we want to display three different galleries, we should have three distinct group numbers. It can be 1,2,3 or 10,11,12 or any three numbers of your choice.
It is a good practice to put the images of one gallery in a separate folder, somewhere on your web site, and then refer to the gallery with the
dir://folder/subfolder in a single line in the URL list setup, instead of listing each image separately.
You may assign one item in the URL list (a dir, a single image, or a web page) to more than one group; just put more group numbers separated with commas. If you want it to show up in all groups – just leave it in group 0.
Finally, you end up with the list somewhat like this one:
Don’t forget to save it before you go on to step 2 or 3.
2. Multiple shortcodes in the post/page
Shortcodes are described here (or here if you use the free version). It is simple: if you want more galleries on your post or page, you put more shortcodes. Of course, you want your galleries to display different content, so you use a different group for each one.
Here is a simple sample (semple sumple…) shortcode, with the group identifier shown in red:
[galdget_pp width=350 height=350 group=2 animation=cw]
If you want to have more control of the layout of galleries, you may group them into various
div tags or, maybe even better, into a
table. The ones displayed above use the following code:
<table>
  <tr>
    <td style="vertical-align:middle">
      [galdget_pp width=180 height=267 align=center group=1 random=no buttons=nonstop images=fill animation=rnd frame=frames/frame-1210491_640_p.txt ]
    </td>
    <td style="vertical-align:middle">
      [galdget_pp width=267 height=180 align=center group=2 random=no buttons=nonstop images=fill animation=rnd frame=extraframes/album-1448918_640.txt ]
    </td>
  </tr>
  <tr>
    <td colspan=2>
      [galdget_pp width=600 height=350 align=center group=3 animation=ocw buttons=nonstop frame=frames/smartphone-157082_l.txt ]
    </td>
  </tr>
</table>
This, of course, works in the “Text” mode in post/page editor. Don’t be afraid to use it, it may be less ‘visual’ than “Visual” mode, but you have more control of your content here.
3. Multiple widgets
Basic [Galdget] or [Galdget plus+] widget setup is described in more detail here. When setting up more than one, you simply repeat the procedure more than once.
You should set up a correct group that each of the widgets will display, and it would be nice to set different titles. | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648695.4/warc/CC-MAIN-20230602140602-20230602170602-00096.warc.gz | CC-MAIN-2023-23 | 3,996 | 31 |
http://scrapgirls.net/forum/topic/56875-new-to-forums/ | code | New To Forums
Posted 01 August 2012 - 01:48 PM
My name's Vicki and I really have never been in any forums before. I've been getting the newsletters for years and buying stuff from the Boutique. I was really surprised today when I opened my email (usually only look at it about once a week; I work on computers all day long at work; at home sick today) and got a Birthday Wish from AggieB. I tried to reply, and hope it worked. Thought I'd try this also. I have 3 grown sons, two granddaughters, and a grandson due in a few weeks.
Have a great day!
Posted 01 August 2012 - 05:39 PM
Posted 01 August 2012 - 06:58 PM
Posted 01 August 2012 - 08:06 PM
Posted 01 August 2012 - 08:22 PM
Posted 01 August 2012 - 11:25 PM
Posted 01 August 2012 - 11:26 PM
Posted 02 August 2012 - 06:20 AM
Posted 02 August 2012 - 08:05 AM
Posted 02 August 2012 - 10:59 PM
The Adagio digital scrapbooking kit will spark your creativity in creating layouts for travel, performances, music, romance,
special occasions like dances, and heritage layouts. (Images are clickable)
Posted 03 August 2012 - 11:40 AM
Posted 13 August 2012 - 07:33 PM
I was a lurker for a year or so, too, and then I started to figure out how to actually *use* those freebies I'd been piling up. The fun had only begun!
Sooooo... welcome back! Look forward to seeing you around!
Posted 15 August 2012 - 09:11 AM
"Life is not the days that go by, but the days you remember"
| s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121000.17/warc/CC-MAIN-20170423031201-00111-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 1,488 | 23 |
https://rd-alliance.org/user/7744 | code | Dr Adam Kriesberg
I am a Postdoctoral Scholar at the University of Maryland College of Information Studies. My current research is in the area of agricultural data curation as part of a cooperative agreement with the National Agricultural Library. I completed my PhD at the University of Michigan in 2015. My dissertation, “The Changing Landscape of Digital Access: Public-Private Partnerships in US State and Territorial Archives,” used a mixed methods design to examine the digitization partnerships between government archives and private companies. | s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525374.43/warc/CC-MAIN-20190717181736-20190717203736-00509.warc.gz | CC-MAIN-2019-30 | 555 | 2 |
http://emily-maxwell.com/blog/a-new-website | code | A New Website
Anyone who is used to the usual layout of this site might be surprised to find it has completely changed! Well, it's changed in appearance, at least. In terms of where you find things, general layout has remained pretty much the same.
A few months ago, I took it upon myself to update my old website from PHP to React. I created a big, complex, two-site system with a front-end in React and a back-end in Node. I made APIs to get my blog posts, did all that stuff. Then I realized, it's still expensive to run. In fact, it's more expensive now, because I'm running on a new server and my database is on my old server, and I really didn't feel like migrating it.
So I re-rebuilt my website. Now it's built in Next.js, and it's all static pages other than lazy-loading on scroll. It's way lighter, and it runs in a custom container with a fraction of the resources my React site needed. We also have the advantage of post stubs, so you can link directly to a specific post.
~~If you're ever unable to find something you're looking for, I've kept the old website up on https://old.emily-maxwell.com for now. This will remain until I have confirmed 100% parity, and then I will be tearing down the machine that hosts it and saving the money.~~
In other news, it's spring! I'll be getting out there and exploring a bit before the rainy season comes... I guess that's when I'll start a new project. Who knows?
Last updated on . | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510888.64/warc/CC-MAIN-20231001105617-20231001135617-00587.warc.gz | CC-MAIN-2023-40 | 1,427 | 7 |
https://dev.bukkit.org/projects/music/files/2497773 | code | Uploaded Nov 6, 2017
Supported Bukkit Versions
- CB 1.7.9-R0.2
- CB 1.7.9-R0.1
Added station ownership verification
Fixed naming of arguments
General code cleanup
Fixed NPE when clicking outside of inventory
Cleaned up code.
Fixed some NPEs
Starting to move to a Jukebox-only sound system.
Updated sound files to support 1.12
Updated code for the new BukkitDev site
Fixed config bug
Removed the /loop command (only use /music now)
Renamed a bunch of commands
All "streams" are now "stations"
You are now able to add multiple songs to a station
Fixed the /music get message
Fixed help message
Minor bug fixes.
Added new updater
Made changes so that the music loader runs faster and uses actual time (not the time*4)
Fixed some text and links for files.
Fixed minor errors.
Added README.txt generator
Made autogenerator for text files
Possibly fixed auto updater bug.
Added /loop createSong (Displayname) (Songname) (time)
With this, you can easily create a song right from your server. Note: Must be op in order to use
Bug fixes and code clean-up.
Fixed bug: Song does not play while using the "playonce" feature.
Fixed bug: Creating a file called "ResoucePack" instead of "ResourcePack".
Added resourcepack join messages. Will add the ability to disable soon.
Fixed some text files. Fixed readme.txt
Added the ability to auto generated files. This means the download size will be smaller.
Added Larger radius for sounds
Added a READ ME.txt
Bugfixes Fixed Timers
Added the Music.class
Added three new songs
Removed older txt files
Added a menu to select songs
Added /loop menu play (stream)
Added /loop menu playonce
Fixed minor bugs
Fixed ColorCodes for messages (no more boring white text when an error comes up)
Added pitfall when players don't use the command the right way
Cleaned up code
Added /loop playOnce to play a song once
Fixed issues with using /loop with no args | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649105.40/warc/CC-MAIN-20230603032950-20230603062950-00302.warc.gz | CC-MAIN-2023-23 | 1,875 | 51 |
https://www.gianamar.com/ | code | In 2023, I completed my PhD, titled 'Globalising China: Jesuits, Eurasian Exchanges, and the Early Modern Sciences', in the Department of History and Philosophy of Science at the University of Cambridge. The dissertation reveals how the Manchu conquest of China in 1644 transformed the sciences across Europe. It reorients common accounts of the history of science by showing that several scientific debates typically deemed 'European' originated in China, emerging through local peoples’ interactions with Jesuit missionaries. Focusing on the Jesuit Martino Martini’s writings, my PhD explains how Chinese cultures of knowledge became valuable intellectual and political resources in early modern Europe.
I have held several visiting fellowships at top international research institutions. In Autumn 2021, I was a Visiting Predoctoral Fellow in Department III at the Max Planck Institute for the History of Science in Berlin, where I led the project 'Of Soils and Stars: Jesuit Perceptions of Chinese Agricultural Practices through Calendrical Construction'. The project examined how Jesuit missionaries made sense of the historical connections between agriculture and astronomy in late Ming and early Qing China. In Spring 2022, I was a Junior Fellow at the Descartes Centre for the History and Philosophy of the Science and the Humanities at Universiteit Utrecht, where I studied early modern Dutch representations of southern Africa and its inhabitants. In June 2022, I won a Lisa Jardine Grant Award to study the reception of Chinese astronomy at the Royal Society in London. I was a Freer Prize Fellow of the Royal Institution for the academic year 2022-23.
I am passionate about globalising research and pedagogy in the history of science and provincialising European contributions to 'science' and 'modernity'. My research interests include cultural and intellectual histories of intercultural encounters, the Jesuit China mission, history of scholarship, histories of race, science and empire studies, and the sociology of scientific knowledge.
In 2023 I was shortlisted as a BBC AHRC New Generation Thinker, and was elected an Associate Fellow of the Royal Historical Society.
The Imperial Astronomical Bureau of Beijing, taken from the French Jesuit Louis Le Comte's (1655-1728) Memoirs and observations topographical, physical, mathematical, mechanical, natural, civil, and ecclesiastical (London: 1698).
I can be contacted by email at gg410[at]cam.ac.uk (replace [at] with @) | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818732.46/warc/CC-MAIN-20240423162023-20240423192023-00701.warc.gz | CC-MAIN-2024-18 | 2,492 | 6 |
http://buzzlefeed.com/list/celebrities/page/2/ | code | I have actually been browsing around for a couple of hours to discover this fantastic article for you ..
Steven Seagal These celebrities were once at the top of their game, starring in hit movies and TV shows, ..
On this day in history, some huge events went down. Check the list to see why today isn't just like any other ..
Olivia Newton John and daughter Chloe Lattanzi
A good GIF is worth its weight in gold? Wait, how much does it weigh? Ok, so maybe that's not the best ..
In my humble opinion (who am I? Angela Chase?), 2013 was a great year for film and it saw some amazing ..
When you live in the limelight it's easy to get caught off your game. Unfortunately for Britney, Beyonce, ..
Pornstars are people too! Here are the top 20 best pornstar instagram accounts. There's no nudity, just ..
Nothing identifies us or makes us more inspired than good music. The bands and their songs that shaped .. | s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423992.48/warc/CC-MAIN-20170722102800-20170722122800-00239.warc.gz | CC-MAIN-2017-30 | 899 | 9 |
https://lists.fedorahosted.org/archives/list/[email protected]/2015/10/?page=6 | code | On 10/25/2015 10:27 AM, Matthew Saltzman wrote:
> On Sat, 2015-10-24 at 19:30 -0700, Joseph Loo wrote:
>> On 10/24/2015 02:54 PM, Matthew Saltzman wrote:
>>> My issue is the reverse of the thread about openweathermap.org.
>>> Openweathermap.org is working fine for me in the OpenWeather GNOME
>>> plugin, but if I switch to forecast.io, nothing loads. This used to
>>> work fine a couple of months ago, but now it fails on multiple
>>> machines. I have API keys for both sources and I can log into both
>>> a browser.
>>> I find the forecast.io reports more accurate than
>>> so I'd prefer to switch, if I could get it working again.
>>> Any suggestions or related experiences?
>> Did you load the api key using the xxx.io registered key?
> My API key appears in the OpenWeather settings window under Weather
> provider. Is that what you're asking, or is there something else I need
> to do? I can use that key to log into forecast.io's Web site.
I am not sure, but remember there are 2 keys, one for .org and the
other for .io. Just a guess, since you are using the .io you might need
to get the api key from .io and put it in to make it work.
On 10/25/2015 01:29 PM, Matthew Saltzman wrote:
> On Sun, 2015-10-25 at 03:06 +0000, Andre Robatino wrote:
>> Matthew Saltzman <mjs <at> clemson.edu> writes:
>>> My issue is the reverse of the thread about
>>> openweathermap.org. Openweathermap.org is working fine for me
>>> in the OpenWeather GNOME plugin, but if I switch to
>>> forecast.io, nothing loads. This used to work fine a couple of
>>> months ago, but now it fails on multiple machines. I have API
>>> keys for both sources and I can log into both from a browser.
>>> I find the forecast.io reports more accurate than
>>> openweathermap.org, so I'd prefer to switch, if I could get it
>>> working again.
>> Are you on F21? Recently forecast.io changed to require HTTPS,
>> and F22 and F23 have stable update fixes, but the F21 version
>> is still in testing (but just reached 7 days in testing so
>> probably will be pushed to stable soon, even if no one tests
> Sorry, should have mentioned, I'm on F22. OpenWeather version is
> 39, installed via the GNOME Plugins Web site and TweakTool.
Well, I'm on F22 and
(the current stable version) is working fine for me using either
forecast.io or openweathermap.org. I don't know whether what you're
using has the fix, but probably not.
P.S. The F21 version was just submitted for stable and should be in
the next F21 push. I received your message as an email but it hasn't
appeared in the users list, even though you CC'd it to that, and I
don't know if this one will either.
I'm a novice at assessing the question of multi-boot vs using
I've seen a lot of discussion on this topic on this email list; some of it
expressed with frustration ("I can't upgrade my Fedora installation
because of my multi-boot setup ..."; or "why use a multi-boot system; use
So my general interest is to continue using Fedora as my primary OS; but
enough times, I need to use the MS Office suite.
I have a laptop (Dell Latitude E6220 with MS Windows 7 factory-installed;
750GB HD, 8GB RAM). I've begun a few multi-boot installations using
Fedora, to see what some of direct challenges are (the first
steps to shrink an existing partition, in order to create room for a
Fedora installation). But have never completed an installation.
My issue is the reverse of the thread about openweathermap.org.
Openweathermap.org is working fine for me in the OpenWeather GNOME
plugin, but if I switch to forecast.io, nothing loads. This used to
work fine a couple of months ago, but now it fails on multiple
machines. I have API keys for both sources and I can log into both from
I find the forecast.io reports more accurate than openweathermap.org,
so I'd prefer to switch, if I could get it working again.
Any suggestions or related experiences?
Clemson University Math Sciences
mjs AT clemson DOT edu
Over the last few days I've become increasingly frustrated with the
touchpad and keyboard on my Dell Laptop. Often, the <right-click> menu
would show up for whatever application that I was using in that moment.
And with a little more time, the <Alt-Tab> function would fail. Lastly,
the the click function on the mouse/touchpad was somehow disabled, so that
selecting something from a menu was not possible.
My general solution was to login from another machine via ssh, su to root,
and then poweroff/reboot the machine.
I also dis/re-assembled the machine and cleaning it, on the speculation
that there could be something that was causing a short somewhere.
As a troubleshooting possiblity, on bootup today I used the prior kernel
(kernel-4.1.7-100.fc21.x86_64). So far, no mouse/touchpad/keyboard
Any recommendations on troubleshooting would be appreciated in the event
this isn't a buggy kernel issue.
Is there a way to control gnome-alsamixer, so I can toggle the mute on
and off for microphone *monitoring*? (That's hearing it back through
the soundcard, using the hardware mixer.)
Or another controller for the mixer? No, the CLI alsamixer doesn't give
control for that parameter. Well, not that I can see.
I want to hear sidetone when doing VOIP while wearing headphones. And I
don't want to have to open the mixer panel every time I start or stop a
Leaving the microphone monitoring on all the time isn't an option. It
adds unwanted noise, and is a potential audio feedback source if not
[tim@localhost ~]$ uname -rsvp
Linux 3.9.10-100.fc17.x86_64 #1 SMP Sun Jul 14 01:31:27 UTC 2013 x86_64
All mail to my mailbox is automatically deleted, there is no point
trying to privately email me, I will only read messages posted to the
Long ago I gave up on using Windows (TM) [Tantrum Machine], and I've
never regretted it.
Cutting or copying in my main desktop and pasting into
a virtual machine in virt viewer has always "just worked"
as long as I can remember. Today it isn't working.
I think saw a lot of qemu and libvirt updates come through
recently. Anyone know if something broke this? Or is it only
broken for me?
The libsndfile-1.0.25-14.fc22 spec removes the GSM610 code and then
does an autoreconf.
What is the GSM610 code and why is it removed?
What is the autoreconf step supposed to accomplish? Build seems fine on
CentOS 6 without it.
mount -t life -o ro /dev/dna /genetic/research
Where does PackageKit, which is (I assume) invoked by the KDE apper
program, keep its log? The log can be viewed in apper, but I would
like to have it in text form, so I can edit and search it. The
Fedora "User Guide - Managing Software" claims "Every completed
transaction records the affected packages in the log file /var/log/dnf.
log", but I can't find any records of updated packages there.
Thanks - jon | s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302740.94/warc/CC-MAIN-20220121071203-20220121101203-00119.warc.gz | CC-MAIN-2022-05 | 6,843 | 118 |
https://devops-jobs.net/job/3168-devops-release-reliability-engineer/ | code | DevOps / Release / Reliability Engineer
NYC / Remote
Exactly 45 billion dollars is locked up in cash security deposits. We prove that there’s a better way. Rhino is bringing financial flexibility to renters everywhere.
We give renters a choice in a transaction that never offered choice before. We replace upfront security deposits with affordable insurance that saves renters hundreds and thousands of dollars. To date, we have saved renters over $150 million and are trusted in over 1 million homes across all 50 states.
Founded in 2017 and VC backed, Rhino's mission is to utilize technology to offer products and services that break down financial barriers and offer a win-win solution for both renters and property owners/managers alike.
As a Senior Full Stack Engineer at Rhino, you’ll join a formidable, passionate tech team, helping to build the foundation of a company that is positioned to revolutionize renting.
In this role you will:
- Build and improve tools and workflows to increase development velocity, such as Continuous Deployment
- Improve monitoring and alerting systems
- Maintain our excellent track record of uptime and site reliability as we rapidly scale the platform
- Leverage our existing infrastructure in order to make rapid, iterative improvements
- Work with our Internal Tools, Data, Product, and QA teams to optimize feedback loops and decision making
- Provide feedback on features being developed across the company and suggest solutions
- Participate in code reviews, standups, and planning sessions, while listening to feedback and commenting on others’ approaches
We’re ideally seeking:
- 5+ years of experience
- Experience with--and preference for--PaaS providers like Heroku
- A passion for Developer Experience and Continuous Delivery
- Backend and API development experience
- A kind, friendly, and empathetic person
- An Engineer that is excited to quickly ship features in a collaborative, rigorous, and fast-paced environment
- Competitive compensation and 401k
- Unlimited PTO to give our employees a little extra R&R when they need it
- Stock option plan to give our employees a direct stake in Rhino’s success
- Comprehensive health coverage (medical, dental, vision)
- Remote Work Program to allow for flexibility between home and the office
- Generous Parental Leave to create a family-friendly culture
- Wellness Perks (Gym, Classpass, & Citibike Memberships)
- Commuter Benefits through a Flexible Spending Account
- Fintech Equality Coalition Founding member | s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141743438.76/warc/CC-MAIN-20201204193220-20201204223220-00712.warc.gz | CC-MAIN-2020-50 | 2,524 | 30 |
https://www.buymeacoffee.com/jxeeno | code | Hey 👋 I'm Ken.
I built and maintain COVID-19 Near Me, a COVID-19 exposure site map and checklist app for Australia.
I also maintain a lot of the data feeds and scrapers which pull Australia's vaccination data. These feeds are what drives most publication's statistics on Australia's vaccination effort.
If you like the work I'm doing, you can now buy me a coffee!
You can also tip me via beem it or PayID:
• beem it: @jxeeno
• PayID: [email protected] | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104668059.88/warc/CC-MAIN-20220706060502-20220706090502-00245.warc.gz | CC-MAIN-2022-27 | 458 | 7 |
https://www.hackster.io/pooja_baraskar/burglar-alarm-with-pir-motion-sensor-964c42 | code | A friend asked me about PIR Motion Sensors and how they work. Recently I got one for one of my projects. As soon as I got it, the first thing I thought of was to write an article about it. Working with sensors is always amazing. I am always excited when I run my first sketch with a new type of sensor, and they never fail to impress me. This sensor is one of them; it is so small yet powerful. When somebody walks into my room it notifies me, isn't that cool? For a demonstration I made a Burglar Alarm System with this. Its detecting range is up to 6 meters. It plays a buzzer and lights an LED when it detects an intrusion in my room. Let us understand PIR Motion Sensors, how they work, and how to create some cool stuff like this.
“A passive infrared sensor (PIR sensor) is an electronic sensor that measures infrared (IR) light radiating from objects in its field of view.” It is also referred to as a PIR, "Passive Infrared", “PID”, "Pyroelectric", or "IR motion" sensors. These are small and inexpensive and operates on low power. PIR sensors are most commonly used to detect motion like if someone moves in or out within the Sensor’s range.
Every object, whether human, animal or anything else, emits some radiation. When something passes within the sensor's range, the temperature at that point in the sensor's field of view rises from room temperature to body temperature and then back again, and the sensor detects that change in infrared radiation. PIR sensors are basically made of a pyroelectric sensor that can detect levels of infrared radiation. "Pyroelectricity" means heat that generates electricity; hence these sensors convert changes in infrared radiation into a change in output voltage. These sensors usually have a small plastic covering that is actually a lens that increases the sensing range. The plastic lens may have multiple facets to focus the infrared energy onto the sensor; each individual facet is a Fresnel lens. PIRs do not emit any infrared and, unlike active sensors, they do not send anything out, hence they are called passive sensors.
This image will help you to understand how a PIR sensor works.
For this project we will make a Burglar Alarm System. It will detect intrusion or any fire outbreak in your home. Let us see how it works in action.
Arduino Compatible board with power supply
In this project I will use Grove's PIR Motion Sensor and will show how it works with the Intel Galileo Gen 1. You can reproduce the same example with a Galileo Gen 2 or with Arduino boards. This sensor has a Grove-compatible interface; you just need to connect it through the Base Shield and start programming for it. If you are unfamiliar with Grove sensors, then please go through my article Grove Starter Kit With Intel Galileo Gen 2: Getting Started. This sensor has a maximum detection range of 6 meters, but by default it is 3 meters; you can increase or decrease it by attaching a potentiometer to the circuit.
Other Specifications are:
- Grove compatible interface
- Voltage range: 3V–5V
- 2.0cm x 4.0cm twig module
- Detecting angle: 120 degree
- Detecting distance: max 6m (3m by default)
- Adjustable detecting distance and holding time
Since we are using Grove’s Sensors we don’t need to worry much about the connections or polarity. Here I have placed the Base Shield on my Galileo Gen 1 and connected the PIR Sensor to pin D2 and I also hooked up an LED to pin D4 and a Buzzer to pin D3 so when motion is detected it will alert us. | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647614.56/warc/CC-MAIN-20230601042457-20230601072457-00798.warc.gz | CC-MAIN-2023-23 | 3,478 | 37 |
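A minimal sketch for that wiring (PIR signal on D2, buzzer on D3, LED on D4) could look like the following. This is an illustrative example rather than the project's exact code; it simply mirrors the sensor's digital output, since range and holding time are set on the module itself.
    const int pirPin = 2;      // Grove PIR motion sensor signal (D2)
    const int buzzerPin = 3;   // buzzer (D3)
    const int ledPin = 4;      // LED (D4)

    void setup() {
      pinMode(pirPin, INPUT);
      pinMode(buzzerPin, OUTPUT);
      pinMode(ledPin, OUTPUT);
      Serial.begin(9600);
    }

    void loop() {
      if (digitalRead(pirPin) == HIGH) {   // motion detected
        digitalWrite(ledPin, HIGH);
        digitalWrite(buzzerPin, HIGH);
        Serial.println("Intrusion detected!");
      } else {
        digitalWrite(ledPin, LOW);
        digitalWrite(buzzerPin, LOW);
      }
      delay(100);                          // small polling delay
    }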
https://mailman.videolan.org/pipermail/vlc-devel/2014-November/100294.html | code | [vlc-devel] How to load VLC quickly
Juliano Niero Moreno
julianomoreno at gmail.com
Wed Nov 12 00:27:14 CET 2014
My name is Juliano.
I'm a beginner user on VLC.
I'm developing a Java application that executes external applications.
The first time after the OS boots that my application launches VLC, there is a delay until VLC loads completely. After this first time, VLC loads quickly.
I would like to know how to make VLC load faster.
Currently I launch VLC using this command: "vlc --play-and-exit video.avi"
Thanks in advance.
| s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413406.70/warc/CC-MAIN-20200531120339-20200531150339-00317.warc.gz | CC-MAIN-2020-24 | 652 | 16
http://techbirds.in/page/60/ | code | Developers who are new to Xcode 5 and iOS 7 are facing this situation, in which a view or table view ends up hidden under the navigation bar in iOS 7 but not in iOS 6, so here is a job to do that makes the view position the same in iOS 6 and 7. 1. Open the storyboard scene in Xcode 5. 2.
| s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945942.19/warc/CC-MAIN-20180423110009-20180423130009-00292.warc.gz | CC-MAIN-2018-17 | 300 | 2
https://docs.aws.amazon.com/awscloudtrail/latest/processinglib/com/amazonaws/services/cloudtrail/processinglibrary/serializer/SourceSerializerChain.html | code | public class SourceSerializerChain extends java.lang.Object implements SourceSerializer
A SourceSerializer implementation that chains together multiple source serializers. When a caller passes a Message to this serializer, it calls all the serializers in the chain, in the original order specified, until one can parse and return a CloudTrailSource. If all source serializers in the chain are called and none can successfully parse the message, then this class throws an IOException indicating that no sources are available.
This class remembers the first source serializer in the chain that can successfully parse messages, and will continue to use that serializer when there are future messages.
Constructor:
public SourceSerializerChain(java.util.List<? extends SourceSerializer> sourceSerializers)
Constructs a new SourceSerializerChain with the specified source serializers. See SourceSerializerFactory.createSourceSerializerChain() for default construction. When a source is required from this serializer, it will call each of these source serializers in the same order specified here until one of them returns a CloudTrailSource.
Parameters: sourceSerializers - a list of at least one SourceSerializer.
Method (gets CloudTrail log file information by parsing a single SQS message):
public CloudTrailSource getSource(Message sqsMessage) throws java.io.IOException | s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141169606.2/warc/CC-MAIN-20201124000351-20201124030351-00410.warc.gz | CC-MAIN-2020-50 | 1,359 | 17 |
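As a rough usage sketch of the class documented above (an editor's illustration, not taken from the official Javadoc: only the constructor and getSource signatures come from this page, and the two serializer variables are hypothetical):
    import java.util.Arrays;
    import com.amazonaws.services.cloudtrail.processinglibrary.serializer.SourceSerializer;
    import com.amazonaws.services.cloudtrail.processinglibrary.serializer.SourceSerializerChain;
    // (imports for CloudTrailSource and the SQS Message type omitted)

    // customSerializer and defaultSerializer are hypothetical SourceSerializer implementations.
    SourceSerializer chain = new SourceSerializerChain(
            Arrays.asList(customSerializer, defaultSerializer));

    // Tries each serializer in order; throws IOException if none can parse the message.
    CloudTrailSource source = chain.getSource(sqsMessage);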
https://engineering.purdue.edu/paramnt/FAQ/SEC_Running_OpenMP_Programs/Q001.html | code | Of course there are many reasons that a program may crash, but a common reason is that the stack size is too small. If your program crashes as soon as you begin to execute it (I mean instantly!), this may be the problem. Try increasing the stacksize by typing:
>> limit stacksize n
where n is the size you'd like (in kbytes). You can see what the current size is by typing "limit" by itself:
peta.ecn.purdue.edu 52: limit
cputime         unlimited
filesize        unlimited
datasize        2097148 kbytes
stacksize       8192 kbytes
coredumpsize    unlimited
vmemoryuse      unlimited
descriptors     64
Sometimes the backend compiler will warn you if it thinks that the stacksize is very large, other times it won't. It usually is a good idea to try increasing the stacksize before spending hours trying to find the "bug" in your program. | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710974.36/warc/CC-MAIN-20221204140455-20221204170455-00515.warc.gz | CC-MAIN-2022-49 | 799 | 5 |
http://nanohub.org/tags/tutorial?sort=date&limit=20&limitstart=180 | code | NEMO 1-D: The First NEGF-based TCAD Tool and Network for Computational Nanotechnology
28 Dec 2004 | | Contributor(s):: Gerhard Klimeck
Nanotechnology has received a lot of public attention since U.S. President Clinton announced the U.S.National Nanotechnology Initiative. New approaches to applications in electronics, materials,medicine, biology and a variety of other areas will be developed in this new multi-disciplinary...
Scientific Computing with Python
24 Oct 2004 | | Contributor(s):: Eric Jones, Travis Oliphant
INSTRUCTORS: Eric Jones and Travis Oliphant.Sunday, October 24, 9:00 a.m. - 5:00 p.m.Room 322, Stewart CenterPython has emerged as an excellent choice for scientific computing because of its simple syntax, ease of use, and elegant multi-dimensional array arithmetic. Its interpreted evaluation...
Nanotechnology 101 Lecture Series
25 Aug 2004 |
Welcome to Nanotechnology 101, a series of lectures designed to provide an undergraduate-level introduction to nanotechnology. In contrast, the Nanotechnology 501 series offers lectures for the graduate-level and professional audiences. | s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267866965.84/warc/CC-MAIN-20180624141349-20180624161349-00524.warc.gz | CC-MAIN-2018-26 | 1,278 | 14 |
http://www.knittingdaily.com/forums/t/18438.aspx | code | I am knitting your simple prayer shawl, but it seems as if all the directions are not there. Do you begin decreasing - when and how? If I just keep repeating the same 2 rows, it will just keep getting bigger and bigger, with no other point. HELP!!!
The shawl is a triangle - so you start at the point and increase until it's 36" or your desired size. I think you might be thinking that it's a square, and that's why you're asking about another point?
Hope this helps. | s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427132827069.83/warc/CC-MAIN-20150323174707-00095-ip-10-168-14-71.ec2.internal.warc.gz | CC-MAIN-2015-14 | 461 | 3 |
https://www.thingiverse.com/thing:3421202 | code | Niskin3D is a low-cost, open-source water sampler made from 3D-printed parts and controlled by a waterproof servo.
The Niskin bottle, a seemingly simple tube designed to take water samples at discrete depths, is one of the most important tools of oceanography. Coupled with a CTD, an array of Niskin bottles fit into a rosette provides everything an oceanographer needs to profile the ocean. Niskin bottles are neither cheap nor particularly easy to use. A commercial rosette requires a winch to launch and recover, requiring both a vessel and a crew to deploy. For informal, unaffiliated, or unfunded researchers, as well as citizen scientists or any researcher working on a tight budget, getting high-quality, discrete water samples is an ongoing challenge.
The Niskin3D lowers the cost of discrete-depth water sampling and makes this common tool of oceanographic research available to anyone.
Read the full build instructions, access the bill of materials, and download additional source code in the Niskin3D Github Repository.
To support more weird and wonderful projects like this, please consider contributing to my Patreon. | s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525483.64/warc/CC-MAIN-20190718022001-20190718044001-00047.warc.gz | CC-MAIN-2019-30 | 1,128 | 5 |
https://finhealthnetwork.org/event-session/pushing-progress-for-good-the-future-of-the-un-scaled-economy/ | code | As a result, we now have challenging choices to make about how to innovate and how to ensure that the algorithms we employ uphold our values as a society. In conversation, CFSI CEO Jennifer Tescher and Hemant explore critical themes from his book “Unscaled: How AI and a New Generation of Upstarts Are Creating the Economy of the Future”, including:
- How the forces of unscale are creating new opportunities across industries
- Why the era of “Move Fast and Break Things” is over
- How AI can be used as a force of innovation — and good — in Financial Services, specifically for Main Street, USA, and the un- and underbanked. | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104542759.82/warc/CC-MAIN-20220705083545-20220705113545-00115.warc.gz | CC-MAIN-2022-27 | 641 | 4 |
https://anime.stackexchange.com/questions/41731/ln-alicization-no-appearance-of-murderer-assassin-tribe-in-battle | code | The Murderer's Tribe doesn't appear in the battle / make a contribution. They are non-existent in the diagrams displaying how both sides are positioned as well.
Is this something the author missed or did I miss something along the way?
It seems strange as each tribe is meant to contribute to the war effort in some way.
I've finished LN Vol. 17 and just noticed it. Judging from the plot, I don't find it likely they'll have a chance to appear later on either | s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154356.39/warc/CC-MAIN-20210802172339-20210802202339-00334.warc.gz | CC-MAIN-2021-31 | 460 | 4 |
https://oz-craft.pickardayune.com/man/components.1/ | code | components - view installed components
components displays a list of all components installed in the computer, in the form TYPE ADDRESS. If type is specified, components will only display components of type type.
components
  Display all installed components.
components filesystem
  Display all installed filesystem components.
components is copyright (c) 2014 Sangar as part of OpenOS.
component(2), computer(2), lshw(1) | s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704798089.76/warc/CC-MAIN-20210126042704-20210126072704-00391.warc.gz | CC-MAIN-2021-04 | 384 | 6 |
http://forum.behringer.com/list.php?author/8297-JoelMac | code | Thank you, Paul. That's what I'm really wanting to know. While I don't really understand why the incoming engineers want this, I would like to know
In this case, I'm not sure if we were ever told whether there is a hardware limitation in this regard. Robert, do you remember whether that was answered during
Never got to get that far in my discussions with the engineers. They simply stated that the console would not work for them as a monitor console because
I think that we might be talking about two different things.
Could I ask you tell me some quick practical real life setup with maybe four
Robert, that won't do it! Unless I'm misunderstanding, you cannot split the Mixbus pair, which is what the engineers have asked for. These are not novice
| s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119647865.10/warc/CC-MAIN-20141024030047-00092-ip-10-16-133-185.ec2.internal.warc.gz | CC-MAIN-2014-42 | 804 | 7
https://github.com/TheCraigHewitt/Seriously-Simple-Stats/pull/2 | code | Corrected stripos() argument order #2
Thanks for this - silly mistake with the argument order there on my part.
The improvements for the iOS Podcasts app detection there are solid - that should help a lot and fix one of the main issues with this plugin that I just haven't had the time to investigate properly.
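(Editor's note, for reference: PHP's stripos() takes the haystack first and the needle second, so a user-agent check of the kind this plugin performs should read roughly as below. The variable names are illustrative, not the plugin's actual code.)
    // Correct argument order: stripos( $haystack, $needle )
    $user_agent = isset( $_SERVER['HTTP_USER_AGENT'] ) ? $_SERVER['HTTP_USER_AGENT'] : '';
    if ( stripos( $user_agent, 'Podcasts' ) !== false ) {
        // Treat this request as coming from the iOS Podcasts app.
    }
    // The reversed call, stripos( 'Podcasts', $user_agent ), searches the short string
    // for the long one and quietly returns false, which is the kind of mistake this PR corrects.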
I'll merge this in now and push it out in the next release - thanks! | s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590199.42/warc/CC-MAIN-20180718135047-20180718155047-00511.warc.gz | CC-MAIN-2018-30 | 537 | 6 |
https://writemyessay.one/programming-assembly-arduino-program-report/ | code | 1) Write an assembly/Arduino program
A smart city is a technologically modern urban area that uses different types of electronic methods, voice-activation methods and sensors to collect specific data. Information gained from that data is used to manage assets, resources and services efficiently. In 2021, Oman's Muscat Municipality signed an agreement with Signify International to implement a smart street lighting project in the Omani capital. Muscat Municipality is using more than 20,000 connected street lights, which provide up to 85 percent energy savings and reduce maintenance costs.
Signify International is using different types of sensors to control the smart street lights. Smart streetlights may require more technology, such as humidity sensors, proximity sensors, sound sensors and IR sensors for ON/OFF purposes. Using assembly/Arduino language, and keeping finite state concepts in mind, you are required to simulate the functionality of the Signify International unit that controls the street lights.
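As a rough illustration of the finite-state idea (the single IR sensor, pin numbers and timing below are assumptions made for the sketch, not requirements from the brief), a two-state Arduino controller might look like this:
    const int irSensorPin = 2;   // IR/PIR sensor used for ON/OFF (assumed digital output)
    const int lampPin = 9;       // street lamp driver (assumed)

    enum LampState { LAMP_OFF, LAMP_ON };
    LampState state = LAMP_OFF;

    void setup() {
      pinMode(irSensorPin, INPUT);
      pinMode(lampPin, OUTPUT);
    }

    void loop() {
      bool motion = (digitalRead(irSensorPin) == HIGH);
      switch (state) {
        case LAMP_OFF:
          if (motion) { digitalWrite(lampPin, HIGH); state = LAMP_ON; }   // OFF -> ON
          break;
        case LAMP_ON:
          if (!motion) { digitalWrite(lampPin, LOW); state = LAMP_OFF; }  // ON -> OFF
          break;
      }
      delay(50);
    }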
2) A written report
Write a report that covers the following points based on task 1,
Objectives of Task1: Assembly /Arduino program | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711074.68/warc/CC-MAIN-20221206060908-20221206090908-00175.warc.gz | CC-MAIN-2022-49 | 1,156 | 6 |
https://github.com/jpellman | code | An implementation of Statistical Parametric Mapping written in R, specifically focusing on fMRI.
An R package that reads in Praat TextGrid annotations so that they can be manipulated as R objects.
Forked from jamespooley/brain-age-prediction
Notes on developmental trajectories, predicting brain maturation, and brain development within the context of resting-state fMRI.
OpenSCAD and STL files for 3d printing.
Attempts at learning new things- for fun, practice and general edification.
Various brief notes related to information technology and neuroscience. Subject to revision as my own knowledge of these fields changes. Not updated for a long while- mostly kept around for nostalg…
107 contributions in the last year
Created a pull request in zulip/python-zulip-api that received 8 comments
JSON post requests aren't supposed to be form-encoded (see http://docs.python-requests.org/en/master/user/quickstart/#more-complicated-post-requests)
Created an issue in artefactual/archivematica that received 7 comments
We noticed while running Archivematica 1.7.0 under Ubuntu 16.04 with a 351 GB transfer that instead of showing the transfer in the dashboard after … | s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221209755.32/warc/CC-MAIN-20180815004637-20180815024637-00164.warc.gz | CC-MAIN-2018-34 | 1,345 | 14 |
https://support.manufacturingtransformation.io/hc/en-us/articles/9556326310301-Uninstalling-EZ-Software-slow-cabinet-fix | code | If the cabinet is extremely slow and unresponsive, it's likely that the EZ config software is installed, and uninstalling it will fix it. It's no longer needed since Cribwise changed the way they interact with the scanner.
To quickly see if it's installed, exit SFI by pressing CTRL+ALT+SHIFT+Q at the same time, and look for this icon on the desktop:
If it's present, move on to the next step by shutting down SFI properly: locate this icon, right-click on it, and select "EXIT". This will shut down all SFI services so SFI doesn't come back up again while we uninstall.
Next, go to the start menu and type "Add" in the search box to pull up "Add or remove programs" and select it.
What you will get then is the below window, where you select to uninstall both EZConfig-Agent and EZConfig-Scanning v4.
Once that is done, reboot the cabinet and it should be fine. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818468.34/warc/CC-MAIN-20240423064231-20240423094231-00306.warc.gz | CC-MAIN-2024-18 | 870 | 6 |
http://www.armdesigner.com/article_137.html | code | Linaro announces Android Open Source Project port for ARMv8-A Architecture is ready and running on a 64-bit mulAuthor: ARM Date: 2014-07-04
[Cambridge, UK; 2 July 2014] Following the recent announcement of the Android™ L Developer Preview, Linaro, the collaborative engineering organization developing open source software for the ARM® architecture, today announced that a port of the Android Open Source Project (AOSP) to the ARMv8-A architecture has been made available as part of the Linaro 14.06 release. This port has been tested on an ARMv8-A 64-bit hardware development platform, code-named "Juno", available from ARM for lead and ecosystem partners.
The Linaro ARMv8-A reference software stack combined with the ARM Development Platform (ADP) provides the ARM ecosystem with a foundation to accelerate Android availability on 64-bit silicon. The availability of this port is the culmination of a broad architecture enablement program carried out by Linaro, ARM and the ARM partnership. ARM partners now have access to a 64-bit AOSP file system, together with a broad range of supporting material including the ARMv8 Fast Models, open source toolchain from Linaro and supporting documentation.
"The ARM ecosystem is rapidly preparing for the benefits a 64-bit ARM architecture will bring to devices starting this year," said James McNiven, general manager of systems and software at ARM. "Our collaboration with Linaro will enable our partners to create 64-bit devices that will drive the best next-generation mobile experience on Android operating systems, while also providing full compatibility with today's 32-bit mobile ecosystem that is optimized on ARM-v7A."
The Linaro 14.06 release includes a 64-bit primary/32-bit secondary binary image and source code based on the Linaro Stable Kernel (LSK) 3.10 for Android, compiled with GCC 4.9 and tested on both the ARMv8-A 64-bit hardware platform and ARMv8-A Fast Models. The AOSP is based on the Open Master snapshot downloaded on June 1st with HDMI drivers loaded as modules. The release is built with the Android runtime (ART) compiler as the default virtual machine, supporting a 64-bit user space on hardware and virtual platforms. Peripheral and advanced power management support plus several accelerations will not be available in this release, but will follow in future releases on a monthly cadence.
"We have been using ARM Fast Models to develop ports for AOSP for a long time and it is testament to the quality of our collaborative engineering that we have delivered them running on the ARMv8-A hardware platform so quickly," said George Grey, Linaro CEO. "We look forward to working closely with our members to enable them to deliver next generation Android solutions rapidly to the market."
The ARMv8-A hardware development platform includes an SoC with a quad-core ARM Cortex®-A53 CPU and dual-core ARM Cortex-A57 CPU in an ARM big.LITTLE™ processing configuration with a quad-core ARM Mali™-T624 GPU linked via ARM CoreLink™ system IP and implemented using ARM Artisan® physical IP. The development platform with its ARMv8-A software stack provides ARM software and silicon partners with a common foundation to accelerate their ARMv8-A software development. Further information about this platform is available from the ARM website here: www.arm.com/juno. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817073.16/warc/CC-MAIN-20240416062523-20240416092523-00090.warc.gz | CC-MAIN-2024-18 | 3,340 | 7 |
http://www.vlijmweb.nl/qle/ | code | QLE Unity Quicklist Editor
QLE Unity Quicklist Editor is a solution to easily and quickly edit (Ubuntu) Unity Quicklists (static section).
Although the editor is mainly developed and tested on 12.04, it should work on earlier versions of Unity as well.
by means of ppa:
run in a terminal:
sudo add-apt-repository ppa:vlijm/qle
sudo apt-get update
sudo apt-get install qle
Debian installer download:
Editing options include:
- adding commands
- adding dividers
- adding LibreOffice applications (from a list, to any quicklist)
- adding shortcuts to applications (creating grouped quicklists from an application list that is presented)
- adding directories
- adding virtual boxes (from a list that is presented)
- adding url's
- renaming existing entries
- changing the command of an existing entry
- moving entries up or down in the quicklist
- importing entries from textfiles or other quicklists* (experimental)
- removing entries
- resetting the quicklist to as it was before QLE editor was installed
- creating new quicklists
- removing standalone quicklists
Additionally, the editor recognizes and corrects some properties of quicklists that might cause problems (like no ";" at the end of the entry line). The editor recognizes and edits both "old" (<12.04) and "new" style desktop files.
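For reference, a static quicklist entry in a "new"-style desktop file uses the freedesktop Actions syntax; the example below only illustrates the format (it is not something QLE writes verbatim), and the trailing ";" on the Actions line is exactly the kind of detail the editor checks and corrects:
    [Desktop Entry]
    Name=My Application
    Exec=myapp %U
    Type=Application
    Actions=NewWindow;

    [Desktop Action NewWindow]
    Name=Open a New Window
    Exec=myapp --new-window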
"standalone" desktop files (made by the user, not representing an application) are recognized and listed, and can be edited like any quicklist.
When an icon (quicklist) is edited for the first time, a re-login might be necessary to see the effect. Afterwards, changes will take effect immediately.
The editor's main window
The Unity Quicklist Editor comes with a database of commonly used applications. This database can easily be supplemented by the user with installed applications. The list of applications shows only applications from this database that are actually installed on the system (no matter if the application's icon is locked in the launcher or not). Additionally, standalone quicklists (if any) in the directory ~/.local/share/applications are also listed and can be edited likewise.
When an application is selected from the list, its current entries will show in the listbox at the right. These entries can be edited (name or command) with the "pencil" button, or moved up or down with the arrow keys. New entries can be created by pushing the "+" button, which will show a number of options.
The editor's main window
type one or two of the first characters of the application to make a selection
Search an application by typing a few characters
Add your own applications to the list
To edit an application's quicklist that is not in the database, add the application to the list with the Manager - menu (Manager > Add your application).
Add applications to the database
Remove standalone quicklists (possible "orphans") from the applications list
Adding new entries
Add a libreOffice module to the quicklist
Editing existing entries
Edit a command of an existing entry
From the "Manager" menu, choose "create new", fill in at least the necessary fields (*). After you press "ok", the new quicklist will appear in the launcher, ready to be edited. Use the browser (click on the folder icon) to choose (browse to-) the icon you want to use.
The browser window
Create a standalone quicklist
For various tasks, like creating directory shortcuts or choosing files, the file / directory browser window is available, through the folder icon in the dialoque windows. The browser has various options:
Resetting a quicklist
When you use the editor for the first time, it creates a backup of the current quicklist. If you manually edited the quicklist before, the edited quicklist will be backed up. Use the reset option to restore the quicklist as it was before you used the quicklist editor for the first time. | s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864795.68/warc/CC-MAIN-20180622201448-20180622221448-00398.warc.gz | CC-MAIN-2018-26 | 3,820 | 48 |
https://forums.meteor.com/t/any-recommendations-for-a-spinner-to-use-with-react-apps/22510 | code | - Would prefer to not use the sacha:spin package since that would require wrapping in Blaze.
- Tried using this package that wraps spin.js, however it uses the Meteor react package which I do not want to do.
- Tried the react-spinner npm package but it does not seem to provide a basic spinner out of the box.
Obviously, I can just roll my own, but wanted to first see what others are doing. Any recommendations would be appreciated! | s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828697.80/warc/CC-MAIN-20181217161704-20181217183704-00171.warc.gz | CC-MAIN-2018-51 | 434 | 4 |
https://askubuntu.com/questions/16672/where-can-i-find-ubuntu-10-04-1-netbook-i386-iso | code | In the various download repositories, there are 10.04.1 ISO images of Ubuntu desktop, alternate and server editions, but I can only find the original ubuntu-10.04-netbook-i386.iso, not an updated ubuntu-10.04.1-netbook-i386.iso. Is the latter ISO available somewhere? If no, why doesn't the Ubuntu maintainers create one for this version.
Unfortunately, there is no netbook iso for 10.04.1. For some reason, unlike the desktop and server editions, Ubuntu Netbook 10.04 is not considered a Long Term Support (LTS) release. Though, all hope is not lost. If you install from the original ubuntu-10.04-netbook-i386.iso and apply all the updates, you will effectively have Ubuntu Netbook 10.04.1. If you look at 10.04.1's changelog, it includes many netbook fixes even though it is not a LTS.
I found a mirror that has a copy of it here:
My guess would be there is no 10.04.1 since an upgrade from CD or via Internet is available for the 10.10
Ubuntu Netbook 10.04 is not an LTS version (LTS is only "classic" Ubuntu and Ubuntu server) -- so there are no maintaince updates (10.04.x) for netbook version.
(The same as with Kubuntu 8.04.)
The official site will give you the newest, 10.10, or you can go here for 10.04.1 - http://releases.ubuntu.com/lucid/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710933.89/warc/CC-MAIN-20221203143925-20221203173925-00360.warc.gz | CC-MAIN-2022-49 | 1,250 | 7 |
http://www.fevaworks.com/portal/site/course.asp?code=MOD10962&categoryid=13 | code | straight from the source.
Now you can build Microsoft technical expertise while balancing the demands of your schedule. FevaWorks is now offering Microsoft Official Courses On-Demand (MOC On-Demand).
MOC On-Demand combines high-quality video, reading, live hands-on labs and knowledge checks in a self-paced format to help you build skills on Microsoft technologies as your schedule allows—all at once or five minutes at a time. The modular, self-directed course structure adapts to your learning needs and learning style.
Brought to you by the people who write the software, and available through certified Microsoft Learning Partners like FevaWorks.
About this course
Learn how to automate and streamline day to day management and administration tasks and functions in your Windows Server Infrastructure.
This three-day course is a follow-on course from the 10961B: Automating Administration with Windows PowerShell course. It is built on Windows Server 2012 R2 and Windows 8.1, and while it is specifically focused on Windows PowerShell v4.0, it is also relevant in v2.0 and v3.0 Windows PowerShell environments.
Expand and build upon the knowledge already acquired in course 10961B and focus on building more scalable and usable Windows PowerShell scripts for use in your organization by building your own Windows PowerShell tools. Learn about areas such as the creation of advanced functions, script modules, advanced parameters attributes and controller scripts. Also learn how to make your scripts more robust by learning about handling script errors and the analysis and debugging Windows PowerShell scripts. The course will also cover the use of Windows PowerShell cmdlets with .NET Framework as well as teaching how to configure your Windows Servers using Desired State Configuration and providing an understanding of Windows PowerShell workflow.
The detailed hands on labs and in depth content and learning will help remove manual tasks that you may currently have to perform as an Administrator, allowing you to make your own Windows PowerShell tools for automated, repeated, accurate management and provisioning of your Windows Server infrastructure.
This course is intended for IT Professionals already experienced in general Windows Server and Windows Client administration or already experienced in administering and supporting Application servers and services including applications such as Exchange, SharePoint, and SQL. System, Infrastructure and Application Administrators working in a Windows or Windows hybrid environment will all find this course relevant to their day to day jobs and future career and skills development.
The course is also intended for IT Professionals who want to build upon existing Windows PowerShell knowledge and skill to learn how to build their own tools for broader general use in their organization, using any Microsoft or independent software vendor (ISV) product that supports Windows PowerShell manageability.
At course completion
After completing this course, students will be able to:
• Create Advanced Functions
• Use Cmdlets and Microsoft .NET Framework in Windows PowerShell
• Write Controller Scripts
• Handle Script Errors
• Use XML Data Files
• Manage Server Configurations by Using Desired State Configuration
• Analyze and Debugging Scripts
• Understand Windows PowerShell Workflow
FevaWorks 15th Anniversary Special: get a Lenovo 7-inch tablet (TAB 3 7), worth $1,599, for a promotional price of $299
7-inch IPS multi-touch display
Android 5.0 operating system
Supports GPS / Bluetooth 4.0
16 GB built-in storage
Supports 64 GB Micro SD cards
Front-facing Dolby dual speakers
Alternatively, you can choose to upgrade to a SAMSUNG Galaxy S8 / S8+ / NOTE 8 smartphone at a promotional price.
For more information about the above, please call 3106 8211.
The center welcomes companies, organizations and groups to enroll in courses as a group, or to arrange corporate events and staff training. For further details, please call the hotline at 3748 9826.
The center has been invited by major international organizations (Adobe, Autodesk, Microsoft, Unity, H3C, Lenovo, Corel, Prometric, VUE, Certiport, Wacom and others) to serve as a designated authorized education center for Hong Kong, and has been recognized by the Hong Kong Council of Social Service as a "Caring Company" for 10 consecutive years in recognition of Feva Works' contribution to society.
In addition, Feva Works has been named Best Microsoft Certified Partner for Learning Solutions of the Year by Microsoft for 10 consecutive years and has been selected by Adobe as a designated authorized training center for Adobe CS4, CS5, CS6 & Creative Cloud. Most recently, Feva Works has also been awarded Best IT Training Center by e-zone computer magazine for 10 consecutive years.
Module 1: Creating Advanced Functions
In this module students will learn how to parameterize a command into an advanced function. It is designed to teach several key principles in a single logical sequence, by using frequent hands-on exercises to reinforce new skills.
• Converting a Command into an Advanced Function
• Creating a Script Module
• Defining Parameter Attributes and Input Validation
• Writing Functions that use Multiple Objects
• Writing Functions that Accept Pipeline Input
• Producing Complex Function Output
• Documenting Functions by using Content-Based Help
• Supporting -Whatif and -Confirm
Lab : Converting a Command into an Advanced Function
Lab : Creating a Script Module
Lab : Defining Parameter Attributes and Input Validation
Lab : Writing Functions that use Multiple Objects
Lab : Writing Functions that Accept Pipeline Input
Lab : Producing Complex Function Output
Lab : Documenting Functions by using Content-Based Help
Lab : Supporting -Whatif and -Confirm
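By way of illustration (an editor's sketch, not one of the course labs), a compact advanced function touching several of the ideas above, including CmdletBinding, parameter validation, pipeline input and -WhatIf support, might look like this:
    function Restart-AppService {
        [CmdletBinding(SupportsShouldProcess=$true)]
        param (
            [Parameter(Mandatory=$true, ValueFromPipeline=$true)]
            [ValidateNotNullOrEmpty()]
            [string[]]$ComputerName
        )
        process {
            foreach ($computer in $ComputerName) {
                if ($PSCmdlet.ShouldProcess($computer, 'Restart the print spooler service')) {
                    # Illustrative action; any remote management task could go here.
                    Invoke-Command -ComputerName $computer -ScriptBlock {
                        Restart-Service -Name 'Spooler'
                    }
                }
            }
        }
    }

    # Usage: 'SERVER1','SERVER2' | Restart-AppService -WhatIf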
Module 2: Using Cmdlets and Microsoft .NET Framework in Windows PowerShell
Windows PowerShell provides commands that accomplish many of the tasks that you will need in a production environment. Sometimes, a command is not available but the .NET Framework provides an alternate means of accomplishing a task. Because Windows PowerShell is built on the .NET Framework, it is able to access those alternate means. In this module, you will learn how to discover and run Windows PowerShell commands, and how to use .NET Framework components from inside Windows PowerShell. These two techniques will provide you with the most flexibility and capability for accomplishing tasks in a production environment.
• Running Windows PowerShell Commands
• Using Microsoft .NET Framework in Windows PowerShell
Lab : Using .NET Framework in Windows PowerShell
Module 3: Writing Controller Scripts
In this module, students will learn how to combine tools – advanced functions that perform a specific task – and a controller script that provides a user interface or automates a business process
• Understanding Controller Scripts
• Writing Controller Scripts that Show a User Interface
Lab : Writing Controller Scripts that Display a User Interface
Module 4: Handling Script Errors
In this module, students will learn how to perform basic error handling in scripts. The focus will be on how to add error handling to existing tools, primarily as a time-saving mechanism (instead of having students write new tools). A side benefit of this approach is that it will help build the skills that you must have to analyze and reuse existing code written by someone else.
• Understanding Error Handling
• Handling Errors in a Script
Lab : Handling Errors in a Script
Module 5: Using XML Data Files
In this module, students will learn how to read, manipulate, and write data in XML files. XML files provide a robust yet straightforward way to store both flat and hierarchical data. XML files are more flexible than CSV, more accessible for small amounts of data than SQL Server, and easier to code against than Excel automation.
• Reading, Manipulating and Writing Data in XML
Lab : Reading, Manipulating and Writing Data in XML
Module 6: Managing Server Configurations by Using Desired State Configuration
In this module, students will learn how to write Desired State Configuration (DSC) configuration files, deploy those files to servers, and monitor servers’ configurations.
• Understanding Desired State Configuration
• Creating and Deploying a DSC Configuration
Lab : Creating and Deploying a DSC Configuration
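A generic illustration of what a small DSC configuration looks like (an editor's sketch, not the lab solution):
    Configuration WebServerBaseline {
        param ([string[]]$NodeName = 'localhost')

        Import-DscResource -ModuleName PSDesiredStateConfiguration

        Node $NodeName {
            WindowsFeature IIS {
                Name   = 'Web-Server'
                Ensure = 'Present'
            }
            File DefaultPage {
                DestinationPath = 'C:\inetpub\wwwroot\index.htm'
                Contents        = '<h1>Configured by DSC</h1>'
                Ensure          = 'Present'
                DependsOn       = '[WindowsFeature]IIS'
            }
        }
    }

    # Compile to MOF files and push the configuration to the target node:
    WebServerBaseline -NodeName 'SERVER1'
    Start-DscConfiguration -Path .\WebServerBaseline -Wait -Verbose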
Module 7: Analyzing and Debugging Scripts
In this module, students will learn how to use native Windows PowerShell features to analyze and debug existing scripts. These skills are also useful when students have to debug their own scripts.
• Debugging in Windows PowerShell
• Analyzing and Debugging an Existing Script
Lab : Analyzing and Debugging an Existing Script
Module 8: Understanding Windows PowerShell Workflow
In this module, students will learn about the features of the Windows PowerShell Workflow technology.
• Understanding Windows PowerShell Workflow | s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806438.11/warc/CC-MAIN-20171121223707-20171122003707-00225.warc.gz | CC-MAIN-2017-47 | 8,774 | 84 |
http://www.pledgebank.com/contact?pledge_id=7109&comment_id=3004772 | code | You are reporting the following comment to the PledgeBank team:
Too bad the Xmarks team didn't send out email notifications about this pledge. I only found out about it by accident when I decided to visit the xmarks website, and I was a few hours late to pledge my own support. This would have been one of those rare cases when I wouldn't have objected against a piece of spam.
Hope you've reached your goal without me and will keep this wonderful service running!
Tim A, 8 years ago. | s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039745015.71/warc/CC-MAIN-20181119023120-20181119045120-00488.warc.gz | CC-MAIN-2018-47 | 483 | 3
https://forum.kirupa.com/t/animating-a-drawing-api-shape/206659 | code | Drawing API examples always seem to be either really simple or really complex. I’m interested in doing something fairly simple (I think). I’d like to draw an abstract curved shape with the Drawing API and animate the shape of the object to slowly change over time. Nothing really dramatic. Kind of like a lava lamp, I suppose. I understand this is done with movie clip control points but I haven’t seen any examples I can understand. I’m attaching a file showing an example of what I mean (done with manual tween in MX 2004). Thanks in advance for your help. | s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400203096.42/warc/CC-MAIN-20200922031902-20200922061902-00181.warc.gz | CC-MAIN-2020-40 | 566 | 1 |
http://codingplayground.blogspot.com/2009/02/cuda-massive-parallelism-on-my-notebook.html | code | CUDA is a parallel computing architecture developed by NVIDIA. NVIDIA GPUs have a parallel "many-core" architecture, each core capable of running thousands of threads simultaneously. I installed CUDA 2.1 on my Sony Vaio FZ21M notebook with a GeForce 8400M by using the modded drivers available at http://www.laptopvideo2go.com/
CUDA supports C, C++, Python, and Fortran. The paper Fast N-body Simulation with CUDA describes a CUDA parallel program which runs 50 times as fast as a highly tuned serial implementation of the N-body simulation. Wow, this is amazing. I still remember my parallel implementation on a Cray T3E, with 128 Alpha processors, more than 10 years ago.
Well, now I have thousands of threads running on a multi-core GPU under my fingers (plus the dual-core CPU). Cuda documentation is pretty extensive. I am right now studying the CuBlas libraries for linear algebra and starting to code.
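To get a feel for the programming model before CuBLAS, the canonical first kernel is a simple vector add; here is a minimal illustrative example (mine, not from the Dr. Dobb's article):
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per element
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);
        float *ha = new float[n], *hb = new float[n], *hc = new float[n];
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        float *da, *db, *dc;
        cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);   // thousands of threads in flight
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

        printf("c[0] = %f\n", hc[0]);                      // expect 3.0
        cudaFree(da); cudaFree(db); cudaFree(dc);
        delete[] ha; delete[] hb; delete[] hc;
        return 0;
    }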
Here you have an introduction to cuda by the way of Dr. Dobbs | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120844.10/warc/CC-MAIN-20170423031200-00033-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 978 | 4 |
http://zivios.org/index/whyzivios | code | What is Zivios?
Zivios aims to be a consolidated management portal for providing core infrastructure services using opensource technologies. The long term goals of Zivios are:
Single Sign-on and Certificate authority
Package and Patch Management
Core Infrastructure Services (NTP,DNS, etc)
With an infinitely extensible plugin architecture, Zivios extends all the aforementioned fundamentals to an arbitrary number of services. The ideology behind Zivios is that the datastore for services can be (and most likely will be) different. As such, we cannot depend on LDAP or Kerberos to fulfill our identity management needs. Zivios allows the use of plugins which maintain their own independent datastore following CRUD operations.
With an extensibile framework, Zivios addresses the needs of complex heterogeneous deployments by providing an open and scalable API for modular development and sanitized consolidation.
Opensource software has progressed to the point where it can (feature-wise) compete with many proprietary offerings. However, use of such technologies stay limited to large corporations who have experienced and highly skilled IT staff. Zivios consolidates and simplifies the use and management of complex technologies and builds on them to provide an integrated identity management platform. With an infinitely extensible plugin system, any application can be managed via Zivios.
We feel that managing a fully featured opensource network is currently out of reach of the average administrator. Integration and proper management of servers, services and identity requires intricate knowledge and, unfortunately, has a rather steep learning curve.
Highlighting Key Points of Zivios
Zivios allows the administrator to get up to speed with the opensource technologies quickly and without requiring in-depth knowledge. The core technologies come pre-integrated, so little time is wasted in the redundant task of setting them up correctly for each and every system. Core services are online right after installation. Management is done completely via the web panel, even if some tasks require access to remote machines (Zivios acts on your behalf) using server-side agents.
Zivios ensures correctness in day-to-day repetitive tasks such as identity management. In loosely coupled systems, Zivios also ensures compliance (imagine forgetting to delete an ex-employee's ERP account, where your ERP system is online and not integrated directly with deployed directory services).
Zivios reduces time to task completion with cascading data changes. Imagine having to reset the password for all users in your finance department: it would be quite laborious, especially if users in the department have a diverse password store.
Zivios allows delegated administration to be implemented in a simple manner. Since everything is part of the tree, you can delegate access of any tree object to any user or group. As training and service-level knowledge is not required, delegated administration can actually be enforced. In opensource networks, it is common for few (or even singular) resources to have the knowledge required to administer a particular service or server. Delegated administration is impossible in this scenario, as all requests will rebound automatically to that particular administrator, making him critical for that task.
Zivios enforces customizable organizational workflow. Workflow will be used (in future versions) to allow transactions to be deferred until approved.
Zivios allows administrators to focus on actual problems and spend time on improving the system rather than maintaining it on a day-to-day basis. | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704134547/warc/CC-MAIN-20130516113534-00031-ip-10-60-113-184.ec2.internal.warc.gz | CC-MAIN-2013-20 | 3,626 | 31
http://www.21alive.com/news/state/208347991.html | code | Now that's a big snake. ...
The Florida Fish and Wildlife Conservation Commission said a Florida man caught and killed an 18-foot, 8-inch Burmese python earlier this month. The previous record snake had measured 17-feet, 7 inches.
According to a news release from the Florida Fish and Wildlife Conservation Commission:
On May 11, Jason Leon was riding late at night in a rural area of southeast Miami-Dade County when he and his passenger spotted the python. About 3 feet of the snake was sticking out of the roadside brush. Leon stopped his car, grabbed the snake behind its head and started dragging it out of the brush. When the snake began to wrap itself around his leg, he called for assistance from others and then used a knife to kill the snake. Leon once owned Burmese pythons as pets and has experience handling this nonvenomous constrictor species.
“Jason Leon’s nighttime sighting and capture of a Burmese python of more than 18 feet in length is a notable accomplishment that set a Florida record. The FWC is grateful to him both for safely removing such a large Burmese python and for reporting its capture,” said Kristen Sommers, Exotic Species Coordination Section Leader for the Florida Fish and Wildlife Conservation Commission.
Earlier this year 68 Burmese pythons were caught or killed during the 2013 Python Challenge, a Florida state-sponsored hunt that resulted in approximately 1,600 people from 38 states and Canada taking part.
According to the National Park Service, more than 1,800 Burmese pythons have been removed from Everglades National Park since 2002, a number that likely represents just a fraction of the total population.
Related external content
Burmese python page at Everglades National Park
| s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720380.80/warc/CC-MAIN-20161020183840-00437-ip-10-171-6-4.ec2.internal.warc.gz | CC-MAIN-2016-44 | 1,969 | 11
https://forum.terasology.org/threads/re-vamp-ing-your-world.1820/ | code | Bug Hunter Extraordinaire
- Name: Aresh "vampcat" Mishra
- Social: Github
- From: India
- Skills / Tools: Everything that's needed, and more. I've taken every language, tool and game engine I could lay my hands on out for a spin (everything from Java to Python, PHP to JS, SQL to FoxPro, Unity to Ogre), but a C++ IDE running on Windows and compiling an OpenGL program is what I love the most.
- Found via: GSoC
- Interests: Mostly game development, specifically renderings and simulations on GPU (RayTracing, fluid simulations, or anything remotely related) that make the world much more beautiful just by existing! (Terasology shaders, here I come!)
- Extra: Games have always fascinated me, for they are the respite from reality that we all sorely need. That, coupled with the fact that I love creating. Making an entire world out of nothing, a world where I can make fishes fly and flowers grow in mid-air, and make them look amazing while they do all this is what makes me tick. | s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814290.11/warc/CC-MAIN-20180222200259-20180222220259-00181.warc.gz | CC-MAIN-2018-09 | 982 | 8 |
http://richardfranklincompany.com/869-ph74472-zolpidem-hexal-rezeptfrei.html | code | It may also be provided as an oral liquid. In the populated areas, many dogs, cows, and monkeys wander zolpidem tartrato dosis as wild or semiwild scavengers. Mira yo personalmente no estoy a favor del aborto excepto en algunos casos, y creo que en el caso de tu amiga hay opciones zolpidem dose maximum para evitar el aborto. Pharmacists say that this drug is often purchased together zolpidem teva bula with complexes of beneficial bacteria. Biasanya obat antibakteri tidak boleh diberikan kepada is zolpidem a benzo drug hewan dengan intoleransi individu terhadap komponen, di hadapan penyakit ginjal, kehamilan. Buy an what is zolpidem the generic for established online store on exchange marketplace. As we know, are a type of financing with which high amounts of money can be granted zolpidem sleep architecture to the borrower. Propecia hiv comprar levitra cialis nolvadex zolpidem propranolol interaction for sale uk acheter cialis lilly france! You zolpidem medicine in india have to be current with what your competitors are doing. Nestled in a pine forest near a nature reserve and on islands off zolpidem hexal filmtabletten poole harbour, the oilfield owned by french company perenco has been quietly producing thousands of barrels a day since the late 1970s. What you do when doctors disagree is only something you can decide.
Buy cheapest cialis buy generic cialis online cialis online canadian pharmacy cialis. Equity markets have been soaring this year as most commodity prices generally moved lower! This virus is not normal for it to claim lives in matter of momments. Som beskrevet ovenfor er enhver score zolpidem tartrate get you high vel farmakologisk som nonfarmakologisk. Bagaimana untuk melakukannya, baca cadangan ahli parasit dr? Levitra buy philippines cialis 20mg price can you zolpidem side effects long term use buy cialis online no prescription. This can be misleading when it comes zolpidem eg compresse time to purchase. Mccoll k, murray l, omar e-e, dickson a, lek zolpidem tartrate nujumi ae, wirz a, kelman a, penny c, jones rk, hilditch t. Ze krijgen koorts, hoofdpijn, koude rillingen, spierpijn comprar viagra barato Sancti Spíritus en moeten hoesten. They have always had cable and don in drug stores. Although internet websites zolpidem lowest dosage we backlink to beneath are considerably not associated to ours, we really feel they may be essentially really worth a go by way of, so have a look. With fever , blood cultures and po antibiotics bactrim, clinda, doxy for 48 hrs.
I discovered your blog via google at the same time as looking for a comparable subject, your site zolpidem receita c1 ou b1 came up. Each tadalista tablet contains 20 mg, 10 mg or 5 mg zolpidem lyrica interaction of tadalafil. Do not use this medication under dressings zolpidem hexal rezeptfrei that do not breathe. I have been exploring for a little bit for any high quality articles or weblog posts on this sort of house. But, yes, ovasitol could be really helpful zolpidem causing hallucinations for you? Boston scientifics stock price has more zolpidem zopiclone than doubled this year, after about four years in which it had been stuck in the single-digit range. Fever goes to 103 and loss of apitite when fever goes higher , lowers with panadol only but after few hours same condition? About a year how zolpidem hexal rezeptfrei to take caverta 50 some studies have shown that high intensity exercise is tied to appetite suppression and changes in hormones that regulate hunger and fullness, and the new research found different effects on those hormones among the various exercise regimens. In these patients prolongation of the dosage intervals should be carefully considered according to the patient's requirements. I am delighted i found this zolpidem drug forum during my find something regarding this. You might lose zolpidem dose recreative your appetite for various reasons when you are having cancer treatment.
Der gebrauch von chloroquin kann folgende nebenwirkungen hervorrufen. Make zolpidem tartrate how does it work sure you know when you should come back for a checkup. He enjoys the zolpidem mylan alkohol spotlight too much. Wonderful story, reckoned we could combine a few unrelated data, nevertheless definitely worth taking a look, whoa did 1 study zolpidem tartrate other names about mid east has got much more problerms also. Macrobid works by killing the bacteria zolpidem 10 mg 1a pharma kaufen that cause the infection or by preventing the bacteria from growing. Uniformed bena teva zolpidem shortage can hillward man beneathe manfulness. Id like to pay this cheque in, please erectile dysfunction medicines in india izle after the pests were taken care of, zenjuro and his wife started cleaning the freezers inside their shop. Therefore zolpidem hexal rezeptfrei patients taking this drug are advised not to consume alcoholic beverages and to exercise caution when taken with benzodiazepines or cns depressants. Accutane can cause an health-care in day users and i zolpidem ratiopharm 5 mg would too compound the kamagra soft tablets treatment with post? Table s3 zolpidem dosis recomendada shows gene deletion constructs and primers. To ensure safety, blue-emu should zolpidem daytime anxiety be used as directed on the packaging. Cbd treats for dogs where to buy hemp oil vs cbd oil. You brush a afield mean retrieve of squandering every so time that your children are all unrealistic on their own. Pertanyaan anda akan dijawab dalam waktu 24 jam sebagai gantinya.
I have thick, color-treated, past-waist-length hair. Cialis cialis dosage 40 mg dangerous cialis without a doctor prescription. The gps in and a small swab which being infected with zolpidem for coma chlamydia patients onto the road men and women. Through feeds you will be able zolpidem tartrate formula to deliver information about your goods such as descriptions, pricing and images. But for ms hanley, the dangers zolpidem hexal rezeptfrei must be taken more seriously by medical professionals. Another thing id prefer to say is the fact getting hold of duplicates of your credit score in order zolpidem false positive drug test to check out accuracy of each and every detail is one first measures you have to perform in credit repair? It is often said that criminal records check nsw and jackson how does zolpidem cr work county jail. People do it all the time but it doesnt mean it is legal? Herpoveda has been formulated zolpidem tartrate images to help people who have suffered first at the hand of herpes and then with drugs like antivirals. Human subjects and patient specific data were not included in this study! I visited several blogs except zolpidem actavis fass the audio http://outofsitetechnology.com/68080-ph86348-benzaclin-gel-buy-online.html feature for audio songs present at this site is actually marvelous. Patients should be directed to discontinue use and contact a physician if any lorazepam or zolpidem for sleep signs of an allergic reaction occur. Marley generic viagra viagra coupon cheap generic cialis tadalafil zolpidem hexal rezeptfrei cialis online. Always inform your doctor fully about any medication you are currently taking, including those you have recently stopped using, before embarking on a course of azithromycin treatment.
I saw a lot of website but i believe this one holds something special in it. Excess insurance will cover the cost of your excess if you have to make a zolpidem on sleep claim, up to a pre-agreed limit! Desire to spare some amount zolpidem mechanism of action of money in process. By looking at a sample research paper on our website, you will zolpidem long term memory loss see that although the prices on our website are not high, the quality has not been compromised. There are changes underway on federal and state levels that will ultimately clarify the laws and zolpidem toxic dose regulations related to cbd-based products and sales. De beste cryptomunten van 2019 kopen met bancontact, ideal, sofort, sepa banktransfer of via je creditcard, mastercard of visa. The signals that bounce back are analyzed for zolpidem hexal rezeptfrei patterns that indicate a persons breathing or heartbeat. Becky kendall, background, a recruited baker that dunckel has brought in to help fill the orders. Can i take doxycycline hyclate 100mg for an abscessed tooth. When deciding on a budget keep in mind that investing in professional landscaping will greatly add to the value of your home. And does zolpidem cause joint pain will it be used in south africa to combat the spread of the virus? Andapun jasa pengiriman produk yang kami gunakan adalah sebagai berikut. If you stop taking fluoxetine capsules suddenly or miss a dose, you may zolpidem hexal rezeptfrei get withdrawal effects. If you have a query that you need to clear out, do ask the pharmacist about it. Reiben sie nicht den bimsstein zolpidem generico peru - wischen sie nur die umgebende gesunde haut ab, es entsteht eine wunde.
Do my math solve my math i need help zolpidem to zopiclone with math homework. Im confident youll achieve so numerous individuals with what youve got to say? I buy the best coconut oil i can find. Hello zolpidem was ist das i am 48 yr old female recently diagnosed with alopecia arrays? Diazepam can absorb into plastic, and, therefore, zolpidem interactions with zoloft diazepam solution is not stored in plastic bottles or syringes, etc. The study was approved by the ucsf committee on human subject research, approval number 15-16384. Enterococcus species look similar to steptococcus bacteria and both are found naturally in the human intestinal tract. Wenn der wirkstoff an das ribosom bindet, wird die peptid-translokase gehemmt und das bakterienwachstum verlangsamt. Can zolpidem and quetiapine together i use cialis with antibiotics. Es ilustrados por muchos el pensiones zolpidem gocce dosaggio de la abortar de la bandejas. | s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400249545.55/warc/CC-MAIN-20200926231818-20200927021818-00568.warc.gz | CC-MAIN-2020-40 | 9,913 | 7 |
https://freshwebonline.co.uk/wordpress-websites-rock/ | code | Firstly, WordPress was developed as a blogging platform, but over the last few years it has changed significantly. There is a two-way split: one is the online hosted and managed public blogging website, the other is the downloadable self-managed and hosted open-source CMS. This has always been the case; the changes are to the self-managed system.
Nowadays most agencies are using WordPress as their CMS of choice to build websites. It is extremely easy to use, versatile and feature-rich. You can build a full-featured website with a blog/news section and do all sorts of cool stuff: pull your Twitter, Facebook and many other social media feeds into it, pull recent blog items onto your home page, add any of thousands of widgets or plugins for free, and a ton of other brilliant stuff.
WordPress is set up to be Google friendly with permalinks, canonical URLs, a blog roll and many more plus points.
All in all, a very good CMS that's easy to use and very powerful. Try it yourself! | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100534.18/warc/CC-MAIN-20231204182901-20231204212901-00033.warc.gz | CC-MAIN-2023-50 | 976 | 4
http://www.techist.com/forums/f78/notebook-ram-lost-39192/ | code | Originally posted by rama
hey there guys
i have a sony vaio. initially got it with 64 mb ram, then later upgraded it to 256 (64 + 64).
now its years after i had bought it (5 years)
nowadays i see that the ram shown is only 64 in the system specs and the bios. i opened up the lappy to check if the ram got loose (unusual for a laptop), but found it all in good condition.
i have no idea how the ram got halved. with 64 mb of ram life is hell. using win ME, i have to wait for disk swaps even when switching between different internet explorer windows.
does anyone have any idea how this could have happened? the lappy was taken great care of; no shaking or rough handling.
why me ? why me?
128 mb ram total
first off, in the first part of your post you have a simple addition error:
64 + 64 does not equal 256, it equals 128
one of the ram sticks could have died, therefore reducing you to only 64. your laptop is 5 years old, so you got a good amount of life out of it. I recommend buying a new notebook. | s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720845.92/warc/CC-MAIN-20161020183840-00285-ip-10-171-6-4.ec2.internal.warc.gz | CC-MAIN-2016-44 | 999 | 12
https://rmm.datto.com/help/en/Content/1INTRODUCTION/Infrastructure/AgentEncryption.htm | code | Datto RMM takes a layered approach to security and part of this is Agent encryption. A unique encryption key is generated for every Datto RMM Agent installation to ensure that when an Agent is communicating with the platform, we know the traffic is coming from the device where the Agent was originally installed, and no impersonation is taking place.
Agent to platform authentication
The Datto RMM Agent and Datto RMM platform have a shared secret key, and the platform has a map of device identifiers linked to these keys. Linking a key to a device identifier allows the signing of data from the Agent and the encryption or refusal of communication by the platform.
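Conceptually, this is the same pattern as keyed message authentication: the Agent signs what it sends with its locally stored key, and the platform looks up the key mapped to that device identifier to verify the signature before accepting the traffic. Datto does not document the exact algorithm or wire format, so the Python sketch below is only an illustration of that idea; the function names and the key store are hypothetical.

```python
import hashlib
import hmac

# Platform-side map of device identifiers to shared secret keys
# (hypothetical; stands in for the real key store described above).
KEY_STORE = {"device-001": b"per-device-secret-generated-at-install"}

def sign_payload(device_id: str, payload: bytes, device_key: bytes) -> str:
    """Agent side: sign outgoing data with the locally stored shared key."""
    return hmac.new(device_key, device_id.encode() + payload, hashlib.sha256).hexdigest()

def verify_payload(device_id: str, payload: bytes, signature: str) -> bool:
    """Platform side: look up the key mapped to this device identifier and verify."""
    key = KEY_STORE.get(device_id)
    if key is None:
        return False  # no mapped encryption key for this device
    expected = hmac.new(key, device_id.encode() + payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# A payload signed with the wrong key fails verification, which is the
# situation that leads to the approval flow described below.
sig = sign_payload("device-001", b"audit-data", KEY_STORE["device-001"])
assert verify_payload("device-001", b"audit-data", sig)
assert not verify_payload("device-001", b"audit-data",
                          sign_payload("device-001", b"audit-data", b"wrong-key"))
```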
Encryption keys do not have to be approved in the following scenarios (a sketch of this auto-approval logic follows the list):
- Newly generated and assigned encryption keys are automatically approved. This will include the following devices:
- Existing devices with no mapped encryption key
- Existing devices with a previously approved encryption key that had their Agent uninstalled and then re-installed
- New device records by newly installed Agents
- If a device submits an encryption key change request from an IP address that your Datto RMM account has previously approved at least four other change requests from in the last 60 days, the encryption key is automatically approved.
- If a device submits an encryption key change request from the same IP address as the last IP address you approved the request for, the encryption key is automatically approved.
- If a device submits an encryption key change request from the same IP address as the device's current external IP address, the encryption key is automatically approved.
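Taken together, the auto-approval rules above amount to a small decision: approve automatically when the request matches a trusted IP pattern, otherwise fall back to manual review. The sketch below is a hypothetical restatement of that logic, not Datto RMM's actual code; the request and history fields are assumptions.

```python
from datetime import timedelta

def auto_approve(request: dict, approved_history: list) -> bool:
    """Hypothetical restatement of the auto-approval rules listed above.

    `approved_history` is assumed to be the account's previously approved
    key-change requests, ordered oldest to newest, each carrying a
    `source_ip` and an `approved_at` timestamp.
    """
    ip = request["source_ip"]
    now = request["received_at"]

    # At least four other change requests approved from this IP in the last 60 days.
    recent = [a for a in approved_history
              if a["source_ip"] == ip and now - a["approved_at"] <= timedelta(days=60)]
    if len(recent) >= 4:
        return True

    # Same IP as the last request that was approved.
    if approved_history and approved_history[-1]["source_ip"] == ip:
        return True

    # Same IP as the device's current external IP address.
    if request.get("device_external_ip") == ip:
        return True

    # Otherwise the key change waits in the Agent Encryption Key Changed list.
    return False
```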
In situations where the platform has a stored device identifier with a pre-existing encryption key mapping and the Agent is attempting to communicate using a mismatched or missing encryption key, a manual approval is required. This request must be approved by an Administrator in the Agent Encryption Key Changed list on the Devices requiring approval page. Refer to Agent Encryption Key Changed.
It is recommended that all encryption key approvals are validated as an Agent should never change its key spontaneously. In the event of a mismatch, check the new device's audit records to see if they are as expected. If they are not or you are unsure, contact Datto RMM Support. Refer to Kaseya Helpdesk.
If a device is awaiting Agent encryption key change approval or is rejected, it will not receive any monitoring or Software Management data, and you will not be able to connect to it using Web Remote. Devices rejected from the Agent Encryption Key Changed list will be removed from the list and will be displayed in the list again an hour later; an Administrator can then approve or reject them. Alternatively, devices displayed in the list can be deleted from the account. Refer to Deleting a device.
Approved devices will receive monitoring and Software Management data an hour after approval. You will also be able to connect to them using Web Remote.
- It is necessary for devices to store the encryption key locally for the value to persist across reboots. This file is only accessible with system privileges, and it is essential for it to be present and correct for Agent to platform authentication to occur uninterrupted. If the file is removed, for example by reinstalling the operating system, a manual approval will need to occur.
- Devices should never share the same identifier but it may happen on rare occasions. For example, if an image-based operating system deployment method is used without first removing the device identifier from the system, the identifier would be cloned. Following the first device, every device communicating with the platform using the cloned identifier would generate an approval request. In this case, the cloned identifiers should be removed and allowed to automatically re-generate. To learn how to avoid duplicate device identifiers when cloning devices, refer to Cloning or imaging devices that have Datto RMM installed.
- If a Network Node is awaiting device approval due to an Agent encryption key change request, any associated network devices will appear offline until the Network Node is approved. | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100184.3/warc/CC-MAIN-20231130094531-20231130124531-00691.warc.gz | CC-MAIN-2023-50 | 4,204 | 18 |
http://sanguinaria-budding.blogspot.com/2012/01/january-garden-goals.html | code | Gardening Goals for January:
- Finish potting shed - we (ok, really it's Jeff and I'm the assistant) built the workbench last weekend and this weekend the goal is to build the shelves. Then the structures will be built and it'll just be prettifying the potting shed time.
- Set-up my seed starting station - this requires the shelves in the potting shed to be finished, but I plan to put up two 4 ft lights, heating pads, get a container to store potting soil, and get trays washed and ready.
- Start seeds! - yes, start seeds in January. I need to start broccoli, cauliflower, cabbage, leeks (first time growing!), head lettuce, onion seed, parsley, and Chinese cabbage.
- Build bed in front of potting shed - this is one of those 2012 projects. I doubt I will finish this month, but I hope to get a good start.
- Asparagus - decide what type and how many asparagus I'm going to plant and order. I also need to figure out where to plant them.
- Harvest and weigh winter crops - it should be easy to keep up with the winter garden's slow pace.
- Plan spring garden - I have the list of what I plan to grow and I've ordered seeds, however, I need to figure out where everything is going to be planted and how many seedlings I'm going to need.
- Make seed starting journal - trying to do better at keeping data so that I can learn what works and what doesn't.
- Clean and sharpen garden tools
Happy January gardening everyone! | s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267164750.95/warc/CC-MAIN-20180926101408-20180926121808-00489.warc.gz | CC-MAIN-2018-39 | 1,424 | 11 |
https://rdrr.io/cran/nixmass/man/swe.gu19.html | code | This model parameterizes bulk snow density with day-of-the-year as the only input, similar to swe.pi16, but adds a quadratic dependence. It was calibrated for the whole Italian Alps and for the subregions South-West, Central and South-East. By setting the coefficients of the empirical regression it can, however, also be used with results from other datasets.
A data.frame of daily observations with two columns named date and hs referring to day and snow depth at that day. The date column must be a character string with the format
Must be one of the Italian subalpine regions italy, southwest, central or southeast, as defined in the original reference (see details), or myregion, in which case the coefficients n0, n1 and n2 have to be set.
Similar to the model of Pistocchi (2016), swe.gu19 uses only the day-of-year (DOY) as the parameterization for bulk snow density and hence SWE. In contrast to the latter, a quadratic term for DOY is added here to reflect the non-linearity in snow bulk density variability. The dates in the input data.frame are converted to DOY as days elapsed since November 1st. Regression coefficients depend on the regions defined in Guyennon et al. (2019): italy for the whole Italian Alps, southwest for the South-western Italian Alps, central for the Central Italian Alps, or southeast for the South-eastern Italian Alps.
If region.gu19 is set to myregion, the coefficients n0, n1 and n2 must be set to values obtained from a regression between densities and day-of-year from another dataset. The regression has to have the form density ~ DOY + DOY^2, where DOY is the day-of-year as defined in the original reference.
Non computable values are returned as NA.
A vector with daily SWE values in mm.
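nixmass itself is an R package; as a language-neutral illustration of the parameterization described above, the sketch below computes a quadratic-in-DOY bulk density and multiplies it by snow depth. The coefficients are placeholders rather than the published regional fits from Guyennon et al. (2019), the day count from November 1st is simplified, and hs is assumed to be in metres with density in kg m^-3, so the result comes out in mm water equivalent.

```python
from datetime import date

# Placeholder coefficients for a user-supplied "myregion"; real values must
# come from a regression of the form density ~ DOY + DOY^2 (see Details above).
N0, N1, N2 = 200.0, 1.0, 0.01  # kg m^-3, kg m^-3 per day, kg m^-3 per day^2

def doy_since_nov1(d: date) -> int:
    """Days elapsed since November 1st of the current snow season (simplified)."""
    season_start = date(d.year if d.month >= 11 else d.year - 1, 11, 1)
    return (d - season_start).days

def swe_gu19_sketch(d: date, hs_m: float) -> float:
    """Approximate SWE in mm w.e. from snow depth hs_m (metres) on date d."""
    doy = doy_since_nov1(d)
    density = N0 + N1 * doy + N2 * doy ** 2  # bulk snow density, kg m^-3
    return hs_m * density                    # 1 kg m^-2 of water == 1 mm w.e.

print(round(swe_gu19_sketch(date(2021, 2, 1), 0.8), 1))  # e.g. 0.8 m of snow on 1 February
```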
Guyennon, N., Valt, M., Salerno, F., Petrangeli, A., Romano, E. (2019) 'Estimating the snow water equivalent from snow depth measurements in the Italian Alps', Cold Regions Science and Technology. Elsevier, 167 (August), p. 102859. doi: 10.1016/j.coldregions.2019.102859.
Pistocchi, A. (2016) 'Simple estimation of snow density in an Alpine region', Journal of Hydrology: Regional Studies. Elsevier B.V., 6 (Supplement C), pp. 82 - 89. doi: 10.1016/j.ejrh.2016.03.004. | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989693.19/warc/CC-MAIN-20210512100748-20210512130748-00205.warc.gz | CC-MAIN-2021-21 | 2,306 | 13
https://www.satcompetition.org/2002/onlinereport/node23.html | code | The competition revealed many expected results as well as a few surprising ones. First of all, incomplete solvers appeared to be much weaker than complete ones. Note that no incomplete solver won any category of satisfiable formulas, even though these categories were intended for them rather than for complete solvers (note that, in theory, randomized one-sided-error algorithms accept potentially more languages than deterministic ones). In the Industrial and Handmade categories only one incomplete solver (UnitWalk) was in the top five. This may be due to the need for specific tuning of local search algorithms (noise, length of walk, etc.) and, probably, to the lack of new ideas for the use of randomization (except for local search). If automatic tuning (for similar benchmarks) of incomplete solvers is possible, how should it be incorporated into the competition?
On industrial and handmade benchmarks, zchaff and algorithms close to it (Berkmin and limmat) dominated. On the other hand, the situation on randomly generated instances was quite different: these algorithms were not even in the top five! Also, randomly generated satisfiable instances were the only category where incomplete algorithms were competitive (four of them, dlmsat1(2,3) and UnitWalk, were in the top five). Concerning the unsatisfiable instances, the top-five list also looks quite different from those of the other categories.
In fact, only two solvers appeared in all the top-five lists for the three categories Industrial/Handmade/Random: a non-zchaff-like complete solver, 2clseq, for SAT+UNSAT and an incomplete solver, UnitWalk, for SAT. However, they did not win anything. The (unsurprising) conclusion is that specialized solvers indeed perform better on the classes of benchmarks they are specialized for. It also confirms that our choice of categories was right. But maybe an award should be given to algorithms that perform uniformly on all kinds of instances (although part of the community was against an "overall winner" for the present competition).
Another conclusion of the competition is that almost all benchmarks that remained unsolved within 40 minutes on a P-III-450 (or a comparable number of CPU cycles on a faster machine) were not solved in 6 hours either. This may be partly due to the fact that few people experimented with the behaviour of their solvers for that long. Note that the greatest number of second-stage benchmarks was solved in the Industrial SAT+UNSAT category, the one where probably the greatest number of experiments is made by the solvers' authors. Also, many solvers crashed on huge formulas (probably due to lack of memory).
It is no surprise that the smallest unsolved unsatisfiable benchmark (an xor-chain instance by L. Zhang) belongs to the Handmade category. In fact, many of the unsatisfiable benchmarks in this category are also very small. However, it seems that all these benchmarks are hard only for resolution (and hence for DP- and DLL-like algorithms), for which exponential lower bounds have been known for decades (see, e.g., [Tse68,Urq87]). Therefore, if non-resolution-based complete algorithms appear, these benchmarks will probably be easy for them. For example, LSAT (fixed version) and eqsatz (which did not participate), which employ equality reasoning, can easily solve the parity32 instances that remained unsolved in the competition.
On the other hand, the smallest unsolved (provably) satisfiable benchmark (an hgen2 instance by Edward A. Hirsch) was made using a random generator. Other small hard satisfiable benchmarks also belong to the Random category. These benchmarks are much larger than the hard unsatisfiable ones (5250 vs 844 literal occurrences). This is partly due to the fact that no exponential lower bounds are known for DPLL-like algorithms on satisfiable formulas (in fact, the only such lower bounds we know of, for other SAT algorithms, are those of [Hir00b]). In contrast, no random generator was submitted for (provably) unsatisfiable instances. (Of course, some of the handmade unsatisfiable instances can be generated using random structures behind them; however, this does not give a language not known to be in coRP (nor even in ZPP).) Note that the existence of an efficient generator of a coNP-complete language would imply NP=coNP (the random bits form a short certificate of membership).
Probably, in the end, the main thing about the competition is that it attracted completely new solvers (e.g., 2clseq and limmat) and a lot of new benchmarks.
Some challenging questions, drawn from the conclusion, are: | s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247481832.13/warc/CC-MAIN-20190217091542-20190217113542-00612.warc.gz | CC-MAIN-2019-09 | 4,497 | 8 |
https://kubicle.com/course/introduction-to-data-databases/ | code | Introduction to Data and Databases
If data is a mystery to you, the first step is learning how data is stored, how it is read, and how it can be analyzed. In this course, you’ll take a high-level tour of these data fundamentals, to boost your confidence in taking on projects.
Beginner · 10 Lessons · 90 Minutes · CPD Credits
About This Course
If your data analysis is going to use multiple data sources, all formatted in different ways, you may not know where to start. In this course, you’ll discover how to understand the data you have and how best to approach analyzing it.
Across 10 lessons, you’ll learn how to handle structured and unstructured data, and how this helps in analyzing databases.
By the end of the course, you should understand what types of data your source files contain and how they need to be manipulated to gather insights.
Understand the data preparation process
Gather and explore data
Cleanse and transform data
Manage missing data and outliers
Format data correctly | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816977.38/warc/CC-MAIN-20240415111434-20240415141434-00136.warc.gz | CC-MAIN-2024-18 | 1,009 | 12 |