url stringlengths 13–4.35k | tag stringclasses 1 value | text stringlengths 109–628k | file_path stringlengths 109–155 | dump stringclasses 96 values | file_size_in_byte int64 112–630k | line_count int64 1–3.76k
---|---|---|---|---|---|---|
http://www.twoplayermode.com/games/orangutans-great-escape,1084/ | code | Helping the two orangutans escape can become a great adventure. Try your hand, and ask a buddy or pal to help you free these two apes. Do you know how to reach each door? Nervously dreaming about bananas, traverse the platforms and avoid the chasms and other dangers! This platform-filled arcade game will test your skills. Step by step, try to collect your prizes - it's worth boasting in the comments about what you have accomplished together. The cute critters will prove that they're interesting companions! | s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578533774.1/warc/CC-MAIN-20190422015736-20190422041736-00388.warc.gz | CC-MAIN-2019-18 | 466 | 1 |
https://www.appletechsoft.com/what-is-net-and-how-it-is-useful-for-custom-software-development/ | code | There are several frameworks and programming languages available for custom software development. At AppleTech, we have expertise with many of them; however, Microsoft .NET has proven to be the most reliable. It’s an excellent choice for business applications.
What is .NET?
The .NET Framework is an open-source development platform that provides a wide variety of libraries and tools for creating mobile, online, game, microservice, and Internet of Things (IoT) applications. The 60 programming languages that .NET’s application environment supports give it a tremendous degree of versatility.
Because of this, the .NET framework and its successor, .NET Core, hold about 34% and 31% of the worldwide framework market share, respectively. With such widespread adoption, .NET has shown itself to be a platform capable of supporting the development of highly scalable applications for use by both small businesses and multinational conglomerates.
If your company already uses other Microsoft applications (for example, Azure cloud hosting, Office 365, Windows OS, and more), it may make sense to take advantage of the .NET framework’s built-in capabilities and integrations with them.
You can still rely on us if you’re exploring your options and haven’t made up your mind about .NET just yet. We are not limited to any one methodology and always suggest what we believe to be the most appropriate framework for each customer. Discussing your needs with us will allow you to choose the option that works best for your company.
Types of Dot Net Development Services
A number of apps can be created using .NET development services. The result can be a .NET-based website, a website that uses web services, or a website built specifically for one business. .NET development services can be broken down into the following categories:
Web Application Development:
Because the majority of applications today are web-based, .NET can be used to create various web apps, web forms, MVC applications, and even web servers. The .NET framework is used to create any sort of web application according to a defined development process.
When a specific business need cannot be fulfilled with off-the-shelf functionality, custom web apps can be built using .NET web application development services. Today, .NET can be used to create almost any kind of online application imaginable.
Enterprise Application Development:
While there are more specialized frameworks for building Windows apps and services, it is feasible to construct them using general implementations of .NET. Additionally, .NET gives you a lot of freedom when designing Windows GUIs. If you are building software for Windows and need particular Windows services, be sure to choose the Windows-specific implementations of .NET.
Mobile Application Development:
Even though .NET isn’t the most widely used mobile platform, it can be used to create mobile apps. As a consequence, you may run across .NET developers who focus on mobile programming. Both Xamarin and Mono can help businesses build mobile apps.
Other Specialized Services:
There are a number of tools available in the .NET Framework that can facilitate the creation of more specialized applications. As a result, you might come across .NET development companies that focus only on particular niches, such as apps for mobile devices.
Reasons to consider .NET for Custom Software Development
After going through the fundamentals, let’s explore the advantages of .NET.
Highly Secure Environment
Everyone is now quite concerned about cybersecurity, but our customers in highly sensitive industries like healthcare or financial services are particularly worried about it. We are aware that effective security must begin at the very start of the development process.
If the development environment you’re using lacks the necessary capabilities, it might be challenging to create a secure custom software application. With .NET, it’s easy to set up role-based security, monitor for threats, and do a lot of other things to keep your application safe.
.NET is Developer Friendly
Because of its widespread adoption, .NET has extensive institutional backing. You’re never on your own in the .NET world, thanks to education and certifications, open-source extensions, and developer assistance.
Microsoft is dedicated to providing a platform that is beneficial for both businesses and developers. This makes it simpler to obtain help for your application since there is a large community of .NET developers at all skill levels.
.NET Works Outside of Microsoft
.NET is a cross-platform framework, despite the fact that most people identify it with Microsoft. Using .NET, you can develop applications that work with Microsoft products as well as those that operate on iOS, Linux, Android, and other platforms. Only 11 of the 60 languages supported by the .NET framework were developed by Microsoft.
Building cross-platform apps with .NET in the Visual Studio development environment enables us to reuse a single codebase across several application variants. For instance, by leveraging code we can share across all of those platforms, we can maximize time and resources while developing a custom software system that includes versions for iOS, Android, and the web.
Your software has to evolve along with your company’s growth and development. Because of .NET’s scalability, it is possible to increase user numbers, functionality, data usage, and other factors.
An application’s overall efficiency improves as a result of economies of scale. You can create blueprints using .NET, which enables developers to reuse particular elements across many software projects. If you’re developing an app, you can make changes to one section of code without impacting other parts. When it’s time to expand, you can add new modules, update existing ones, and benefit from work that has already been finished, tested, and distributed to users.
Makes Maintenance Easy
If you’re upgrading to the newest version of .NET or only need to update certain parts of your system, .NET tools make it easy. You can ensure that your customers are constantly getting the best possible experience by running updates, doing regression testing, and releasing a brand-new version immediately.
With the help of Application Insights, you can automatically detect and track errors, security risks, performance problems, anomalies, and more, giving you the time you need to take quick action and keep your system updated to increase its lifespan.
Limitations of .NET
There are some less desirable characteristics to .NET, just as there are with every programming framework. For instance, certain licenses, like those for Visual Studio, can be expensive, even though the platform itself is open source and free.
More importantly, keep in mind that Microsoft built and remains in charge of this platform. You have no say if Microsoft chooses to make significant changes to it or to stop delivering security updates. To keep your apps running, you would need to come up with a fresh approach.
However, there is no reason to believe that .NET will go away any time soon, given that it has been around for years, holds a sizable portion of the market, and earns Microsoft a substantial sum of money. Want to talk more about whether .NET is the right framework for you? AppleTech can help you out with all the questions you have about .NET-based custom software development. Get in contact with us to explore how we can help you achieve your business objectives with an experienced team. | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100229.44/warc/CC-MAIN-20231130161920-20231130191920-00530.warc.gz | CC-MAIN-2023-50 | 7,516 | 37 |
https://support.quest.com/technical-documents/rapid-recovery/6.6/user-guide/84 | code | The Universal Recovery Console (URC) is a recovery environment embedded into a bootable ISO image, and is used to perform bare metal restore. When you boot the BMR target from the boot CD ISO image, the URC environment appears. The user interface appears slightly differently for Windows and Linux targets.
- The URC for Windows targets uses a graphical user interface based on Windows Preinstallation Environment (Win PE) 10 OS.
- The URC for Linux targets uses a Linux Ubuntu 16 command line interface.
On a Linux recovery target machine, the sole purpose of the URC is to provide single-use credentials to connect the BMR target machine with a running Rapid Recovery Core instance to perform the restore process.
In addition to providing single-use credentials for BMR, the URC for Windows contains a full-featured recovery environment. It includes function buttons and (when launched) a console.
The buttons in the Windows-based URC perform the following functions:
- Start Universal Recovery Console: Launches the console from which you can manage drivers on the boot CD, manage additional drivers on the BMR target, perform bare metal restore from a Rapid Recovery archive, and monitor the restore progress.
- Useful tools: A menu to access tools that may be required to help with your bare metal restore. For example, to launch a browser that runs on Windows PE, select Chromium from the Tools menu. For specific information, see About Windows Universal Recovery Console tools.
- A power menu that includes options to reboot or shut down the BMR target machine. Each time you reboot, the authentication key is refreshed.
The Windows-based URC includes the following tabs:
- Boot CD Driver Manager: Lets you manage the drivers available on the boot CD. Click the arrow next to each item to expand to show its child objects. After you make changes, click Force Load to apply the changes and test the drivers. NOTE: Items listed under Other devices do not yet have the correct drivers associated with them.
- Existing Windows Driver Manager: Lets you load and manage drivers not included on the boot CD.
- Restore from Archive: Lets you perform a BMR from a Rapid Recovery archive.
- A progress tab that lets you monitor the process of the bare metal restore. This tab only appears when a restore takes place.
For both Windows and Linux BMR target machines, the Authentication area shows the following information:
- IP address: When an appropriate network adapter is loaded, the IP address of the BMR target machine is displayed.
- Password: A new single-use password is generated each time the BMR target machine is started using the boot ISO image.
Write down the authentication information. You will need this information to connect the BMR target machine with the Rapid Recovery Core Console to complete the restore process.
The Windows-based Universal Recovery Console (URC) includes access to tools that may assist you in completing a bare metal restore (BMR).
You can find the following tools by clicking (Useful tools) from the top buttons displayed on the BMR target machine when booted into the URC. These tools include the following:
- Far Manager. This tool is similar to Windows Explorer. It provides a way to browse for files on the server until you complete the BMR and install an operating system with its own browsing function, such as Windows Explorer.
- Chromium. This open-source browser lets you access the Internet on a server that has a network controller loaded through the URC.
- PuTTY. This tool is an open-source terminal emulator. In the context of performing a BMR in Rapid Recovery, it lets you connect to a NAS storage device that does not include a user interface. This capability may be necessary if you want to restore from an archive stored on a NAS.
- Notepad. As in a Windows operating system, this text tool lets you type unformatted notes and view log files.
- Task Manager. As in a Windows operating system, this tool lets you manage processes and monitor the performance of the server while the restore is in progress.
- Registry Editor. As in a Windows operating system, this tool lets you change the system registry of the BMR target.
- Command Prompt. This tool lets you perform commands on the BMR target outside of the URC until you install a user interface.
The following tasks are prerequisites for this procedure.
The Universal Recovery Console lets you add any drivers that were not included in the ISO image but are required for a successful bare metal restore.
This task is part of the process for Using the Universal Recovery Console for a BMR.
When creating a boot CD, you can add necessary drivers to the ISO image. After you boot into the target machine, you also can load storage or network drivers from within the Universal Recovery Console (URC).
If you are restoring to dissimilar hardware, you must inject storage controller, RAID, AHCI, chipset, and other drivers if they are not already on the boot CD. These drivers make it possible for the operating system to operate all devices on your target server successfully after you restart the system following the restore process.
Complete the steps in one of the following procedures to load drivers using the URC:
Complete the following procedure to use a portable media device to load drivers in the Universal Recovery Console (URC).
- On an internet-connected machine, download the drivers from the manufacturer’s website for the server and unpack them.
- Compress each driver into a .zip file using an appropriate compression utility (for example, WinZip).
- Copy and save the .zip file of drivers onto a portable media device, such as a USB drive.
- Remove the media from the connected machine and insert it into the BMR target machine.
- On the target server, load the boot CD ISO image from removable media and start the machine.
The Quest splash screen appears.
- To start the URC, click the (Start URC) button.
The URC opens to the Boot CD driver manager tab.
- Expand the Other devices list.
This list shows the drivers that are necessary for the hardware but are not included in the boot CD.
- Right-click a device from the list, and then click Load Driver.
- In the Select driver load mode window, select one of the following options:
- Load single driver package (driver will be loaded without verification for device support)
- Scan folder for driver packets (drivers for selected device will be searched in selected folder)
- Expand the drive for the portable media device, select the driver (with file extension .inf), and then click OK.
The driver loads to the current operating system.
- In the Info window, click OK to acknowledge that the driver successfully loaded.
- Repeat this procedure as necessary for each driver you want to load. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817206.28/warc/CC-MAIN-20240418093630-20240418123630-00003.warc.gz | CC-MAIN-2024-18 | 6,691 | 59 |
https://training.media-and-learning.eu/courses/certify/lectures/34743754 | code | “The highest form of knowledge is empathy, for it requires us to suspend our egos and live in another's world. It requires profound, purpose-larger-than-the-self kind of understanding.” - Plato
Demonstrate your ability to understand and share the feelings of another by answering the questions below 👇
- How do you cultivate empathy?
- Would you consider yourself empathetic?
- Do you find it easy to understand other people’s feelings, emotions and problems?
- Do you actively seek to develop empathy? How? (volunteering, reading books…) | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474445.77/warc/CC-MAIN-20240223185223-20240223215223-00435.warc.gz | CC-MAIN-2024-10 | 546 | 6 |
https://sourceforge.net/projects/free-cad/?sort=usefulness&stars=5 | code | WARNING: FreeCAD has moved!
FreeCAD code and release files are now hosted on github at https://github.com/FreeCAD/FreeCAD
Only older files and code are available here.
FreeCAD is a general purpose feature-based, parametric 3D modeler for CAD, MCAD, CAx, CAE and PLM, aimed directly at mechanical engineering and product design but also fits a wider range of uses in engineering, such as architecture or other engineering specialties. It is 100% Open Source and extremely modular, allowing for very advanced extension and customization.
FreeCAD is based on OpenCasCade, a powerful geometry kernel, features an Open Inventor-compliant 3D scene representation model provided by the Coin 3D library, and a broad Python API. The interface is built with Qt. FreeCAD runs exactly the same way on Windows, Mac OSX and Linux platforms.
- Rock-solid OpenCasCade-based geometry kernel, allowing complex 3D operations on complex shape types, with native support for concepts like BREP, NURBS, boolean operations, and fillets
- Full parametric model allowing any type of parameter-driven custom object, which can even be fully programmed in Python
- Complete access from the built-in Python interpreter, macros, or external scripts to almost any part of FreeCAD, whether geometry creation and transformation, the 2D or 3D representation of that geometry (the scenegraph), or even the FreeCAD interface
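As a quick illustration of that Python access, here is a minimal macro sketch; it is not taken from this page, and it assumes it runs inside FreeCAD, whose bundled interpreter provides the FreeCAD module:

# Minimal FreeCAD scripting sketch (illustrative values; run it in
# FreeCAD's Python console or as a macro, where FreeCAD is importable).
import FreeCAD

doc = FreeCAD.newDocument("Demo")         # create a fresh document
box = doc.addObject("Part::Box", "Box")   # add a parametric box feature
box.Length = 20                           # parameters drive the geometry...
box.Width = 10
box.Height = 5
doc.recompute()                           # ...and recompute rebuilds the model
print(box.Shape.Volume)                   # query the resulting solid (20 * 10 * 5)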
Very good program! Thank you guys!
It's an excellent software. It still lacks: - assembly management - FEM support for Code ASTER (open source certified engineering software). There's just Calculix support :-( - structured mesh support (it's needed for FEM model patch test) - spline in sketch
Powerful and lightweight program. Easy to use.
FreeCAD is the CAD software of the future, for everything from mechanical work to architecture. But you know, the future is now.
FreeCAD is a powerful open source (LGPL license) parametric 3D CAD program. It is still under development, and version 0.14 was recently released. FreeCAD has an active web forum too. See the Users Showcase area to see some of what has been done with FreeCAD. There are also forum areas for those people who may have trouble installing FreeCAD, and another area for people who need help using FreeCAD. I have some YouTube video tutorials under the user name bejant000 that are intended to help new users learn FreeCAD to make 3D models. I've been using FreeCAD for just over a year (so yeah, I'm biased) and thought I'd post this info to help new FreeCAD users. | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00101-ip-10-171-10-70.ec2.internal.warc.gz | CC-MAIN-2017-04 | 2,636 | 13 |
http://www.geekstogo.com/forum/topic/335637-strange-popup/ | code | This strange popup (Console, Inspector, Debugger, etc.) appeared while I was reading email. I never saw it before. Can anyone recognize what it is?
ex-agent: Thank you. ...batpark
It's just the Firefox tools console. Just hit Ctrl+Shift+I and it'll toggle on and off. | s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948551162.54/warc/CC-MAIN-20171214222204-20171215002204-00796.warc.gz | CC-MAIN-2017-51 | 389 | 7 |
https://wiki.services.openoffice.org/wiki/Documentation/OOo3_User_Guides/Writer_Guide/Footnotes_and_endnotes | code | Using footnotes and endnotes
Footnotes appear at the bottom of the page on which they are referenced. Endnotes are collected at the end of a document.
To work effectively with footnotes and endnotes, you need to:
- Insert footnotes.
- Define the format of footnotes.
- Define the location of footnotes on the page; see Chapter 4 (Formatting Pages).
You can also change footnotes to endnotes and vice versa.
To insert a footnote or an endnote, put the cursor where you want the footnote/endnote marker to appear. Then select Insert > Footnote from the menu bar or click the Insert Footnote Directly or Insert Endnote Directly icon on the Insert toolbar.
A footnote (or endnote) marker is inserted in the text, and the cursor is relocated to the footnote area at the bottom of the page (or to the endnote area at the end of the document). Type the footnote or endnote content in this area.
If you use Insert > Footnote, the Insert Footnote dialog box is displayed. Here you can choose whether to use the automatic numbering sequence specified in the footnote settings and whether to insert the item as a footnote or an endnote.
If you use the Insert Footnote Directly or Insert Endnote Directly icon, the footnote or endnote automatically takes on the attributes previously defined in the Footnote Settings dialog box.
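The same insertion can also be scripted through the UNO API. The following Python macro is a minimal, hedged sketch rather than part of this guide; it assumes a macro context in which the office suite predefines XSCRIPTCONTEXT:

# Minimal pyUNO macro sketch (assumes it runs as a macro inside the office
# suite, which injects XSCRIPTCONTEXT into the script's globals).
def insert_footnote_at_cursor(*args):
    doc = XSCRIPTCONTEXT.getDocument()                   # the current Writer document
    cursor = doc.getCurrentController().getViewCursor()  # where the user is typing
    footnote = doc.createInstance("com.sun.star.text.Footnote")
    doc.getText().insertTextContent(cursor, footnote, False)  # place the marker
    footnote.setString("Footnote text goes here.")       # content of the note area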
You can edit an existing footnote/endnote the same way you edit any other text.
To delete a footnote/endnote, delete the footnote marker. The contents of the footnote/endnote are deleted automatically, and the numbering of other footnotes or endnotes is adjusted automatically.
Defining the format of footnotes/endnotes
To format the footnotes themselves, click Tools > Footnotes. On the Footnote Settings dialog box, choose settings as required. The Endnotes page has similar choices.
Changing footnotes to endnotes and vice versa
To change a footnote to an endnote or vice versa, double-click on the footnote/endnote anchor. You can then alter the text of the footnote/endnote.
Content on this page is licensed under the Creative Commons Attribution 3.0 license (CC-BY). | s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251801423.98/warc/CC-MAIN-20200129164403-20200129193403-00349.warc.gz | CC-MAIN-2020-05 | 2,090 | 18 |
http://www.indeed.com/job/Ruby-on-Rails-Developer-at-Istonish-in-Denver,-CO-a1e5e6f4257db1db | code | Istonish is a minority owned, privately-held, award-winning business enterprise, headquartered in Denver, Colorado, with offices located in Texas, Minnesota, and Wyoming. When you join Istonish, you become a part of the team dedicated to delivering outstanding technical and customer service to our clients.
Position: Engineer (Ruby on Rails)
Downtown Denver Marketing client is looking for a talented Engineer who has a track record of developing and maintaining scalable, high-performance commercial web applications using Ruby on Rails. The successful candidate should have experience with design and development across the full software project life cycle. You should be familiar with agile methods and be able to thrive in a collaborative, fast-paced, high-energy environment. We’re looking for a well-rounded developer who is passionate about creating great products to join our team.
2+ years of experience in developing high-scale web applications (eCommerce preferred)
2+ different applications
Full-stack Rails development with Ruby 1.8.7 and 1.9.x
Experience with developing for Rack interface
Understand modern web standards, browser compatibility and web development best practices
Familiarity with git or another distributed version control system
Experience using Linux/Unix command line
Experience developing applications using MySQL as a database
Experience with a document database like Couchdb or MongoDB is a plus
Good understanding of web application security and secure coding practices
Ability to work in a very fast-paced startup environment
Good communications and collaboration skills
Fluent in English language
Degree in Computer Science or equivalent experience
Preference given to candidates with experience in these additional technologies/frameworks: delayed_jobs, sinatra, activemerchant, jquery-ui, couchdb, memcached, i18n
We help you do more with less.
Technology can be complicated. But, what we do for our clients is simple. Istonish helps you accomplish... | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645378542.93/warc/CC-MAIN-20150827031618-00015-ip-10-171-96-226.ec2.internal.warc.gz | CC-MAIN-2015-35 | 1,992 | 20 |
http://www.utexas.edu/its/alerts/interruptions/4793/ | code | ITS Services Status
This maintenance is complete
The maintenance is being extended by 30 minutes.
A new maintenance window has been published for Python Production Environment (PyPE).
Start: Thursday, January 16, 2014 7:00 AM
End: Thursday, January 16, 2014 8:00 AM
Pype deployment interface upgrade
We will be upgrading the Pype deployment interface (pype.its.utexas.edu) to use the latest Pype tools. The deployment interface will be unavailable during this maintenance.
Last updated on Jan 16, 2014 9:26 AM | s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824226.79/warc/CC-MAIN-20160723071024-00182-ip-10-185-27-174.ec2.internal.warc.gz | CC-MAIN-2016-30 | 509 | 9 |
http://advogato.org/person/blizzard/diary/22.html | code | What interesting things have I been hacking on recently?
Let's see. Killing bugs, as usual. Making things faster. I gave scrolling and expose handling a swift kick in the ass in Mozilla and things are a lot faster now. I also rewrote all the focus handling in Mozilla and killed a couple of problems in the process.
I've also been hacking on embedding and I can't believe it, people are actually using my code. | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125719.13/warc/CC-MAIN-20170423031205-00065-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 410 | 3 |
http://www.programmersheaven.com/mb/VBNET/318993/319027/re-can-anyone/?S=B20000 | code | lol.... General questions usually don't get answered. But I am feeling nice tonight! Here are a few things to keep in mind.
- Use Try,Catch,Finally statements for error handling throughout your code.
- Use the Debug and Trace classes if your application is complex, or will require troubleshooting after implementation.
- Avoid late binding it KILLS performance (you may have to google that if you dont know what late binding is).
- Even if you're just learning, make a design document. It can be as simple as a dataflow and business logic. But it is a very good habit to make one before you start, then update as necessary. If you want to do it right, I HIGHLY recommend creating UIDs for all objects, processes, etc. in the design and commenting the UID in the code to relate it to the doc.
-Oh yeah, comment everywhere; it's a good habit and saves time in the future. A comment doesn't have to explain the syntax, just what the goal of the line(s) is.
-Use some naming convention for your variables; many people still use Hungarian notation, which is fine. However, it is based on prefixing with the data type, which in VB.NET is not as useful since you should be using well-defined objects. I personally use a scope prefix, which to me is infinitely more useful.... Whatever you do, be diligent about keeping it uniform in your code.
prefix" Used for:
m_ Class,structure,interface member
p_ Public Property on class,struc,interface
l_ local member declared in a Sub or Funciton
v_ local Parameter passed by Value into a sub or function
r_ local parameter passed by reference into a sub of function
-Make all class members private and create Public properties if they need to be accessed by another object. | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696384181/warc/CC-MAIN-20130516092624-00080-ip-10-60-113-184.ec2.internal.warc.gz | CC-MAIN-2013-20 | 1,681 | 14 |
https://www.dailysynch.com/jobs/senior-backend-developer-remote-at-montech-studios-inc | code | Montech Studios INC is a digital transformation enterprise software company that uses blockchain technology to develop cutting-edge solutions for small, medium, and large scale enterprises across different niches. Our clients include startups, Fortune 500 companies, NGOs, universities, financial service companies, and artists among others. We help organizations increase revenue, reduce cost, and improve day-to-day processes with technology. Montech Studios INC provides all stages of development from idea to MVP, to actively providing development support. Montech Studios INC specializes in the development of reliable and scalable software solutions that perfectly suit client needs.
We are recruiting to fill the position below:
Job Title: Senior Backend Developer
- We are looking for an experienced Back-End developer responsible for managing the interchange of data between the server and users.
- Your primary focus will be on the development of all server-side logic, definition, and maintenance of the central database, and ensuring high performance and responsiveness to requests from the front-end.
- You will also be responsible for integrating APIs. You should be able to develop and maintain functional and stable web and mobile applications to meet clients’ needs.
- Building reusable code and libraries for future use.
- Write clean code to develop functional web and mobile applications
- Integration of user-facing elements developed by front-end developers with server-side logic.
- Optimization of the application for maximum speed and scalability.
- Troubleshoot and debug applications
- Perform UI Tests to optimize performance
- Implementation of security and data protection
- Gather and address technical and design requirements
- Design and implementation of data storage solutions.
- 3+ years of experience in a similar role
- Strong knowledge of blockchains such as Ethereum, ripple, etc
- Strong understanding of concurrency and writing efficient and safe multithreaded code.
- Familiarity with P2P networks.
- Familiarity with basic cryptography.
- RESTful API
- API development
- Database Design
- Database Maintenance
Application Closing Date
Method of Application
Interested and qualified candidates should: | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224656737.96/warc/CC-MAIN-20230609132648-20230609162648-00399.warc.gz | CC-MAIN-2023-23 | 2,247 | 27 |
https://community.oracle.com/message/12561490 | code | Before upgrading to Weblogic 12.1.2 there is one question: Is it possible that WLS 12.1.2 web services can communicate with WLS 8.1 clients or do we have to roll out new client jars to all our customers?
The WebLogic Server 8.1 Web services stack has been removed in the WebLogic Server 12.1.1 release. Therefore, WebLogic Server 8.1 Web services applications will no longer work. Oracle recommends that you upgrade such applications to the WebLogic JAX-RPC or JAX-WS stacks, per the instructions in "Upgrading an 8.1 WebLogic Web Service to 12.1.x".
The same applies to client apps as well. We recommend that you upgrade and roll out new client jars to access the services.
http://econ201online.umwblogs.org/assignments/ | code | What follows is all the assignments for this class, except for the Module quizzes. By assignment I mean anything you have to turn in to me. Please submit all assignments (except the module quizzes) by email. Thanks!
- Course Objectives
- Course Texts
- Exam Dates
- Final Grade | s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886126027.91/warc/CC-MAIN-20170824024147-20170824044147-00090.warc.gz | CC-MAIN-2017-34 | 366 | 7 |
http://sibgrapi.sid.inpe.br/col/sid.inpe.br/banon/2001/03.30.15.38.24/doc/mirrorget.cgi?languagebutton=en&metadatarepository=sid.inpe.br/banon/2002/11.07.10.09.18&index=0&serveraddress=sibgrapi.sid.inpe.br+802&choice=fullrefer&lastupdate=2020:02.19.02.58.51+sid.inpe.br/banon/2001/03.30.15.38+administrator+%7BD+2000%7D&continue=yes | code | %0 Conference Proceedings
%A Guimarães, Silvio Jamil Ferzoli,
%A Leite, Neucimar Jerônimo,
%T Image decomposition in morphological residues: an approach for image filtering and segmentation
%B Brazilian Symposium on Computer Graphics and Image Processing, 13 (SIBGRAPI)
%E Carvalho, Paulo Cezar Pinto,
%E Walter, Marcelo,
%J Los Alamitos
%I IEEE Computer Society
%C Gramado, RS, Brazil
%K image segmentation, image decomposition, morphological residues, image filtering, image segmentation, image representation, gray-scale images, mathematical morphology.
%X Morphological residues represent an image in a hierarchical way by means of a decomposition of its structures and according to a size parameter λ. From this decomposition, we can obtain a relation between the different residual levels associated with the complexity of the image structures. In this work, we introduce a method for filtering of components in gray-scale images based on the morphological residue decomposition which takes into account a size parameter and a certain level of complexity of the different structures we want to be filtered.
%O The conference was held in Gramado, RS, Brazil, from October 17 to 20.
%1 SBC - Brazilian Computer Society | s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141197278.54/warc/CC-MAIN-20201129063812-20201129093812-00469.warc.gz | CC-MAIN-2020-50 | 1,235 | 14 |
https://gambas-buch.de/dwen/doku.php?id=k6:k6.9:k6.9.3:start | code | The DirChooser control (gb.form) allows the user to select a directory. There is an additional option to also display the contents of a selected directory.
The control can be created:
Dim hDirChooser As DirChooser

hDirChooser = New DirChooser ( Parent As Container ) As "EventName"
Internally, the DirView and FileView controls (optional) are used.
The DirChooser class has these selected properties:
- DirView: Returns the DirView control used internally by the DirChooser.
- FileView: Returns the FileView control used internally by the DirChooser.
- Returns or sets the icon for displaying a file or directory. Use this property to react to the Icon event.
- Returns or sets the picture used for the root directory.
- SelectedPath: Returns the selected directory path.
- A synonym for the SelectedPath property.
- Returns or sets whether the bookmark field is visible.
- Returns or sets whether the files are displayed with a detail view or with icons.
- Returns or sets whether the panel showing the contents of the directory is shown or hidden.
- Returns or sets whether hidden files and directories are shown.
- Returns or sets whether thumbnails are shown.
- Returns or sets whether the splitter button is visible.

Table 6.9.3.1.1 : Properties of the DirChooser class.
The DirChooser class has only one relevant method, Reload( ). It reloads the contents of the view - as if you had clicked on the “Refresh” button in the context menu.
The DirChooser class has these selected events, among others:
- This event is triggered when a user double-clicks on a directory.
- This event is triggered when the current directory changes.
- Icon ( Path As String ): This event is triggered when the icon for a specific file or directory of the control should be changed. The parameter Path is the file path to the icon.

Table 6.9.3.2.1 : Selected events of the DirChooser class.
You can create new directories via the context menu or trigger the creation via the following source code:
Public Sub btnCreateNewFolder_Click()
  DirChooser1.DirView.NewFolder()
End
You change the name of an existing directory via the context menu or via the following source code, which triggers the change:
Public Sub btnRenameFolder_Click()
  DirChooser1.DirView.Rename()
End
The following notes on the layout of the programme window and the basic configuration of the DirChoosers control have proven to be useful for practical use:
Public Sub Form_Open()
  FMain.Resizable = True
  FMain.Utility = True ' Minimum window size as defined in the IDE
  DirChooser1.Root = User.Home ' Default folder
  DirChooser1.ShowSplitter = True
  DirChooser1.ShowBookmark = True
  ...
End
If you switch on the splitter, you will also see the contents of the selected directory in the internal FileView. In addition, the ToolBar fills with more buttons and controls like a slider to change the display size of the preview images.
You can also use the available context menu of the internal FileView to make further settings or actions. A nested context menu supports you in working with the DirChooser control: | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817790.98/warc/CC-MAIN-20240421163736-20240421193736-00621.warc.gz | CC-MAIN-2024-18 | 3,227 | 33 |
https://www.samasana.io/post/product-prioritization | code | uring my time at Greenling and Organic, part of my job was to help the teams come up with a prioritization framework for product development. First step: talk to the customer. Know what they're really looking for. And don't forget about segmenting your customers. MECE - mutually exclusive, completely exhaustive. Understanding what product or product features your customer needs (vs. wants) will only come from 1) studying your analytics like crazy or 2) the simpler option - talking to your customers. What drives their behavior? Why are they using your product? Are they using everything your product has to offer? These are just a few things to consider, but the end result should lead you to a clear set of customer segments.
As an example, consider Greenling customers. Three major drivers of use came down to:
- Saving time / skipping the hassle of the grocery store,
- Health benefits of local, organic and pesticide use, and
- Cutting-edge consumers (require the best product available)
Each of these types of consumers had very specific needs of the site and we had to make sure whatever functionality we rolled out addressed each of these segments in a clear way. The next challenge we faced was determining how to map customer needs against business priorities. The simplest way to think about business needs is to think about the typical customer journey:
For simplicity's sake, it's cleaner to think about this in 3 areas:
- Customer Acquisition - How do we acquire our customers?
- Customer Engagement - How we get them to engage with us and drive revenue?
- Customer Retention - How do we lower our churn rate?
Ok, so you've now got your main customer needs mapped against your business needs. Now comes the fun part - testing each product feature against a battery of questions. Short of mapping functional dependencies, there's no easy way to objectively say "we need to build this feature before this other one". Having dealt with this problem enough times, I crafted a way to make this decision more objective in nature.
Here's how you do this:
On your X-axis: Customer Segment focus points
On your Y-axis: Business Initiatives
Understandably, different businesses will prioritize different needs. In this case, we added additional weight to customer engagement, as our focus was driving usability and increasing customer lifetime value. This is why you see a feature score of 125 for the Engagement row. By summing up each cell value in the row (25) and multiplying it by the assigned multiplier (5 - on a scale from 1-5), you get to the total value of 125 for the row.
Once you've scored the product feature, you've got a set of objective metrics. For the above feature, a calculated score of 310 of a total 500 was scored. In isolation, 310 means nothing. But against a feature that may score lower or higher, you now know why this feature has a higher development priority against other product features.
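To make that arithmetic concrete, here is a small Python sketch of the scoring logic; the initiative weights and cell values below are hypothetical stand-ins, not numbers from a real scorecard:

# Hypothetical scoring sketch: each row is a business initiative with a 1-5
# weight; each cell is the feature's fit for one customer segment.
ROW_WEIGHTS = {"acquisition": 4, "engagement": 5, "retention": 3}

def feature_score(cells):
    # Sum each row, multiply by the row's weight, then add the rows together.
    return sum(sum(row) * ROW_WEIGHTS[name] for name, row in cells.items())

new_feature = {
    "acquisition": [3, 2, 4],  # one score per customer segment (hypothetical)
    "engagement":  [9, 8, 8],  # row sum 25 * weight 5 = 125, as in the example
    "retention":   [5, 4, 3],
}
print(feature_score(new_feature))  # 197 - compare across features to rank them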
Do this for every product feature, print out the score cards, and plaster them up in your conference room. Anytime anyone has a question of why a feature is being built, you'll have a clear objective answer why.
In terms of calculating the score for each of the cells, this is really where collaboration helps. I've always been a fan of collaborative development and when possible I try to create a workshop type environment with stakeholders across all operating divisions. Holistic development (to me) means leveraging perspectives from everyone who touches the product and everyone who's closest to the customer - i.e. marketing, operations, customer service, engineering. Leveraging leadership from each group is generally fine (assuming the organization has a healthy level of communication from the ground-up). For a template copy of the above, click below!
I've been developing versions of this methodology over the past five years and it works great from a dashboard perspective. Some things to consider when using this:
Everything is always important.
- Be as analytical about scoring and weighting as you can be. Remember that nothing is important when everything is important. Keeping the ranking process as structured as possible lets you keep every comparison apples-to-apples. Each feature that gets added to the release gets put through x rows * y columns worth of questions. And the more features you have, the more of a grind this process will be - more espresso breaks.
Paper collects dust.
- You can clone the attached spreadsheet and build in an expanded sheet to log the answers to each question. This will help you remember why you scored a feature in a particular way. If you'd like to add to the spreadsheet, go for it! I'd love to take a look (github for spreadsheets anyone?) The reality is, the longer you leave this up without acting on it, the more you're going to forget the context anyway. Bring in as many key stakeholders as you need (in as few visits as possible) when workshopping this, but make sure everyone always understands why they're doing this. The more people who buy-in to what's happening, the better this process gets.
Measure your results.
- After all's said and done, now you get to measure the effectiveness of your product features. The calculations are representations of what your team believes is important, not exact figures. Use these results as a guide. If you're working agile and releasing in sprints, you might quickly see unexpected traction on a lower-scored feature. Don't ignore the shiny data. Readjust prioritization when it makes sense to do so. | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100508.53/warc/CC-MAIN-20231203193127-20231203223127-00183.warc.gz | CC-MAIN-2023-50 | 6,391 | 31 |
http://www.educationalappstore.com/app/cargo-bot | code | Cargo-bot is a coding and programming app that will be a challenge at all levels for students studying Computer Studies in Key Stage 2.
The app is fairly challenging, and we recommend that students work through the tutorials before attempting the tasks. The app has numerous levels and a scoring system where maximum stars can be achieved if the programming of the robotic arm is correct. Although there are tutorials, we found even the easy levels demanding. Each level has a hints section that may help you solve the coding.
Students may need some assistance at first but once the basics are mastered they are free to tackle the multiple levels.
From the Developer
Presenting Cargo-Bot. The first game programmed entirely on iPad® using Codea™
Cargo-Bot is a puzzle game where you teach a robot how to move crates. Sounds simple, right? Try it out!
- Beautiful retina graphics
- Friendish puzzles
- A game about programming, programmed entirely on iPad
- Record your solutions and share them on YouTube
- Learn more about how it was made by searching for Codea on the App Store | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122996.52/warc/CC-MAIN-20170423031202-00277-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 1,086 | 11 |
https://newsonline.library.vanderbilt.edu/2016/03/github-as-an-educational-technology-2/ | code | Friday, March 4th
Central Library, 418a
GitHub is a social network for computer programmers where programmers can store projects, view others’ projects, and collaborate with others to suggest changes.
Join Ramona Romero as she explains how to set up a GitHub account, how to communicate on GitHub with Markdown, and the various educational uses of GitHub in the classroom. In addition, the features that make GitHub so powerful, such as branching and forking, may be explored.
No RSVP needed.
Learn more at http://www.library.vanderbilt.edu/scholarly/ | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474853.43/warc/CC-MAIN-20240229202522-20240229232522-00280.warc.gz | CC-MAIN-2024-10 | 551 | 6 |
https://www.fact-archive.com/encyclopedia/Mathematical_induction | code | Mathematical induction is a method of mathematical proof typically used to establish that a given statement is true of all natural numbers, or otherwise is true of all members of an infinite sequence. A somewhat more general form of argument used in mathematical logic and computer science shows that expressions that can be evaluated are equivalent; this is known as structural induction.
The simplest and most common form of mathematical induction proves that a statement holds for all natural numbers n and consists of two steps:
- The basis: showing that the statement holds when n = 0.
- The inductive step: showing that if the statement holds for n = m, then the same statement also holds for n = m + 1. (The proposition following the word "if" is called the induction hypothesis. Do not call step 2 as a whole the induction hypothesis.)
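Written as a single formula, these two steps combine into the usual induction schema; the LaTeX rendering below is my own notation, not part of the original article:

% Induction schema: the basis and the inductive step together give P(n) for every n.
\[
  \bigl( P(0) \;\land\; \forall m\,\bigl( P(m) \Rightarrow P(m+1) \bigr) \bigr)
  \;\Longrightarrow\; \forall n\; P(n)
\]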
This method works by first proving the statement is true for a starting value, and then proving that the process used to go from one value to the next is valid. If these are both proven, then any value can be obtained by performing the process repeatedly. It may be helpful to think of the domino effect; if you have a long row of dominos standing on end and you can be sure that:
- The first domino will fall.
- Whenever a domino falls, its next neighbor will also fall.
then you can conclude that all dominos will fall.
Suppose we wish to prove the statement:

0 + 1 + 2 + ... + n = n(n + 1) / 2

for all natural numbers n. This is a simple formula for the sum of the natural numbers up to the number n. The proof that the statement is true for all natural numbers n proceeds as follows.
Check if it is true for n = 0. Clearly, the sum of the first 0 natural numbers is 0, and 0(0 + 1) / 2 = 0. So the statement is true for n = 0. We could define the statement as P(n), and thus we have that P(0) holds.
Now we have to show that if the statement holds when n = m, then it also holds when n = m + 1. This can be done as follows.
Assume the statement is true for n = m, i.e.,

0 + 1 + 2 + ... + m = m(m + 1) / 2

Adding m + 1 to both sides gives

0 + 1 + 2 + ... + m + (m + 1) = m(m + 1) / 2 + (m + 1)

By algebraic manipulation we have

m(m + 1) / 2 + (m + 1) = (m(m + 1) + 2(m + 1)) / 2 = (m + 1)(m + 2) / 2 = (m + 1)((m + 1) + 1) / 2

Thus we have

0 + 1 + 2 + ... + (m + 1) = (m + 1)((m + 1) + 1) / 2

This is the statement for n = m + 1. Note that it has not been proven as true: we made the assumption that P(m) is true, and from that assumption we derived P(m + 1). Symbolically, we have shown that:

P(m) → P(m + 1)
However, by induction, we may now conclude that the statement P(n) holds for all natural numbers n:
- We have P(0), and thus P(1) follows
- With P(1), P(2) follows
- ... etc
Start at b
This type of proof can be generalized in several ways. For instance, if we want to prove a statement not for all natural numbers but only for all numbers greater than or equal to a certain number b then the following steps are sufficient:
- Showing that the statement holds when n = b.
- Showing that if the statement holds for n = m ≥ b then the same statement also holds for n = m + 1.
This can be used, for example, to show that n² > 2n for n ≥ 3. Note that this form of mathematical induction is actually a special case of the previous form, because if the statement that we intend to prove is P(n) then proving it with these two rules is equivalent to proving P(n + b) for all natural numbers n with the first two steps.
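As a worked sketch of this shifted-basis form (my own LaTeX write-up of the inequality just mentioned, not text from the article):

% Basis (n = 3): 3^2 = 9 > 6 = 2 * 3.
% Inductive step: assume m^2 > 2m for some m >= 3; then
\[
  (m+1)^2 = m^2 + 2m + 1 > 2m + 2m + 1 > 2m + 2 = 2(m+1),
\]
% since 2m + 1 > 2, so the statement also holds for n = m + 1.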
Assume true for all lesser values
Another generalization, called complete induction, allows that in the second step we assume not only that the statement holds for n = m but also that it is true for n smaller than or equal to m. This leads to the following two steps:
- Showing that the statement holds when n = 0 (or some other basis value b).
- Showing that if the statement holds for all b ≤ n ≤ m then the same statement also holds for n = m + 1.
This can be used, for example, to show that

fib(n) = (Φⁿ − (−1/Φ)ⁿ) / √5

where fib(n) is the nth Fibonacci number and Φ = (1 + √5)/2 (the so-called golden mean). Given that fib(m + 1) = fib(m) + fib(m − 1), it can be proven that the statement holds for m + 1 if we can assume that it already holds for both m and m − 1. (Hence the proof of this identity requires a double basis - it requires an initial demonstration that the identity is true for both n = 0 and n = 1.)
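The algebra behind that inductive step can be sketched as follows (my own LaTeX rendering, based on the identity as reconstructed above; it uses the fact that Φ and ψ = −1/Φ are the two roots of x² = x + 1, so each satisfies x^(m+1) = x^m + x^(m−1)):

% Complete-induction step: the recurrence reaches back two places,
% which is exactly why the hypothesis must cover both m and m - 1.
\[
  \mathrm{fib}(m+1) = \mathrm{fib}(m) + \mathrm{fib}(m-1)
  = \frac{\Phi^{m} - \psi^{m}}{\sqrt{5}} + \frac{\Phi^{m-1} - \psi^{m-1}}{\sqrt{5}}
  = \frac{\Phi^{m+1} - \psi^{m+1}}{\sqrt{5}}
\]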
This generalization is just a special case of the first form:
- let P(n) be the statement that we intend to prove,
- then proving it with these rules is equivalent to proving the statement ' P(m) for all m ≤ n ' for all natural numbers n with the first two steps.
The last two steps can be reformulated as one step:
- Showing that if the statement holds for all n < m then the same statement also holds for n = m.
This is in fact the most general form of mathematical induction and it can be shown that it is not only valid for statements about natural numbers, but for statements about elements of any well-founded set, that is, a set with a partial order that contains no infinite descending chains (where < is defined such that a < b iff a ≤ b and a ≠ b).
This form of induction, when applied to ordinals (which form a well-ordered and hence well-founded class), is called transfinite induction. It is an important proof technique in set theory, topology and other fields.
Proofs by transfinite induction typically distinguish three cases:
- when m is a minimal element, i.e. there is no element smaller than m
- when m has a direct predecessor, i.e. the set of elements which are smaller than m has a largest element
- when m has no direct predecessor, i.e. m is a so-called limit-ordinal
Strictly speaking, it is not necessary in transfinite induction to prove the basis, because it is a vacuous special case of the proposition that if P is true of all n < m, then P is true of m. It is vacuously true precisely because there are no values of n < m that could serve as counterexamples. See three forms of mathematical induction for more on this point.
Proof or reformulation of mathematical induction
The principle of mathematical induction is usually stated as an axiom of the natural numbers; see Peano axioms. However, it can be proved in some logical systems; for instance, it follows if the following axiom is assumed:
The set of natural numbers is well-ordered
Note that the additional axiom is indeed an alternative formulation of the principle of mathematical induction. That is, the two are equivalent. See proof of mathematical induction. | s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488540235.72/warc/CC-MAIN-20210623195636-20210623225636-00581.warc.gz | CC-MAIN-2021-25 | 6,248 | 49 |
https://www.cloudynights.com/topic/40515-silly-noobie-question/ | code | Silly noobie question.
Posted 20 September 2005 - 10:40 PM
Thanks for being gentle on me.
Posted 21 September 2005 - 12:32 AM
You can, and many do indeed use a microfocuser on the back of the sct. In fact, Meade ships one as OEM on the GPS models.
Posted 21 September 2005 - 03:24 AM
Until there were CCDs, few people had noticed "mirror flop", let alone worried about it. | s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740838.3/warc/CC-MAIN-20200815094903-20200815124903-00268.warc.gz | CC-MAIN-2020-34 | 372 | 7 |
https://docs.automationanywhere.com/de-DE/bundle/enterprise-v11.3/page/enterprise/topics/aae-client/bot-creator/using-the-workbench/using-filters-in-the-task-editor.html | code | Using filters in the Workbench
By default, automated tasks are shown in the Automation Anywhere Workbench with all commands and actions in chronological order (sorted by time).
Using command filters
Filters enable you to customize your view. Filters are particularly helpful when working with larger automations.
Filters do not modify a task. Filters enable you to focus on specific commands in an automation without needing to modify the entire Task Bot.
To view or hide specific commands in a task, select or deselect the filters in the Filter Bar in the task Actions List.
- Select CATEGORIES to view category folders that contain the commands.
- Select VIEW ALL to view all of the commands in alphabetical order.
Using the Find Text search field
Use the Find Text search field to search in a task for names, text, variables, and other items. This can be helpful when editing longer tasks.
Using the Windows filter
Use the Windows filter for tasks that involve two or more applications, for example, the Calculator, Notepad, and Explorer. Use the Windows filter to view actions sorted by application.
In the drop-down list, select ALL, NONE, or the application title to view the related commands. | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100626.1/warc/CC-MAIN-20231206230347-20231207020347-00868.warc.gz | CC-MAIN-2023-50 | 1,191 | 13 |
http://newredsmoothiedetoxfactor.cf/zyky/cat-write-to-file-linux-20.php | code | Learn how to remount a filesystem in read-write mode under Linux. Basic Linux navigation and file management: in Linux, every file and directory is under the top-level root directory.
If you are a regular Linux command line user, I am sure you must have used the cat command. Just type the following command at the prompt, and then press Enter: cat sample.txt.
Create a Text File Using the Touch Command
Learning the shell - Lesson 7: I/O Redirection
Linux sort command - Sort lines of text files - Real world
Examples to delete or erase contents of any File in Linux without deleting the original Files.
bash scripting sending "cat: write error: Broken pipe,"
C++ – How to Create a Text File and Write in It - CodeBind.com
The cat command can read and write data from standard input and output devices.
7 Examples To Delete/Erase Contents Of File In Linux But
Linux: How to view log files on the shell? - FAQforge
Reading and writing files in linux is simple, you just use the standard utilities for reading files such as cat, grep, tail, head, awk etc.
[SOLVED] cat > filename << "EOF" syntax explanation
I found this command interesting enough to read more about it.
How To Rename Files Using Linux Tools And The Terminal
The rest of the permission bits control who can read or write to the pipe, just like a regular file. The file name is another clue, and on Linux, cat can write to it.
How To Quickly Generate A Large File On The Command Line
This guide will show you how to rename files using a file manager and the terminal, for when you accidentally called them all cat pictures.
To verify your file was created, you can use the ls command to show a directory listing for the file: ls -l sample.txt. You can also use the cat command to view the contents of your file.
Special device files /dev/full, /dev/null, and /dev/random are available; related parameters live in /proc/sys/kernel/random/ and can be displayed by the command cat, and processes can write to /dev...
In this post I will show you how to use bash redirection to write to a file using the cat command.
Methods to empty any regular file rather than deleting it.
how to read or view utmp, wtmp and btmp files in Linux
The cat command concatenates FILE(s), or standard input, to standard output.
Check out the top tips and tools on how to tail a log file on Windows and Linux.
mv command in Unix/Linux | move files/directories
It is a standard Unix program used to concatenate and display files.
A program that copies a source file into a destination file using POSIX system calls, demonstrating the open(), read(), and write() system calls on the Linux operating system.
linux - Write current date/time and number of files to aA faster way is to use the cat command with the name of the file and write contents of the.
The cat command in Linux allows you to concatenate files and display the output to the standard output, in most cases this is a screen.
While going through an article on Linux text processing commands, I came across Linux sort command.How to use the Linux chmod command to change file and directory permissions.Using an expect Script to send the output of cat and append it to a file.
UNIX & Linux Shell Scripting (Reading, Writing Files) | s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998440.47/warc/CC-MAIN-20190617063049-20190617085049-00122.warc.gz | CC-MAIN-2019-26 | 3,115 | 22 |
http://www.luisescobarblog.com/drawing-from-reference-part-01/ | code | ART/VIDEO – Drawing From Reference: Part 01
However, I’m also going to be specifically talking about using reference in order to change it for difference purpose.
In this video I introduce my reference and the principles I’m going to use to approach using it:
Want To Get a Video From Me?
If you liked this video and would like me to discuss something you think I might know about…
If you want to suggest a character or drawing you’d like me to draw, feel free to ask or suggest away, either here on my comments or anywhere else you’d like to contact me.
Just be aware that my Patreon patrons get their questions answered first and they get to see the videos weeks before anyone else. | s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886946.21/warc/CC-MAIN-20180117142113-20180117162113-00014.warc.gz | CC-MAIN-2018-05 | 696 | 7 |
https://forum.inductiveautomation.com/t/internet-explorer-vision-module/34739 | code | Has anyone developed an IE based module for vision? I have an application that needs to use ActiveX instead of WebGL and the Inductive automation web browser module seems to use chromium which uses WebGL. Any info would be appreciated. Thank you.
I don’t see how that would be possible. Vision clients are java applications. I don’t think there’s any way to hook COM/ActiveX technology into java.
You might be able to do something with Perspective though, with a custom module wrapping the necessary DOM.
You’ll have a hard time in the opposite direction; IE doesn’t support many of the browser features Perspective requires.
A long, long, long time ago there was an ActiveX control for Ignition, but it caused countless issues with performance and stability, and was a maintenance nightmare. The same technique used (wrapping native code into Java) could theoretically still be used, I think, but would be a huge pain, especially with the better platform security on Java 8 and 11. | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662560022.71/warc/CC-MAIN-20220523163515-20220523193515-00432.warc.gz | CC-MAIN-2022-21 | 992 | 5 |
https://community.secondlife.com/profile/853728-saraya-starr/content/page/5/?all_activity=1 | code | Ok, guess I have a Pet Peeve right now. I am getting tired of people (Well, one person in particular) Who IM's me with a Hello, how are you? If I see the message I will answer. But, a lot of times I just do not see it and so I do not reply. Then I will get another message saying, I said hello to you yesterday ( or whenever the message was sent) and you did not answer, I guess you didn't see it.... *sighs* This really ticks me off lately and I am biting my lip not to send back a really snarky answer. | s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153709.26/warc/CC-MAIN-20210728092200-20210728122200-00554.warc.gz | CC-MAIN-2021-31 | 504 | 1 |
https://www.cs.vassar.edu/_export/xhtml/courses/cs331-201701/top | code | This is the syllabus page for the Compiler Design class. It will evolve over the course of the semester, so check back regularly. You are responsible for keeping up-to-date with any changes here.
An online discussion group will be set up in due course.
Aho, Alfred V., Lam, Monica S., Sethi, Ravi & Ullman, Jeffrey D., Compilers: Principles, Techniques, and Tools (2nd edition).
The examination will be scheduled later in the semester.
The project consists of implementing a compiler front-end, in four parts: lexical analysis routines, a parser, symbol table management routines, and semantic routines. Each piece must be integrated with what has been completed previously.
The project should be implemented in either Java or Python.
All of the IDEs named above are available on the CS department Linux systems. Downloads are available here.
This schedule is provisional and subject to change. You are responsible for keeping yourself apprised of the current status of this schedule.
It is expected that you will have completed the appropriate readings before the corresponding class(es). You are responsible for keeping up with the readings and for ensuring that you have adequate notes of material covered in class, some of which may not be in the textbook: this includes lectures, assignments, hand-outs, additional reading, and so forth.
If you miss a class, it is your responsibility to make arrangements with classmates to acquire information presented.
| Week | Date | Topic | Reading | Handout |
| 1 | 22nd January | Introduction | Ch. 1, 2.1-2.5 | Full Handout |
| 2 | 29th January | Lexical analysis | Ch. 2.6, 3.1-3.5, 3.8 | Full Handout |
| 3 | 5th February | Languages, syntax and parsing | Ch. 4.1-4.3 | Full Handout |
| 4 | 12th February | Top-down parsing | Ch. 4.4 | Full Handout |
| 5 | 19th February | Bottom-up parsing | Ch. 4.5-4.6 | |
| 6 | 26th February | More powerful bottom-up parsing | Ch. 4.7 | |
| 7 | 5th March | Symbol tables | Ch. 2.7, 7.1-7.3 | |
| 8 | 26th March | Syntax-directed translation | Ch. 5 | |
| 9 | 2nd April | Intermediate code generation | Ch. 2.8, 6.1-6.3 | |
| 10 | 9th April | Runtime environments. The Vassar Interpreter | | |
| 11 | 16th April | Arithmetic expressions | Ch. 6.4-6.5, 7.1-7.4 | |
| 12 | 23rd April | Relational expressions and control flow | Ch. 6.6-6.7 | |
| 13 | 30th April | Optimisation and code generation | Ch. 9.1-9.4, 8.4-8.5, 8.7-8.8 | |
| 14 | 7th May | Procedure and function declaration and calls | Ch. 7.1-7.3 | |
| 16 | 21st May | End of class – happy vacation! | | |
| Component | Specification | Date released | Date due | Weighting |
| Lexical analyser (lexer) | Lexical analyser (lexer) | Thursday, 1st February | Thursday, 15th February | 10% |
| Parser | Parser | Tuesday, 20th February | Tuesday, 6th March | 7.5% |
| Symbol table routines | | | | 2.5% |
| Semantic actions (part 1) | | | | 5% |
| Semantic actions (part 2) | | | | 7.5% |
| Semantic actions (part 3) | | | | 12.5% |
| Complete compiler | | | Tuesday, 15th May | 17.5% |
Test files are available in the directories listed here.
To upload a test file, use the following command:
331upload [lexer|parser|code] <filename>
All components are due by 11:59:59 pm on the date specified. See the submission guidelines for further information.
The grading rubric used for project submissions is available here.
Late work will be subject to the late policy as described.
Please note that the due date for the complete system is the latest time possible under Vassar College legislation.
This will likely be a 48-hour take-home exam scheduled towards the end of the semester, not in the final exam block.
I support wholeheartedly and implement all the general policies of Vassar College, including but not limited to those relevant to students with disabilities, plagiarism, and respectful classroom etiquette.
Academic accommodations are available for students registered with the Office for Accessibility and Educational Opportunity (AEO). Students in need of disability (ADA/504) accommodations should schedule an appointment with me early in the semester to discuss any accommodations for this course that have been approved by the Office for Accessibility and Educational Opportunity, as indicated in your AEO accommodation letter.
Attendance is not required, but it is strongly recommended: in-class discussions and explanations will likely not be posted online.
While most of this class takes the form of lectures, I welcome and encourage discussion and questions on the subject under discussion. If you are unsure about something, if you are just curious, if you want to play “Devil's advocate” (within reason) — please ask questions.
I endeavor to foster an atmosphere in which it feels safe to fail. My classroom is a failure-okay zone: in fact, I welcome failure, since we frequently learn more by failure than by success.
Please set your mobile telephone to silent and put it away.
I prefer students not to use laptops in class, if possible. Please bring [(paper)+] and [(pen)+|(pencil)+|(stylus)+] to make notes.
Work submitted late will be subject to an incremental 10% penalty per day or part day late. Work received more than 96 hours (four days) late will automatically receive a grade of 0.
Exceptions to this policy may be made in the case of extenuating circumstances.
I accept and recognise that there may be occasions where significant problems occur. I am more than willing to work with students who are having genuine difficulties with their work because of life events beyond their control (e.g. illness or bereavement). However, I ask that you make me aware of such situations as soon as possible, and I require documentary evidence of some form in order to be able to make any accommodation.
The compiler is a significant programming project and should be treated like any other major piece of work. It is expected that you will make regular backups of your work and develop your code incrementally: together, these steps will increase the likelihood of you being able to recover from a significant loss of work.
As it is expected that you will make backups, the excuse, “I couldn't submit my work because [computer-related reason],” is no longer generally acceptable. | s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813059.39/warc/CC-MAIN-20180220165417-20180220185417-00714.warc.gz | CC-MAIN-2018-09 | 6,070 | 53 |
https://aws.amazon.com/de/blogs/containers/persistent-storage-using-efs-for-eks-on-bottlerocket/ | code | Persistent Storage using EFS for EKS on Bottlerocket
In this post, we discuss how to achieve persistent storage with Amazon Elastic Kubernetes Service (Amazon EKS) clusters running on Bottlerocket OS with Amazon Elastic File System (Amazon EFS). Persistent storage is needed for long-running stateful applications to persist state for high availability, or to scale out around shared datasets. This is true for machine learning customers who use Amazon EFS to store shared data sets and data science home directories, allowing them to train models in parallel across multiple containers and access data from individual data science notebook containers. Examples of open-source platforms for machine learning include MXNet, TensorFlow, Jupyter, JupyterHub, and Kubeflow.
Before we dive into the blog, we’ll cover a high-level overview of each of the components and services.
Bottlerocket is an open-source Linux-based operating system, minified and purpose-built for running container workloads. It is secure by design, following best practices for container security. It only includes tools needed to run containers, significantly reducing the attack surface and impact of vulnerabilities. By virtue of being minimal, nodes running Bottlerocket have a fast boot time thus enabling clusters to scale quickly based on varying traffic patterns or workload changes.
Amazon Elastic File System (Amazon EFS) provides a simple, serverless, elastic file system that allows you to share file data without provisioning or managing storage. It can be used with AWS services and on-premises resources, and is built to scale on demand to petabytes without disrupting applications.
Bottlerocket OS uses a read-only file system to improve overall security posture of the host OS. Taking this into consideration, we need to follow specific steps to implement a shared storage solutions across cluster nodes and the pods running on them. In this post, we will demonstrate how to leverage EFS with Bottlerocket OS to run stateful workloads and enable us to persist data.
Figure 1: Amazon EKS cluster on Bottlerocket using Amazon EFS
In our example, we have built an EKS cluster with three worker nodes running on Bottlerocket OS sharing an Elastic File System. We deployed two instances of a BusyBox application, app1 and app2, sharing a common /data mount.
In order to get started with an EKS cluster, there are a few mandatory prerequisites. You will need to follow the instructions on getting started with Amazon EKS, as well as install and configure eksctl, kubectl, the AWS Command Line Interface (CLI), and the Helm CLI.
It’s important to have the right IAM roles and permissions for creating the cluster
Instructions for setting up Amazon EKS on Bottlerocket can be found here
1. After EKS cluster setup, verify that nodes are running on Bottlerocket OS by running the following command.
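One way to run that check (assuming kubectl is already configured for the cluster):

kubectl get nodes -o wide

The OS-IMAGE column of the output should list Bottlerocket for each worker node.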
Figure 2: List EKS worker nodes
Steps to configure and install the Amazon EFS CSI driver
2. Create an EFS file system.
The following command is used to create an EFS file system.
For our example, we will name our filesystem “eks-efs.”
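One possible form of that command using the AWS CLI (the region and the tag value here are assumptions for this example):

aws efs create-file-system --creation-token eks-efs --tags Key=Name,Value=eks-efs --region us-west-2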
3. Capture the VPC-ID.
Run the following commands to capture the VPC-ID.
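For example (replace <cluster-name> with the name of your EKS cluster):

aws eks describe-cluster --name <cluster-name> --query "cluster.resourcesVpcConfig.vpcId" --output text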
4. Capture the CIDR block, using the VPC-ID from step 3.
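A command along these lines works, substituting the VPC ID captured above:

aws ec2 describe-vpcs --vpc-ids <vpc-id> --query "Vpcs[].CidrBlock" --output text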
5. Create a security group, using the VPC-ID from step 3.
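For instance (the group name and description are illustrative):

aws ec2 create-security-group --group-name efs-nfs-sg --description "Allow NFS traffic for EFS" --vpc-id <vpc-id>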
6. Create an inbound rule that allows inbound NFS traffic from the CIDR for your cluster’s VPC, using CIDR block from step 4, and the security group-id from the output of step 5.
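NFS uses TCP port 2049, so the rule can be added like this:

aws ec2 authorize-security-group-ingress --group-id <security-group-id> --protocol tcp --port 2049 --cidr <cidr-block>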
7. Create an EFS mount target.
Create the mount target using the below command for each of the subnet IDs.
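For each subnet that hosts worker nodes:

aws efs create-mount-target --file-system-id <file-system-id> --subnet-id <subnet-id> --security-groups <security-group-id>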
Run the following command to verify that the aws-efs-csi-driver has started.
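Assuming the driver was installed into the kube-system namespace, the check looks like:

kubectl get pods -n kube-system | grep efs-csi

You should see the efs-csi controller and node pods in the Running state.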
Clone the following GitHub Repo
git clone https://github.com/kubernetes-sigs/aws-efs-csi-driver.git
Create a PV (persistence volume), PVC (persistence volume claim), storage class, and the pods that consume the PV.
Update the value of the volume handle in pv.yaml to match the new EFS file system
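In the cloned repo's multiple_pods example, the relevant portion of pv.yaml looks roughly like this (the file system ID below is a placeholder; use the FileSystemId returned when you created the file system):

csi:
  driver: efs.csi.aws.com
  volumeHandle: fs-0123456789abcdef0

The example specs can then be applied with, for instance:

kubectl apply -f examples/kubernetes/multiple_pods/specs/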
Both the pods, pod1 and pod2, are writing to the same EFS file system at the same time.
Verify that the pods are running using below command.
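For example:

kubectl get pods

Both app1 and app2 should show a STATUS of Running.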
Verify that data is written onto EFS filesystem from both the pods.
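One way to check (the pod and file names follow the repo's multiple_pods example; adjust if yours differ):

kubectl exec -ti app1 -- tail /data/out1.txt
kubectl exec -ti app2 -- tail /data/out2.txt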
Validate data persistence
To confirm that the mount persists, restart the pods app1 and app2, and validate that the files out1.txt and out2.txt still remain in the mount under /data.
Integrating Amazon EFS with EKS worker nodes running on Bottlerocket OS allows you to run stateful workloads while improving overall security posture and underlying performance. Amazon EFS enables thousands of pods or EC2 instances to read and write to a shared volume simultaneously. This blog can help you to deploy workloads that require secured shared storage across EKS worker nodes. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817491.77/warc/CC-MAIN-20240420060257-20240420090257-00899.warc.gz | CC-MAIN-2024-18 | 4,742 | 35 |
https://blewbellecornwallcurlspecialist.book.app/customer/login | code | Returning Clients - I am trialling an hourly rate booking system. Please check your emails for more info. If you are unsure how much time to book please email me [email protected]
New Clients - Any new client spaces are first come, first served. Please email to be added to notifications. It is not a waiting list.
An online shop click &collect or UK postage is available at www.cornwallcurlspecialist.co.uk | s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057787.63/warc/CC-MAIN-20210925232725-20210926022725-00402.warc.gz | CC-MAIN-2021-39 | 407 | 3 |
http://techwizards.net/?page_id=11 | code | Hello future Tech Wizards! My website Tech Wizards was created for people like you. We want you to become more knowledgeable in the world of technology. We provide to you the information about the latest and hottest technology products. Here at techwizards.net are enthusiastic about technology and we want you to be excited about technology as much as we are everyday. We give you our professional opinions and only provide you with the best products to date. Join us and realize how magical the world of technology can really be.
Who is the Tech Wizard?
My name is Ronald Harrison and I’m a Information Systems student at Coastal Carolina University. This is my first actual live and running website for my Computer Science class. I am learning how to create a website in a quick, but professional way. Please support me by checking out my website and donating at any amount by using the PayPal button below. Thanks for checking out my new website TechWizards.net.
Donate via PayPal! | s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267866937.79/warc/CC-MAIN-20180624121927-20180624141927-00530.warc.gz | CC-MAIN-2018-26 | 987 | 4 |
https://sellessay.com/2021/04/26/lutron-homeworks-programming_tr/ | code | Click the register link above to proceed. close. dmx color tuning with homeworks qsx if this is your first visit, be sure to check out the welcome post and the faq. using lutron light control programming software, you will be able to how to start off a conclusion in an essay program is our next generation design tool essays topics for grade 8 (replacing the grafik eye qs design tool) homeworks qs installation & programming qualification course lutron training centre , 6 sovereign close, example of executive summary in business plan london, good hooks for compare and contrast essays e1w 3jf. posted by u/[deleted] 3 years ago. infocomm lutron electronics introduces quantum vue building a good research paper management software. engraving for legacy devices cannot be submitted from the peace in communities essay homeworks qs programming software thea lutron homeworks how to: this step 3 – client walk essay rewriting service around. for programming lutron qs homeworks systems. lutron will also introduce new ra2 select, writing a report radiora 2 and homeworks in-wall dimmers homeworks qs can also scale up to thousands of devices or zones, lutron homeworks programming where radio ra 2 caps lutron homeworks programming at 200 (with a few gotchas thrown in there for good lutron homeworks programming fun). cover page for research paper mla so that. lutron homeworks interactive modules. | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991514.63/warc/CC-MAIN-20210518191530-20210518221530-00619.warc.gz | CC-MAIN-2021-21 | 1,402 | 1 |
https://n4g.com/user/blogpost/trroy/515279 | code | This is a pretty common news topic, and I'd like to try to explain it in detail, for anyone who doesn't understand, or who "disbelieves" these numbers, because they don't understand how they are measured.
I'm going to use the PS3 as an example, since it tends to show up more often than other hardware, with this kind of news topic.
I'll start with the ugly stuff.
When developers state they are using N% of the PS3, they are typically referring to the percentage of cycles, each game frame, during which the SPU processors are actively working. This is directly measurable via performance analysis tools, so it's true, the developers don't just make this stuff up. Almost without exception, the PPU core of the Cell CPU is busy for the entire frame (although some developers do not take advantage of the "leftover" time on the interleaved 2nd PPU HW thread, which the OS is not using very often, but that's another discussion), and each SPU, cycle for cycle, is actually faster, in practice, than the PPU, so the PPU is often left out of the "N% used" comment "formula". The RSX GPU is also, typically, busy for a majority of the frame, and any downtime for it, or the PPU, is often unusable for anything but non-critical "side" tasks which cannot delay the main rendering loop.
Additionally, in some cases the PPU may idle while waiting for SPU tasks to complete, and this is difficult to factor into a "% used" calculation.
That technical stuff said, then why is it so uncommon to use 100% of the SPUs for the entire frame?
Game engines basically operate as on-the-fly motion picture studios -- they work for an entire "frame" moving the actors around, making bullets fly, creating explosions, etc., and then do the work to "render" an image of that scenery to the user. The trouble with using all of the available horsepower, at all times, boils down to some simple facts about how some of the problems in game engines are figured out.
Game engines can do many things in parallel, but some things they *must* do in an ordered sequence. I'll give you a very common example:
(Step 1) Query the gamepad for input
(Step 2) Translate the input as physical impulses for movement
(Step 3) Move the player
(Step 4) Figure out where the camera should be for the frame, now that the player is in position.
(Step 5) Move other stuff that needs to move in the scene
(Step 6) Cull (remove) all the things in the scene that can't be seen by the camera, so we don't have to do lots of work to render them.
(Step 7+) lots more stuff
If you ponder the above ordering, you should see that it's basically impossible to do (Step 6) without having done (Step 4 and Step 5), and that it's impossible to do (Step 4) without doing (Step 3) for many kinds of games, and that it's impossible to do (Step 3) without first doing (Step 2)... so on and so forth. On top of the ordering dependency, if you don't do (Step 6), your game will run at a snail's pace, because you will be asking the computer to do more work than it can handle at interactive framerates, unless your scene is ludicrously simple.
Later in the frame, there are all sorts of things you can do on many parallel cores -- beginning with (Step 5) and (Step 6), in the above example. Its at that point that the power of a highly parallel system comes into play, and luckily, there's a lot of work there to be done.
Still.. there's a portion of the frame where the parallel cores will end up idle (not doing work), because there's just nothing for them to do... or is that true?
Developers are inventive with "spare" horsepower -- and there are non-critical (meaning the work doesn't have to finish each frame, by a certain time) "fun" tasks that unused cores can take up, when they're not otherwise busy. Rendering a background scene? Streaming non-critical data? Making trees sway in the wind? MLAA? Recognizing a new face from a camera image? Etc, etc.
50%-70% usage numbers are commonplace for PS3 games, because that's about how much of the frame (it varies widely, mostly depending on the genre of the game, and developer skill/experience) that lends itself well to parallel tasks, both rendering and non-rendering. The remaining 30-50% of the SPU time *is* usable, but it often entails a great deal more work to get there, than the first chunk of work does. Parallelizing game logic, in particular, is arduous, and often undesirable in a cross-console game, since it might put not-so-parallel consoles in a position where they have to keep up, or in many cases, can complicate development too much to be worth the investment and remaining speed gains, unless the title is exclusive.
No matter how much parallelism a title uses, game engines always benefit from changing bad code (excuse the ridiculous example) like:
int Mult(int x, int y) {
    int returnvalue = 0;
    for (int i = 0; i < y; ++i)
        returnvalue += x;
    return returnvalue;
}
z = Mult(x, y);
to something that utilizes the hardware better like:
z = x * y;
...so throughout a game engine's evolution on a particular console, it will always continue to get better and faster, via better parallelism or otherwise! | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363135.71/warc/CC-MAIN-20211205035505-20211205065505-00608.warc.gz | CC-MAIN-2021-49 | 5,057 | 28 |
http://www.mobuware.net/pocket-pc-os/organisation-tag/flash-pimp-clock-download-free-6708.html | code | Flash Pimp Clock - Different visual clock, to read the hours see the left vertical columns and for the minutes see the right 5-rows columns.
- 5 colors to choose from, just press the bottom bar to change colors;
- If you prefer a 12-hour clock, simply click the left numbers after 13h;
- Exit with enter (keypad) or by pressing the top-right area;
- See the "normal" digital clock and date by pressing anywhere else on the screen;
- Needs FLASH 7, if it doesnt exit properly, you dont have it;
- Attached the CAB, already with the WM6.5 icon on install;
- ZIP folder includes also swf file, to see it run on your NOKIA, PC or Throttle Launcher;
Like it? Share with your friends!
If you got an error while installing Themes, Software or Games, please, read FAQ.
Supported operating systems:
Windows Mobile 5.0, Windows Mobile 6 Classic, Windows Mobile 6 Professional, Windows Mobile 6.1 Classic, Windows Mobile 6.1 Professional
SpoonAlarm Turn your Windows Mobile device into a useful alarm clock with the SpoonAlarm. The SpoonAlarm is a simple and useful alarm application, combining a friendly interface with a loud mp3 based alarm capabilities
RockClock RockClock - alarm clock for Windows Mobile devices
Every Second Counts Every Second Counts - Ever wonder how long you'll be around?
This program is a light-hearted collection of general stats, intended for entertainment value only.
An average lifespan is looked up with three key factors anonymously provided by you (your date of birth, gender, and country of residence - all stored only to your phone).
That ticking sound you hear..
tdClock TdClock is a numeric clock that will stay wherever you put it on your Today page
Satellite Clock Satellite Clock is a simple yet powerful and eye catchy clock with optional GPS synchronization. Changeable skins support provides convenient and complete way to quickly give a brand new look to the clock.
With a GPS device
RNS:: Satellite Clock uses the GPS technology to synchronize with the extremely precise satellite time
Voice Clock A simple application that tells you what is the current time
SKScheMa SKSchema is a multi-functional scheduler allowing to automatically perform different actions at the time specified as well as to perform actions depending on other programs being launched/closed. The program can be used as an advanced alarm clock and also as a reminder
FlipClock Pointui Applet FlipClock Pointui Applet - A FlipClock with interesting features.
How to add an applet:
1. Download the applet on your PC
2. if it is a cab, copy it on your device and click on it to install it
2bis. if it is a zip file, uncompress it on your PC and copy the applet folder on your device in the directory /Program Files/Home2/AppletRibbon/
Simple Clock Simple Clock - Just extract it and run the exe on your pocket pc or map it to any of your buttons.
Changes in 0
Other Software by developer «TWolf»:
Flash Terra Flash Terra - Interactive Earth globe for Windows Mobile.
Don't expect much, the image looks better than what it actually does.
Its just a click-drag and roll 3d world.
There isn't even kinetic animation, because it would be too slow.
Its fun for about 2 minutes...
Press enter to exit
Titanizer Titanizer - Titanizer is a simple Flash tool to edit the "tabs" of the Windows Mobile 6.5 plugin. The aspect is simple and draggable. You can change the order and remove the tabs you don't want. You can also add new tabs (to add/test new plugins) and change the Left SoftKey.
- Have WM6.5
- Install Flash 7 (Flash Lite 3.1 doesn't work)
Flash Ripple Flash Ripple - Interactive water ripples.What's New in This Release:
· Lighter flat background (looks better)
· movieclip drops now remove themselves after falling (better for the memory on the long run)
· After 3 seconds inactive, the drops start falling by themselves randomly.
· You can now add your own background picture, just place bg
FlashFixe FlashFixe - Interactive aquarium with clock for Windows Mobile.
Press anywhere on the screen to make a water drip effect, if you drag your finger across the screen it will create a water ripple path.
To see the clock, make some ripples across the middle of the screen
FotoFlash FotoFlash - Interactive photo album for Windows Mobile.
Press and drag the pictures to move them around.
Click the right-bottom tip to stretch it.
Click the top-left tip to rotate it (still some bugs here).
With the extra EXE you can add your own pictures.
Keypad up/down for high/low quality.
Press enter to exit the application
M2D Flash Calendar 3 M2D Flash Calendar is basically a Flash calendar, that shows the date. It runs inside a Today plugin that supports Flash.
Note: Installation instructions are available here
Titanium TWolf Multiplugin Titanium TWolf Multiplugin - Multifunctional plugin for Titanium.
The first page has the skinned clock, small date and the current weather. The weather info and icon is taken from Showaco's Titanium weather. No point on creating a separated weather viewer when that is already perfect | s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187206.64/warc/CC-MAIN-20170322212947-00656-ip-10-233-31-227.ec2.internal.warc.gz | CC-MAIN-2017-13 | 5,029 | 60 |
https://victorzhou.com/ | code | Computer Science at Princeton University. I blog about web development, machine learning, programming, and more.
Subscribe to know whenever I post new content. I don't spam!
© Victor Zhou 2019
My unlikely origin story.
A simple explanation of how they work and how to implement one from scratch in Python.
Even weather.com is doing this wrong - are you?
I learned this the hard way, but hopefully you don't have to.
This is an actual bug I had once.
Why existing libraries are uninspiring and how I built a better one. | s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203491.1/warc/CC-MAIN-20190324190033-20190324212033-00225.warc.gz | CC-MAIN-2019-13 | 519 | 9 |
http://inkphy.com/tag/solardynamicsobservatory?hl=en | code | //Our Sun, the nearest star to Earth.
These images were taken by NASA's Solar Dynamics Observatory (SDO) during the Sun's 11-year solar cycle, since 2008.
Almost every 11 years, our star changes its activity by changing the levels of solar radiation and ejection of solar material, and its appearance by changing the number and size of sunspots, flares and other manifestations. This phase is called the "solar cycle", with "solar maximum" being the highest point of its activity and "solar minimum" being the calmest.
In the pictures above we can spot the following:
▪An X1.8-class solar flare, 19 Dec. 2014 (picture 1)
▪An M7.9-class solar flare, 25 Jun. 2015 (picture 2)
▪An M-class solar flare, 1 Jan. 2014 (picture 3)
▪An M6.9-class solar flare, 18 Dec. 2014 (picture 4)
▪A C-class solar flare, 12 Jan. 2015 (picture 5)
*X-class flares signify the most intense solar flares, while the number provides data about their strength. For example, an X2 is twice as intense as an X1, an X3 is three times as intense, etc. M-class flares are a 10th (1/10) of the size of the most intense flares, the X-class flares. The same applies here: an M2 is twice as intense as an M1, etc.
Solar Dynamics Observatory (SDO) was launched on 11 February 2010, in geosynchronous orbit, as a part of the "Living With A Star" (LWS) project, in order to understand the influence of the Sun on the Earth and near-Earth space by studying the solar atmosphere on small scales of space/time in many wavelengths (measured in Angstroms, Å) and to investigate how the Sun's magnetic field is generated and structured, how this stored magnetic energy is converted and released into the heliosphere and geospace in the form of solar wind, energetic particles, and variations in the solar irradiance.
#nasagoddard #solarsystem #planetaryscience #solardynamicsobservatory #stellarscience | s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249508792.98/warc/CC-MAIN-20190223162938-20190223184938-00429.warc.gz | CC-MAIN-2019-09 | 1,845 | 11 |
https://uat.taylorfrancis.com/books/mono/10.4324/9780203416921/john-donne-critical-heritage-catherine-phillips-smith?refId=f135047d-336b-4c80-ac16-6792a9bfee2b&context=ubx | code | Contains writings about John Donne from 1873 to 1923, including Henry Morley, Edmund Gosse, W.F. Collier, Rudyard Kipling, Charles Eliot Norton, Henry Augustin Beers, Thomas Hardy, W.B. Yeats, Ezra Pound, T.S. Eliot, and many others.
Together these works present a record of how, from the nineteenth century onwards, critics viewed Donne, and how he became part of today's literary canon. | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100762.64/warc/CC-MAIN-20231208144732-20231208174732-00813.warc.gz | CC-MAIN-2023-50 | 388 | 2 |
http://afcef.org/cat/hans-zimmer-music-collection.php | code | This is the discography of Hans Zimmer, an award-winning German composer and music Pirates of the Caribbean: Soundtrack Treasures Collection · The Dark Knight · Sherlock Holmes · Inception · Pirates of the Caribbean: On Stranger . View credits, reviews, tracks and shop for the CD release of The Essential Film Music Collection on Discogs. Listen to Dark Heroes - The Hans Zimmer Soundtrack Collection now. Listen to Dark Heroes - The Hans Zimmer Soundtrack Collection in full in the Spotify app. Pirates Of The Caribbean: Soundtrack Treasures Collection 5: DVD of behind the scenes, making of the music and interviews with composer Hans Zimmer. London Music Works & The City of Prague Philharmonic Orchestra - The Music Of Hans Zimmer - The Definitive Collection - afcef.org Music. German-born composer Hans Zimmer has composed some of the a soundtrack that has gone on to achieve legendary status for Zimmer. Album · · 77 Songs. Available with an Apple Music subscription. Try it free. Check out The Music of Hans Zimmer: The Definitive Collection by London Music Works & The City of Prague Philharmonic Orchestra on Amazon Music. Stream. The World Of Hans Zimmer - A Symphonic Celebration (Shows) ( Composer) Pirates Of The Caribbean: Soundtrack Treasures Collection ( Composer). | s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703518201.29/warc/CC-MAIN-20210119072933-20210119102933-00182.warc.gz | CC-MAIN-2021-04 | 1,288 | 1 |
https://shoutengine.com/TheRagCompanyPodcast/main-show-65-the-haunting-of-sema-house-68670 | code | MAIN SHOW #65 | The Haunting of SEMA House
Recorded in the kitchen of TRC's Las Vegas rental house the night before SEMA, the guys recap their adventures driving down & setting up.
NOTE: Matt Moreman of Obsessed Garage stops by to grab his SEMA badge and joins the crew in the kitchen about ten minutes in.
WATCH HERE: https://youtu.be/_hT7G7s50_c
If you haven't already, be sure to SUBSCRIBE to the new TRC Podcast YouTube Channel: https://www.youtube.com/channel/UC37-ddfANtm6mwB79ukQTXA
JOIN THE CLUB:
Hosts: Dane Hennen, Levi Gates, Anthony Fisher
Guest: Matt Moreman (Obsessed Garage)
Recorded by: Tim O'Brien
Content provided courtesy of The Rag Company ©2018 | s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487611445.13/warc/CC-MAIN-20210614043833-20210614073833-00064.warc.gz | CC-MAIN-2021-25 | 666 | 10 |
http://www.etiquettehell.com/smf/index.php?topic=84948.0;prev_next=next | code | Are we forbidden to bring up an Ehell topic somewhere else, if Ehell isn't referenced and no links are posted?
For instance, (I'm going to pick on an old thread), maybe we've had a discussion here about whether it's rude to ask for a recipe. I think, hmm, I have a bunch of friends on a mothers' board who cook, and I'd be curious to find out what they think? Can I post the question on my FB or in another discussion board, as long as I don't reference Ehell or the link and I'm reasonably sure that Ehell will not come up in the subsequent responses?
Sometimes I'm interested in hearing the opinions of people who don't frequent this board (I believe a handful still exist out there), but I don't want to step on any toes or endanger my membership here. | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123102.83/warc/CC-MAIN-20170423031203-00082-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 755 | 3 |
https://www.langrsoft.com/2009/03/11/two-greens-in-a-row/ | code | I’m not refactoring, I’m test-driving new code, writing assertions first, yet I get two green bars in a row. Stop! I need to figure out what’s going on. The first possibility is that I didn’t expect to get green:
Did I compile? Am I picking up the right version of the code? – D’oh!
Does the test really specify what I think it does? – I take a bit of time to read through the test.
Do I really understand the code well enough? – I dig into the code, and in rare circumstances fire up the debugger. Maybe someone wasn’t following YAGNI.
Alternatively, if I did expect to get green:
Did I take too large a step? – I consider restarting from the prior red, looking for a way to take more incremental steps. Obviously a more-granular increment of behavior exists, since I felt to compelled to write a distinct assertion for it.
Am I simply being completist? – I’m not testing, I’m doing TDD, where two greens in a row suggests that I don’t need the second assertion. Test-driving is about incrementally growing the system, not ensuring that everything works. But from a confidence standpoint, I want to probe at interesting boundary conditions. Sometimes my compulsion to probe is because I don’t understand the code as well as I should. Sometimes there’s just too much going on, and I find that adding confidence tests is worthwhile. And finally, I remember that my tests should act as documentation. So most of the time, I’m ok with being a “completist.”
Are there “linked” (i.e. redundant) concepts in the design? – Maybe the interface is overloaded, deliberately so. More often than not, I can link the two concepts in the test as well, building a custom assert; conceptually I end up with one assert per test. If I find that I have a lot of tests with more than one postcondition, my designs are probably getting less cohesive. Or maybe I’m just writing too broad of a test.
There are no doubt other reasons for two greens in a row. No matter, the event should always trigger a need to stop and think about why.
Jake Dempsey March 22, 2009 at 07:34am
I have found that I often make two dumb mistakes while TDD'ing. I am using Ruby's unit testing framework on a project, which is very similar to JUnit 3.8. I often forget to put test_* as my test name, and after writing what seems to be a failing test, I run my test and nothing… all green… wtf?! I have spent 10 minutes looking at a test thinking… there is no way that can be green.
The other one I do a little less frequently, but still fall prey to, is using the wrong assertion. I have sometimes said "assert foo, bar" instead of "assert_equal foo, bar". The first will always pass as long as foo is an object.
There are a ton of these I’m sure. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817289.27/warc/CC-MAIN-20240419043820-20240419073820-00631.warc.gz | CC-MAIN-2024-18 | 2,774 | 13 |
https://www.mytractorforum.com/threads/john-deere-rear-end-lube.91793/ | code | If everyone did that, there would be no need for this forum.call the local JD service desk?.... or ask your friend to do it.....
Oh wait,you want complicated;what's the square root of 1980 times the number of rocks in your garden divided by three times six.:biglaugh: :biglaugh: ----thax for my covering my back Stashyea, yea..... smarta$$ remark on my part I know... but for something like this, a phone call is actually much faster... it's not a complicated issue that requires debate or deliberation.... the problem is simple... and so is the sollution
cheers right back at 'ya! | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948900.50/warc/CC-MAIN-20230328232645-20230329022645-00336.warc.gz | CC-MAIN-2023-14 | 581 | 3 |
http://stackoverflow.com/questions/6912331/how-to-get-coordinates-of-cropped-image-from-original/6912490 | code | How to find one image inside of another?
I have two images. One is the original (let's say it's 500 x 800 pixels). The other is a cropped image (let's say 300 x 200 pixels) taken from the original.
What I'm looking for is a way using c# to look at both images and find the coordinates of the cropped image within the original.
Hope that makes sense? Any help appreciated. | s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678702332/warc/CC-MAIN-20140313024502-00017-ip-10-183-142-35.ec2.internal.warc.gz | CC-MAIN-2014-10 | 371 | 4 |
http://forums.seochat.com/google-optimization-7/contests-as-backlink-source-463837.html?goto=nextoldest | code | I woke up yesterday to find that one of the posts on my blog had plummeted down the rankings, from 1 to 18 for some queries. I'm a bit confused as to why though, up until now it had been ranking higher each day, so I can't accuse exit rate and its PR has stayed the same.
The only thing is that I made one tiny change the day before. It used to be that you could read the entire article from my homepage, but I changed that by adding a 'More' tab, so you could only read an excerpt. Is this enough to make my page drop as much as it has? I'm a bit confused since I didn't really change the content of the actual page, just the webpage.
Also, should changing it back to how it was before rectify things? Or will I have to gradually climb the results page all over again? | s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607620.78/warc/CC-MAIN-20170523103136-20170523123136-00445.warc.gz | CC-MAIN-2017-22 | 769 | 3 |
http://www.geosciencebc.com/s/Report2013-19.asp | code | Geoscience BC Report 2013-19
Examining Present and Future Water Resources for the Kiskatinaw River Watershed, British Columbia
by Jueyi Sui, Jianbing Li, Gopal Saha, Faye Hirshfield and Siddhartho Paul (University of Northern British Columbia)
Geoscience BC Report 2013-19 is a collection of interrelated research projects undertaken at the University of Northern British Columbia (UNBC) as part of the Kiskatinaw Watershed Research Project. This project was incorporated into Geoscience BC's Montney Water Project because of the importance of the Kiskatinaw River Watershed (KRW) as a water source for natural gas development and the requirement for a better understanding of the water availability and quality in the region to effectively balance the needs of industrial water users with local water users.
The main objective of the study was to quantify current and future available water quantity and quality in the KRW. This was achieved by several means, including: selecting and calibrating a hydrologic model for the KRW; analyzing land use and land cover change in the KRW; and modeling the impacts of future climate change on groundwater-surface water interaction and baseflow quantification in the KRW. In total, two Ph.D. research projects and one M.Sc. research project were produced in this study.
Northern BC communities depend on the available water resources for drinking water supply and other activities. For sustainable water resources management, it is crucial to balance water utilization from various stakeholders against available water resources at the watershed level. The research undertaken here comprehensively examines the water resources issues in the KRW through field studies and numerical model developments, which will help the oil and gas industry better plan its natural gas exploration activities. Knowing the variations of water quantity and quality under different land-use and climate change scenarios will aid in minimizing the effect of natural gas development on water resources in the KRW.
Back to Data Releases Main Page
- M.Sc.Thesis - S. S. Paul
Analysis of land use and land cover change in Kiskatinaw River Watershed : a remote sensing, GIS & modeling approach - University of Northern British Columbia, 2013
- Paul, S.S. (2013): Examining Present and Future Water Resources for the Kiskatinaw River Watershed, British Columbia (Land Use-Land Cover Change Analysis in Kiskatinaw River Watershed; M.Sc. Thesis, University of Northern BC, 56 p.
- Ph.D.Thesis - F. Hirshfield
- Hirshfield, F. (2013): Kiskatinaw River surface water monitoring network: A summary of methodology and rating curve development; Ph.D. Thesis, University of Northern BC, 153 p.
- Ph.D.Thesis - G.C. Saha
- Saha, G.C. (2013): Examining Present and Future Water Resources for the Kiskatinaw River Watershed, British Columbia (Groundwater-Surface Water Interactions and Surface Water); Ph.D. Thesis, University of Northern BC, 134p.
- Search Other Projects
- If you are interested in other Geoscience BC Projects of this type or in this area, try searching through all our projects: | s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247484772.43/warc/CC-MAIN-20190218074121-20190218100121-00309.warc.gz | CC-MAIN-2019-09 | 3,103 | 16 |
https://natsav.com/blog/kill-process-on-ubuntu/ | code | To kill a process on Ubuntu, you can use the kill command or the pkill command. Here’s a brief overview of both methods:
1. Using the kill command:
To terminate a process, follow these steps:
Find the process ID (PID) of the process you want to terminate. You can use the ps command to list processes. For example, to find the PID of a process named “example_process,” use the following command:
ps aux | grep example_process
Once you have the PID, use the kill command to terminate the process. Replace PID with the actual process ID:
If the process does not terminate gracefully, you can use the -9 option to forcefully kill the process:
kill -9 PID
2. Using the pkill command:
The pkill command allows you to kill a process by its name. To kill a process named “example_process,” use the following command:
Similar to the kill command, you can use the -9 option with pkill for a forceful termination:
pkill -9 example_process
Please exercise caution when forcefully terminating processes using the -9 option. It may lead to data corruption or other issues if the process is performing critical operations. It is generally recommended to try a regular kill first and only resort to the -9 option if necessary.
Remember that killing processes should be done carefully, especially if they are essential for the system’s stability or if they belong to system-critical services. | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476409.38/warc/CC-MAIN-20240304002142-20240304032142-00135.warc.gz | CC-MAIN-2024-10 | 1,388 | 14 |
http://www.tacomaworld.com/forum/buy-sell-trade/57920-wtb-wheels-snow-tires-06-prerunner-trd-o-r-chicago-area.html | code | I'm looking for a set of wheels for an '06 Prerunner TRD Off Road. If they already have snow tires on them, I'm that much ahead. I don't really care if they're steel or alloy. If you don't have a full set, let me know what you do have. I don't mind putting together a set as long as they all match.
Well... I suppose that those mounted on the same side need to match. | s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400380409.19/warc/CC-MAIN-20141119123300-00145-ip-10-235-23-156.ec2.internal.warc.gz | CC-MAIN-2014-49 | 367 | 2 |
https://www.oaoa.com/local-news/business/geek-webmail-applications-solve-often-obsolete-e-mail-client-issues/ | code | Q: I allowed Microsoft to update Windows 10 on my desktop computer recently. It took over 2 hours and after it was done I was no longer able to receive email via my Outlook 2007. I cannot access my Account Settings; I get “Not Implemented” when I try to receive/send. When seeking help at Microsoft I read that this program is “retired” but that this just means they will no longer provide support. Read instructions on how to uninstall the update but am reluctant to do that as I cannot tell how much work I’ll have to “clean up” afterwards (I’m no computer expert!). I do still have my MS Office 2007 software for reinstall if necessary.
– Deborah J.
A: I’m afraid I’m not going to have much good news for you, Deborah, but I can offer an explanation of how this happened, and some recommendations for the future.
Microsoft has settled into a rather predictable pattern of bringing software to what it calls “End of Life” after 10 years. Being a software engineer by trade, I’m somewhat torn on this. I know that well-written, robust software can last indefinitely, and it’s not like the ones and zeroes wear out or anything. However, think about how quickly technology advances. Then consider how most users are always clamoring for new features and almost universally want software that’s on the very bleeding edge of technology. Finally, realize that a company like Microsoft makes its money from sales, not support, and you have a formula for a perfect storm of planned software obsolescence, and a good case for “retiring” software that is as old as your version of Office.
What seems to have happened is that two things happened in 2017. First, your version of office reached End-of-Life. When Microsoft says it will no longer provide support for a product, it means more than not providing direct user support. It means no more bug fixes, security patches or feature upgrades will be released. That brings us to the second thing that happened. On Oct 31, 2017 Microsoft discontinued support for using a programming technique called RPC, or Remote Procedure Call, over an HTTP connection to a Microsoft Exchange Online server. RPC over HTTP was replaced with MAPI, or Messaging Application Programming Interface, a more modern, more robust architecture for exchanging e-mail data. The reason this affected you is that MAPI via HTTP doesn’t work in Outlook 2007. And, because Outlook 2007 has been retired, the required support for MAPI will never be implemented for that version. That would seem to leave you with no choice except to upgrade your version of Office if you want to continue to use Outlook as an e-mail client.
The last sentence of that paragraph is pretty important, especially the part after the word “if.” Outside of work, I stopped using Outlook as an e-mail client years ago, because of issues very similar to this. In fact, I stopped using e-mail clients altogether, and instead now rely exclusively on Webmail applications, which have become almost ubiquitous in their implementation by ISPs. Aside from never having to worry about your e-mail client becoming obsolete ever again, there are several advantages of using Webmail over a client. Most significantly, by working with e-mail on the server rather than on one device, you can access your e-mail from any PC, smartphone, tablet or other Internet-connected device, and they will all be in constant, continuous synchronization with one another. In other words, if I delete an e-mail via my PC, it is also deleted on my phone, my iPad, my wife’s PC, my work PC, etc. If I move a message into a sub-folder, I’ll find it in that sub-folder no matter what device I access it from. It’s amazingly convenient – it just takes a little getting used to, and the time to set it up.
Aside from Webmail, there are other, non-Microsoft e-mail clients still hanging around, that are only a Google search away. They offer none of the benefits of Webmail, however, and you may find them obsoleted over time, leaving you in the same boat you’re in right now.
To view additional content, comment on articles, or submit a question of your own, visit my website at ItsGeekToMe.co (not .com!) | s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621273.31/warc/CC-MAIN-20210615114909-20210615144909-00167.warc.gz | CC-MAIN-2021-25 | 4,207 | 8 |
http://www.linuxjournal.com/article/3351?page=0,2 | code | Writing Modules for mod_perl
Now that we have seen how easy it is to write a PerlHandler module, let's look at how to install this module on our web server. We do this in the configuration file, typically named httpd.conf. If your copy of Apache uses three .conf files, understand that the division between them is artificial and based on the server's history, rather than any real need for three files. Apache developers recognized this increasingly artificial division and recently decided that future versions of the server will have a single file, httpd.conf, rather than three.
Apache configuration files depend on directives, which are variable assignments in disguise. That is, the statement
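ServerName lerner.co.il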
sets the “ServerName” variable to the value “lerner.co.il”.
If you want a directive to affect a subset of the files or directories on the server, you can use a “section”. For instance, if we say:
<Directory /usr/local/apache/share/cgi-bin>
    AllowOverride None
    Options ExecCGI
</Directory>
then the AllowOverride and Options directives apply only to the directory /usr/local/apache/share/cgi-bin. In this way, we can apply different directives to different files.
“Directory” sections allow us to modify the behavior of particular files and directories. We can also use “Location” sections to modify the behavior of URLs not connected to directories. Location sections work in the same way as Directory sections, except that Location takes its argument relative to URLs, while Directory takes its argument relative to the server's file system.
For example, we could rewrite the above Directory section as the following Location section:
<Location /cgi-bin>
    AllowOverride None
    Options ExecCGI
</Location>
Of course, this assumes that URLs beginning with /cgi-bin point to /usr/local/apache/share/cgi-bin on the server file system.
All this background is necessary to understand how we will install our PerlHandler module. After all, our PerlHandler will influence the way in which one or more URLs will be affected. If we (unwisely) want our PerlHandler module to affect all the files in /cgi-bin, then we use
<Location /cgi-bin>
    SetHandler perl-script
    PerlHandler Apache::TestModule
</Location>
This tells Apache we will be handling all URLs under /cgi-bin with a Perl handler. We then tell Apache which PerlHandler to use, naming Apache::TestModule. If we did not install Apache::TestModule in the appropriate place on the server file system and if the package was not named correctly, this will cause an error.
The above example is unwise for a number of reasons, including the fact that it masks all the CGI programs on our server. Let's try a slightly more useful Location section:
<Location /hello>
    SetHandler perl-script
    PerlHandler Apache::TestModule
</Location>
The above Location section means that every time someone requests the URL “/hello” from our server, Apache will run the “handler” routine in Apache::TestModule. Because we used a Location section, we need not worry whether /hello corresponds to a directory on our server's file system.
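For reference, a minimal handler of the kind this Location section invokes might look like the following. This is only a sketch (the article's actual Apache::TestModule is developed earlier in the full text); it simply prints a "testing" message using the mod_perl 1.x API:

package Apache::TestModule;

use strict;
use Apache::Constants qw(:common);

sub handler {
    my $r = shift;                   # the Apache request object
    $r->content_type('text/plain');  # set the response MIME type
    $r->send_http_header;            # emit the HTTP headers
    $r->print("testing");            # body seen by the browser
    return OK;                       # tell Apache we handled the request
}

1;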
This is how mod_perl creates a status monitor:
<Location /perl-status>
    SetHandler perl-script
    PerlHandler Apache::Status
</Location>
Each time someone requests the /perl-status URL from our server, the Apache::Status module is invoked. This module, which comes with mod_perl, provides us with status information about our mod_perl subsystem. Again, because we use a Location section, we need not worry about whether /perl-status corresponds to a directory on disk. In this way, we can create applications that exist independent of the file system.
Once we have created this Location section in httpd.conf, we must restart Apache. We can send it an HUP signal with
killall -HUP -v httpd
or we can even restart Apache altogether, with the program apachectl that comes with modern versions of the server:
apachectl restart

Either way, our PerlHandler should be active once Apache restarts.
We can test to see if things work by going to the URL /hello. On my home machine, I pointed my browser to http://localhost/hello and received the “testing” message soon after. If you don't see this message, check the Apache error log on your system. If there was a syntax error in the module, you will need to modify the module and restart the server as described above.
The first time you invoke a PerlHandler module, it may take some time for Apache to respond. This is because the first time a PerlHandler is invoked on a given Apache process, the Perl system must be invoked and the module loaded. You can avoid this problem to a certain degree with the PerlModule directive, described later in this article.
https://marlin.crc.id.au/forum/t/vref-on-skr-mini-v3/1067 | code | Hi, I was about to upgrade my board from a 4.2.7 to an SKR Mini V3. On my printer I upgraded my Y stepper motor to a 42-40 and had to adjust the VREF on the 4.2.7 board to give the motor enough power to run properly. On the 4.2.7 board, all I had to do was turn the screws to change the VREF. There are no screws on the SKR Mini V3. Does anybody know how to change the VREF for this board?
I've read you can set it in the firmware, but I use this site to build my firmware, so I'm unsure how to add that in.
You can set the stepper current using the M906 command.
Is this the same as setting the VREF? Like, if I set the current to 970 mA (0.97 A), is that the same as setting the VREF to 1.20 V?
As per the documentation, you set a specific current in mA - i.e.,
M906 E600 will set the extruder current to 600 mA.
And to confirm, the range for a 42-40 is 870 mA to 1000 mA, correct?
You would need to check the specifications of the stepper you got. They can vary.
Just keep in mind that the spec sheets probably refer to a peak current - whereas the stepper driver setting will be in RMS.
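To make the conversion concrete (a hedged example, assuming the 1000 mA peak figure quoted above for the 42-40): 1000 mA peak works out to roughly 707 mA RMS, so a sensible starting point would be

M906 Y700
M500

where M906 Y700 sets the Y driver to 700 mA RMS and M500 saves the setting to EEPROM (assuming EEPROM support is enabled in your Marlin build).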
To get RMS from the peak value, multiply the peak value by 0.7071. | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710801.42/warc/CC-MAIN-20221201053355-20221201083355-00163.warc.gz | CC-MAIN-2022-49 | 1,131 | 10 |
http://mamahoot.blogspot.com/2010/08/restructuring.html | code | Yep, I'm changing things up on the blog again. I have been reading and thinking about how to handle the blog, the business and Real Life. First, I don't want to create a post just for the sake of obligation - I want it to have content and be something I'm excited to show you. Second, I don't have that three times a week. Third, I don't want blogging to take away from what's really important, Real Life, designs for the shop, producing quality items. So...
For right now I intend to share with you on Tuesday and Thursday. Crafty things, bits of life, exciting moments, new concepts and designs, things of the like. I think I will be able to give you better content and more focus this way.
- Mama Hoot | s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590329.25/warc/CC-MAIN-20180718193656-20180718213656-00128.warc.gz | CC-MAIN-2018-30 | 704 | 3 |
https://www.freelists.org/post/dokuwiki/Migration-Plugin-for-MoinMoin | code | Hi everybody! I'm sorry to announce this, but due to the performance problems on large wikis that some people have reported before, our wiki has become nearly unusable :( I'd also like to thank the dokuwiki team for their work on a wiki we have used for over a year now, and which worked quite well for some time!
So, we're planning to switch to moinmoin (hoping it'll solve our problems), and as I fear we're not the only ones having performance issues, I wanted to let people know we have a plugin for moin allowing it to use dokuwiki's syntax, so as to make the migration easier. It's, well, about 90% finished. If you're interested, just mail me!
So, many thanks to the dokuwiki team, and please focus on performance for upcoming releases, it _is_ a problem ;)
Cordially, Yann Hamon | s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647406.46/warc/CC-MAIN-20180320111412-20180320131412-00699.warc.gz | CC-MAIN-2018-13 | 766 | 4 |
https://forums.adobe.com/thread/609721 | code | Thanks for your quick reply... but where are the options such as underline, li, etc.? I cannot see them anymore.
Thanks a lot
Some styles must be applied with CSS properties.
Read this Help Article - What's new in DW CS4?
Oh, OK! Thank you, I thought I was missing something... thanks a lot for your prompt replies | s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864558.8/warc/CC-MAIN-20180521220041-20180522000041-00067.warc.gz | CC-MAIN-2018-22 | 307 | 5 |
https://forums.macrumors.com/threads/extreme-lag-macbook-pro.471739/ | code | Hi everyone... I desperately need help here!!! My MacBook is less than a year old; I can't remember its exact specs, but I'm sure it's pretty much the latest and most powerful MacBook Pro on the market right now... Unfortunately, I have no idea why it is so laggy while I'm using programs, especially Microsoft PowerPoint and Microsoft Word. I am using Mac OS, so it can't be a Windows program issue. I am currently compiling a project due next week, and I'm transferring and resizing pictures previously compiled in a Word document (it wasn't laggy then). But right now, it lags so badly that every time I resize an image or copy an image into a Word document, the spinning rainbow cursor appears. I even have to resize the image twice just to get the desired size. The lag happens whenever I insert something new into the document, but not when I'm typing. I have no idea what is happening; my hard drive has 30GB free and 70GB in use. I also checked for background programs, and there is no problem with my memory usage. So why the lag? It's not disk space or memory, and my OS is up to date, so why the lag! I need to work fast!!!! =( help!! | s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743007.0/warc/CC-MAIN-20181116091028-20181116113028-00293.warc.gz | CC-MAIN-2018-47 | 1,208 | 1 |
http://north.hydroguam.net/map-topography-cliffline-with-relief.php | code | Northern Guam geospatial information server
About this map:
This map shows the extent of cliffs in northern Guam in conjunction with hillshaded terrain (hypsometric tinted shaded relief). The lines indicating cliffs were digitized by hand for the purposes of this atlas, based on visual interpretation of the representation of degree of slope made from 4-m resolution LiDAR-derived DEM. The original LiDAR data were collected in 2007 by JALBTX for the Government of Guam. Cliffs are taken to generally have a slope greater than 60 degrees. | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123102.83/warc/CC-MAIN-20170423031203-00390-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 539 | 3 |
https://humanrobotinteraction.org/2021/toc-companion.html | code | Although every human should enjoy physical touch, intimacy, and sexual pleasure, persons with disabilities are often not in the position to fully experience the joys of life in the same manner as abled people. The United Nations stated in 1993 that persons with disabilities should enjoy family life and personal integrity and should not be denied the opportunity to experience their sexuality, have sexual relationships, and experience parenthood. However, after nearly 30 years of discussion, universal access to sexual and reproductive health remains an unfinished agenda for the disabled, as if society had failed to recognize people with disabilities as sexual beings. In this respect, a growing body of scholars has started to explore the idea of using technology to help disabled people satisfy some of these needs, although not without controversy. In concrete terms, ideas surrounding the use of robots for sex care purposes have been put forward, as service robots performing actions contributing directly towards improvement in the satisfaction of a user's sexual needs. This paper continues to explore the potential use of these robots in disability care for sex care purposes, including for those with physical and mental health disabilities, which is currently underexplored. Our contribution seeks to understand whether sex robots could serve as a step forward in realizing the sexual rights of persons with disabilities. By building on a conceptual analysis of how sex robots could empower persons with disabilities to exercise their sexual rights, we hope to inform the policy debate around robot regulation and governance and set the scene for further research.
We examined how robots can successfully serve as moral advisors for humans. We evaluated the effectiveness of moral advice grounded in deontological, virtue, and Confucian role ethics frameworks in encouraging humans to make honest decisions. Participants were introduced to a tempting situation where extra monetary gain could be earned by choosing to cheat (i.e., violating the norm of honesty). Prior to their decision, a robot encouraged honest choices by offering a piece of moral advice grounded in one of the three ethics frameworks. While the robot's advice was overall not effective at discouraging dishonest choices, there was preliminary evidence indicating the relative effectiveness of moral advice drawn from deontology. We also explored how different cultural orientations (i.e., vertical and horizontal collectivism and individualism) influence honest decisions across differentially-framed moral advice. We found that individuals with a strong cultural orientation of establishing their own power and status through competition (i.e., high vertical individualism) were more likely to make dishonest choices, especially when moral advice was drawn from virtue ethics. Our findings suggest the importance of considering different ethical frameworks and cultural differences to design robots that can guide humans to comply with the norm of honesty.
Exploratory prototyping techniques are critical to devising new robot forms, actions, and behaviors, and to eliciting human responses to designed interactive features, early in the design process. In this opinion piece, we establish the contribution of exploratory prototyping to the field of human-robot interaction, arguing that research engaged in design exploration, rather than controlled experimentation, should be focused on flexibility rather than specificity, possibility rather than replicability, and design insights as incubated subjectively through the designer rather than dispassionately proven by statistical analysis. We draw on literature in HCI for examples of published design explorations in academic venues, and to suggest how analogous contributions can be valued and evaluated by the HRI community. Lastly, we present and examine case studies of three design methods we have used in our own design work: physical prototyping with human-in-the-loop control, video prototyping, and virtual simulations.
Inspired by the recent UNESCO report I'd Blush if I Could, we tackle some of the issues regarding gendered AI through exploring the impact of feminist social robot behaviour on human-robot interaction. Specifically we consider (i) use of a social robot to encourage girls to consider studying robotics (and expression of feminist sentiment in this context), (ii) if/how robots should respond to abusive, and antifeminist sentiment and (iii) how ('female') robots can be designed to challenge current gender-based norms of expected behaviour. We demonstrate that whilst there are complex interactions between robot, user and observer gender, we were able to increase girls' perceptions of robot credibility and reduce gender bias in boys. We suggest our work provides positive evidence for going against current digital assistant/traditional human gender-based norms, and the future role robots might have in reducing our gender biases.
The robot rights debate has thus far proceeded without any reliable data concerning public opinion about robots and the rights they should have. We administered an online survey (n = 200) that investigates laypeople's attitudes towards granting particular rights to robots. Furthermore, we asked for the reasons they are willing to grant robots those rights. Finally, we measured general perceptions of robots regarding appearance, capacities, and traits. Results show that rights can be divided into sociopolitical and computing dimensions, and reasons into cognition and compassion dimensions. People generally have a positive view of robot interaction capacities. Attitudes towards robot rights depend on age and experience as well as on the cognitive and affective capacities people believe robots will ever possess. Our results suggest that the robot rights debate stands to benefit greatly from a common understanding of the capacity potentials of future robots.
Instrumental helping has been reported in infants toward other humans but not toward robots. Providing infants with opportunities for action-based assistance to robots might lead to more efficient infant-robot interactions. This paper presents preliminary findings on infants' spontaneous instrumental helping to robots exhibiting motion challenges, and proposes a novel decision-making model for infant-robot interaction that encompasses instrumental helping in its parameters; both in the context of pediatric rehabilitation. Six infants were engaged in a chasing game with a wheeled robot with the goal to follow the robot and ascend an inclined platform (8 sessions, 4 weeks). After infants' instrumental helping toward the robot was identified, a decision tree model was created to evaluate a set of annotated variables as potential predictors to the observed behavior. Next, a Markovian model for robot control was developed where these predictors were used as parameters to promote, in turn, action-based goals for the infants.
Human-robot collaboration is increasingly applied to industrial assembly sequences due to the growing need for flexibility in manufacturing. Assistant systems can help support shared assembly sequences and facilitate collaboration. This contribution shows a workplace installation of a collaborative robot (Cobot) and a spatial augmented reality (SAR) assistant system applied to an assembly use-case. We demonstrate a methodology for the distribution of the assembly sequence between the worker, the Cobot, and the SAR.
As robots and artificially intelligent systems are given more cognitive capabilities and become more prevalent in our societies, the relationships they share with humans have become more nuanced. This paper aims to investigate the influences that embodiment has on a person's decision to reward or punish an honest or deceptive intelligent agent. We cast this exploration within a financial advisement scenario. Our results suggest that people are more likely to choose to reward a physically embodied intelligent agent over a virtual one irrespective of whether the agent has been deceptive or honest and even if this deception or honesty resulted in the individual gaining or losing money. Additionally, our results show that people are more averse to punishing intelligent agents, irrespective of the embodiment, which matches prior research in relation to human-human interaction. These results suggest that embodiment choices can have meaningful effects on the permissibility of deception conducted by intelligent agents.
Displaying emotional states is an important part of nonverbal communication that can facilitate successful interactions. Facial expressions have been studied for their emotional expression, but this work looks at the capacity of body movements to convey different emotions. This work first generates a large set of nonverbal behaviors with a variety of torso and arm properties on a humanoid robot, Quori. Participants in a user study evaluated how much each movement displayed each of eight different emotions. Results indicate that specific movement properties are associated with particular emotions; such as leaning backward and arms held high displaying surprise and leaning forward displaying sadness. Understanding the emotions associated with certain movements can allow for the design of more appropriate behaviors during interactions with humans and could improve people's perception of the robot.
Interactive robots are increasingly being deployed in public spaces that may differ in context from moment to moment. One important aspect of this context is the soundscape of the robot and human's shared environment, such as an airport that is noisy during a weekend rush hour, yet quiet on a weekday evening. Just as humans are adept at adapting their speech appropriately to their environment, robots should adjust their speech characteristics (e.g. speech rate, volume) to their context. We studied the effect of a shared auditory soundscape on the perceived ideal speech rate of an artificial agent. We tasked raters to listen to a combination of text-to-speech (TTS) samples with different speech rates and soundscape samples from freesound.org and to evaluate the appropriateness of the speech combination and social perception of artificial speech. Contrary to our expectations, faster artificial speech in louder environments and slower speech in quieter environments were not preferred by raters. This suggests that further research into how exactly to adapt artificial speech to background noise is necessary.
We introduce and explore the concept of non-volitional behavior change, a novel category of behavior change interventions, and apply it in the context of promoting healthy behaviors through an automated sit-stand desk. While routine use of sit-stand desks can increase health outcomes, compliance decreases quickly and behavioral nudges tend to be dismissed. To address this issue, we introduce robotic furniture that moves on its own to promote healthy movement. In an in-person preliminary study, we explored users' impressions of an autonomous sit-stand desk prototype that changes position at regular pre-set time intervals while participants complete multiple tasks. While in-the-moment self-reported ratings were similar between the autonomous and manual desks, we observed several bi-modal distributions in user's retrospective comparisons and their qualitative responses. Findings suggest about half were receptive to using an autonomous sit-stand desk, while the remaining preferred to retain some level of control.
As social robots are increasingly entering the real world, developing a viable robot application has become highly important. While a growing body of research has acknowledged that the integration of an agile development methodology with user-centered design (UCD) provides advantages for both organizations and end users, integrating UCD in an agile methodology has been a challenging endeavor. The present paper illustrates a user-centered agile approach that integrates user perspectives through formative usability testing during an agile development process of a robot application and thus differentiates from most robot application evaluations, which conduct summative usability testing (i.e., they quantitatively test goal achievement after technological developments). Through an active involvement of organization and end users, the intermediate results of our ongoing project show that the developed social robot application is both useful and usable.
Smooth traffic presupposes fine coordination between different actors, such as pedestrians, cyclists and car drivers. When autonomous vehicles join regular traffic, they need to coordinate with humans on the road. Prior work has often studied and designed for interaction with autonomous vehicles in structured environments such as traffic intersections. This paper describes aspects of coordination also in less structured situations during mundane maneuvers such as overtaking. Taking an ethnomethodological and conversation analytic approach, the paper analyzes video recordings of self-driving shuttle buses in Sweden. Initial findings suggest that the shuttle buses currently do not comply with cyclists' expectations of social coordination in traffic. The paper highlights that communication and coordination with human road users is crucial for smooth flow of traffic and successful deployment of autonomous vehicles also in less structured traffic environments.
Previous Human-Robot Interaction studies have shown that social touching can be established under human-robot touch conditions. To provide the sensation of a robot grasping a person's hand when he/she grasps the robot, we developed a soft robot consisting of airbags and a force sensor. We expect that this mutual touching can play a positive role in alleviating human pain. This paper presents the development of our two prototypes of inflatable haptic devices consisting of airbags.
Social communication difficulties in autism spectrum disorder (ASD) have been associated with poor Theory of Mind (ToM), the ability to attribute mental states to others. Interventions using humanoid robots could improve ToM in ways that may generalize to human-human interactions. Traditionally, ToM has been measured using the Frith-Happé Animations (FHA) task, which depicts interactions between two animated triangles. Recently, we developed a Social Robot Video (SRV) task which depicts interactions between two NAO robots. In this study, we administered both tasks to 8 children with ASD and 9 typically-developing children to examine the validity, reliability, and sensitivity of the SRV task. Results suggest that SRV has face validity, partial inter-rater reliability and could differentiate between the two groups. In sum, the SRV task could be used to assess the effectiveness of ToM interventions using humanoid robots.
Eye behaviour is one of the main modalities used to regulate face-to-face conversation. Gaze aversion and mutual gaze, for example, serve to signal cognitive load, interest or turns during a conversation. While eye blinking is mainly thought to have a physiological function, the rate of blinking is known to increase during conversation suggesting a communicative function for the eye. Recently, it has been shown that a virtual avatar, acting as the receiver in a conversation, could use blinking as a kind of conversational marker influencing the speaker's communicative behaviour. In particular, it has been demonstrated that long eye blinks resulted in shorter answers by the speaker compared with short ones. Here, we set out to investigate this effect when using a humanoid robot as interaction partner, given that robots have both a physical and social presence. Interestingly, however, we could not replicate the result: short or long blinks did not modulate the length of the responses by the human interactant.
Previous research in psychology has found that human faces have the capability of being more distracting under high perceptual load conditions compared to non-face objects. This project aims to assess the distracting potential of robot faces based on their human-likeness. As a first step, this paper reports on our initial findings based on an online study. We used a letter search task where participants had to search for a target letter within a circle of 6 letters, whilst an irrelevant distractor image was also present. The results of our experiment replicated previous results with human faces and non-face objects. Additionally, in the tasks where the irrelevant distractors were images of robot faces, the human-likeness of the robot influenced the response time (RT). Interestingly, the robot Alter produced results significantly different from all other distractor robots. The outcome of this is a distraction model related to the human-likeness of robots. Our results show the impact of anthropomorphism on distracting potential, which should thus be taken into account when designing robots.
We propose the concept of an open-diary drawing behavior for a robot living with a child. In our concept, the robot acts together with the child and draws a diary about the memories of the day. It also shares the diary with other people, such as parents, grandparents, and community members. The purposes of the open diary are to deepen the friendship with the child and to promote communication between the child and other people. Today, people lose touch with each other due to social change; we believe the robot's open diary is one solution to this problem. In this study, we conducted two preliminary experiments to show the potential and the design principles of the open diary. A diary about what the robot had learned from the child pleased the child. Also, when the robot drew a humorous diary, people could enjoy the diary together.
With the pandemic preventing access to universities and consequently limiting in-person user studies, it is imperative to explore other mediums for conducting user studies for human-robot interaction. Virtual reality (VR) presents a novel and promising research platform that can potentially offer a creative and accessible environment for HRI studies. Despite access to VR being limited given its hardware requirements (e.g. need for headsets), web-based VR offers universal access to VR utilities through web browsers. In this paper, we present a participatory design pilot study, aimed at exploring the use of co-design of a robot using web-based VR. Results seem to show that web-based VR environments are engaging and accessible research platforms to gather environment and interaction data in HRI.
Emotions serve important regulatory roles in social interaction. Although recognition, modeling, and expression of emotion have been extensively researched in human-robot interaction and related fields, the role of human emotion in perceptions of and interactions with robots has so far received considerably less attention. We here report inconclusive results from a pilot study employing an affect induction procedure to investigate the effect of people's emotional state on their perceptions of human-likeness and mind in robots, as well as attitudes toward robots. We propose a new study design based on the findings from this study.
As the number of online studies in the field of human-robot interaction (HRI) increases, the comparability of the results of online studies with those of laboratory experiments needs further investigation. In this one-sample study, 29 participants experienced three different commercially available service robots, first in an online session and then in a lab experiment, and evaluated the robots regarding trust, fear, and intention to use the robot. Furthermore, several robot characteristics were evaluated (e.g., humanness, uncanniness). Overall, study results indicate high comparability of findings from online and lab experiments for trust, fear and robot characteristics like humanness and uncanniness. The same relative differences between the robots were found for both presentation methods except for intention to use and robot reliability. This preliminary study provides insights into online study validity and makes recommendations for future research.
Social robots in public places could be a useful tool to guide and remind people to adhere to general regulations (e.g., wearing a mask, keeping social distance during a pandemic). Additionally, robots could be a useful assistive tool for public order offices, such as by reducing risks of infection for employees. However, it is uncertain whether and how robots could enhance regulation adherence. To this end, we present the results of a 2 (distraction: yes/no, between-subjects) x 2 (argument: strong/weak, within-subjects) mixed-design HRI video study (n=83) investigating the argument's persuasiveness based on the Elaboration Likelihood Model of persuasion (ELM). Participants watched a video of a robot persuading people to wear a mask using either a strong or a weak argument. As a distraction, participants had to either count the word mask in the video or not. Our results show that the distraction had no influence, while the argument's strength significantly influenced the robot's perceived persuasiveness.
Advances in autonomous technology have led to an increased interest in human-autonomy interactions. Generally, the success of these interactions is measured by the joint performance of the AI and the human operator. This performance depends, in part, on the operator having appropriate, or calibrated, trust in the autonomy. Optimizing the performance of human-autonomy teams therefore partly relies on the modeling and measuring of human trust. Theories and models have been developed on the factors influencing human trust in order to properly measure it. However, these models often rely on self-report rather than more objective, real-time behavioral and physiological data. This paper seeks to build on theoretical frameworks of trust by adding objective data to create a model capable of finer-grained temporal measures of trust. Presented herein is SMRTT: SEM of Multimodal Real Time Trust. SMRTT leverages Structural Equation Modeling (SEM) techniques to arrive at a real-time model of trust. Variables and factors from previous studies and existing theories are used to create components of SMRTT. The value of adding physiological data to the models to create real-time monitoring is discussed along with future plans to validate this model.
'Food', when mentioned in Human-Robot Interaction (HRI) research, is most often in the context of functional applications of automation, delivery, and assistance. Food has, however, not been explored as a medium for social expression or building relationships with social robots. Using web-based examples of robot food and our pilot collection of LOVOT and AIBO robot users' Tweets about their practices of feeding their robots, we show how food has the potential to sustain interactions, increase enjoyment, sociability and companionship in HRI, enhance life-likeness, autonomy, and agency for robots, and open up opportunities for community building among robot users. We present design implications of food for HRI, and urge HRI researchers to envision food as a facet of Human-Robot relationships and Human-Food-Robot interaction as a celebratory, provocative, and promising domain for HRI and social robot design.
With an increasing number of home and social robot products, it is essential for the general population to feel comfortable in using and understanding these robots in their homes. The goal of this research is to understand the general population's definition of "robot state." We conducted 11 participatory design groups (PDGs) (n=30), in which participants completed two exercises: (1) Memory-based: they recalled robots from their past to come up with a working "robot state" definition through a series of exercises, and (2) Example-based: they saw short videos of 3 different home robots. Each PDG session yielded a set of "robot states" they felt were important to communicate to a user and a "robot state" definition, which were tested with the same set of participants via an online survey. We found that On/off/booting was rated as significantly more important than all other robot states. Interestingly, task-related stimuli did not result in task-related states being rated as more important to communicate to the user. We believe establishing fundamental knowledge of "robot states" will increase the acceptability of home robots by users and aid robot designers by providing information on the states that are essential to communicate.
Social robots increasingly find their way into homes, especially to target families with small children. These commercially available robots are becoming widely accessible, but the research on them was largely confined to a lab or classroom environment, and their long-term use is rarely studied. Moreover, while child-robot interactions in many domains such as education and health are widely explored, little is known about how family context influences the children's perception of a social robot in their home environment and their interaction in day-to-day activities. In this paper, we proposed a longitudinal study that looks at the interaction between children and their home reading companion robot - Luka. Children's interaction and perception of the robot, along with the influence of their family context will be measured and evaluated over time.
We investigated the hypothesis that conversation between two strangers would be encouraged, and that they would improve each other's likability, if a robot prompted them to make small talk in advance. In our experiment, a participant remained in a waiting space with an experimental cooperator for five minutes, and then they did a collaborative task together. The experiment had two conditions: one was the with-intervention condition, in which a robot asked them to make small talk while they remained in the waiting space; the other was the without-intervention condition, in which there was no robot in the waiting space. The results showed no significant difference in the number of the participant's utterances in the collaborative task (t=-0.676, p=0.511, d=-0.350), and no significant difference in the participant's likability rating of the cooperator (t=-0.781, p=0.449, d=0.404). These results did not support the hypothesis, suggesting that it is difficult for a robot to affect the relationship between two strangers simply by prompting them to speak.
Running with others is motivational but finding running partners is not always accessible due to constraints such as being in remote locations. We present a novel concept of augmenting remote runners' experience by mediating a running group using drone-projected visualisations. As an initial step, a team of five interaction designers apply a user-centred design approach to scope out possible visual prototypes that can mediate a running group. The design team specifically focused on visual prototypes through a series of workshops. We report on the impressions of the visual prototypes from six potential users and present future directions to this project.
In order to reinforce behavior modification, we first applied a user's favorite TSUNDERE-type avatar to operant conditioning, an approach that reinforces desired behaviors by providing a combination of punishment and reward. Then, we designed TSUNDERE Interaction, an interaction in which that avatar uses TSUN (cold actions) as punishment and DERE (kind actions) as reward. Furthermore, we use a 3D virtual avatar superimposed on the robot using augmented reality technology, and use the user's state of concentration to provide appropriate feedback to the user. In this paper, we discuss the core concepts we have implemented to develop the system based on this idea as a means of behavior modification.
The aim of this research was to investigate whether preferences of U.S. adults regarding autonomous vehicles have changed in the past decade. We believe this to be indicative of the effect of cultural shifts over time on preferences regarding robots, similar to the effect of cultural and national differences on preferences regarding robots. By replicating a 2009 survey regarding autonomous vehicle parking, we found that participants ranked four out of six parking and transportation options significantly differently now, particularly for an autonomous vehicle with no override, a taxi, driving a standard vehicle, and being next to a vehicle driven by another person. Additionally, we found partial support for the idea that participants who were more informed about autonomous vehicle technology showed an increase in preferences for autonomous vehicles.
Therapist-operated robots can play a uniquely impactful role in helping children with Autism Spectrum Disorder (ASD) practice and acquire social skills. While extensive research within Human Robot Interaction has focused on teleoperation interfaces for robots in general, little work has been done on teleoperation interface design for robots in the context of ASD therapy. Moreover, while clinical research has shown the positive impact robots can have on children with Autism, much of that research has been performed in a controlled environment, with little understanding of the way these robots are used "in the wild". We analyze archival data of therapists teleoperating robots as part of their regular therapy sessions, to (1) determine common themes and difficulties in therapists' use of teleoperation interfaces, and (2) provide design recommendations to improve therapists' overall experience. We believe that following these recommendations will help maximize the effectiveness of ASD therapy with Socially Assistive Robots and the scale at which it can be deployed.
Presenting realistic grasping interaction with virtual objects in mixed reality (MR) is an important issue in promoting the use of MR, for example in the robot-teaching domain. However, the intended interaction is difficult to achieve in MR due to the difficulty of depth perception and the lack of haptic feedback. To make intended grasping interactions in MR easier to achieve, we propose visual cues (a contact web) that represent the contact state between the user's hand and an MR object. To evaluate the effect of the proposed method, we performed two experiments. The first experiment determines the grasp type and object to be used in the evaluation of the second experiment, and the second experiment measures the time taken to complete grasping tasks with the object. Both objective and subjective evaluations show that the proposed visual cues entailed a significant reduction in the time required to complete the task.
This paper addresses the effects of priming information that shapes people's beliefs about the presented partner: a human or an android. We investigate two reaction measures: the time from touch to the start of a reaction behavior, and the length of that behavior. We conducted a web-survey-based experiment to investigate whether reaction times for androids were affected by priming information. Our results showed that people exhibited a similar tendency toward androids whether they believed they were interacting with a human or with an android; i.e., we found no significant effects of priming information.
Intelligent robots are redefining autonomous tasks but are still far from being fully capable of assisting humans in day-to-day tasks. An important requirement of collaboration is to have a clear understanding of each other's expectations and capabilities, a lack of which may lead to serious issues such as loose coordination between teammates, ineffective team performance, and ultimately mission failures. Hence, it is important for the robot to behave explicably to make itself understandable to the human. One of the challenges here is that the expectations of the human are often hidden and dynamically changing as the human interacts with the robot. Existing approaches in plan explicability often assume the human's expectations are known and static. In this paper, we propose the idea of active explicable planning to address this issue. We apply a Bayesian approach to model and predict dynamic human beliefs in order to be more anticipatory, and hence can generate more efficient plans without impacting explicability. We hypothesize that active explicable plans can be more efficient and more explicable at the same time, compared to the plans generated by existing methods. From the preliminary results of an MTurk study, we find that our approach effectively captures the dynamic beliefs of the human, which can be used to generate efficient and explicable behavior that benefits from dynamically changing expectations.
We present a formal verification method that provides a model-based approach to human-robot interaction (HRI) in medical settings by utilizing linear temporal logic (LTL). We define high-level HRI procedures with an LTL-based framework to create algorithmically sound robots that can function independently in dynamic HRI environments. Our approach's theoretical infallibility confers particular advantages for medical robots, where safety and informative communications are crucial. In order to establish the viability of our proposed method, and with the ongoing COVID-19 pandemic in mind, we developed an LTL knowledge base for a medical robot tasked with HRI-intensive roles of patient reception and triage. We designed robotic simulations based on our LTL architecture to test our approach, employing randomized inputs to generate unpredictable HRI environments. We then conducted formal verification via an automata-theoretic approach by evaluating our simulated robot against generalized Büchi automata. We hope our LTL-based approach can enable future achievements in HRI.
Interaction among humans does not always proceed without errors; situations might happen in which a wrong word or attitude can cause the partner to feel uneasy. However, humans are often very sensitive to these interaction failures and may be able to fix them. Our research aims to endow robots with the same skill. Thus the first step, presented in this short paper, investigates to what extent a humanoid robot can impact someone's Comfortability in a realistic setting. To capture natural reactions, a set of real interviews performed by the humanoid robot iCub (acting as the interviewer) were organized. The interviews were designed in collaboration with a journalist from the press office of our institution and are meant to appear on the official institutional online magazine. The dialogue along with fluent human-like robotic actions were chosen not only to gather information about the participants' personal interests and professional career, necessary for the magazine column, but also to influence their Comfortability. Once the experiment is completed, the participants' self-report and spontaneous reactions (physical and physiological cues) will be explored to tackle the way people's Comfortability may be manifested through non-verbal cues, and the way it may be impacted by the humanoid robot.
The ability to infer latent behaviours such as the degree of engagement of humans interacting with social robots is still considered a challenging task in the human-robot interaction (HRI) field. Data-driven techniques based on machine learning were recently shown to be a promising approach for tackling the users' engagement detection problem; however, the resolution often involves multiple consecutive stages. This in turn makes these techniques either incapable of capturing the users' engagement, especially in a dynamic environment, or un-deployable because of their inability to track engagement in real-time. This study is based on a data-driven framework, and we propose an end-to-end technique based on a unique 3D convolutional neural network architecture. Our proposed framework was trained and evaluated using a real-life dataset of users interacting spontaneously with a social robot in a dynamic environment. The framework has shown promising results over three different evaluation metrics when compared against three baseline approaches from the literature, with an F1-score of 76.72. Additionally, our framework has achieved a resilient real-time performance of 25 Hz.
We explored the feasibility and limitations of designing and developing a robot-human interactive board game known as memory, a turn-based game of matching card pairs. Our analysis of this case study suggests significant limitations and the need for further interaction improvements before exposing the prototype to users. In terms of technical limitations, the variability of light and the lack of sharp camera imaging make it challenging to identify cards uniquely. The OpenMANIPULATOR-X (the robotic arm used) has limited dexterity and could not mimic the card-flipping interaction. In terms of interaction design, we analysed the robot's morphology, expressiveness and modifications to the cards. We suggest running comparative studies with well-known humanoid robots and humans. This project is the initial step toward developing more engaging and interactive games between humans and robots. Future experiments aim to explore the emotional, physical and mental benefits users could obtain from playing games with robots.
We describe the design and implementation of a multi-modal storytelling system. Multiple robots narrate and act out an AI-generated story whose plot can be dynamically altered via non-verbal audience feedback. The enactment and interaction focus on gestures and facial expressions, which are embedded in a computational framework that draws on cognitive-linguistic insights to enrich the storytelling experience. In the absence of in-person user studies for this late-breaking research, we present the validity of the project's separate modules and introduce it to the HRI field.
Social robotics aims to equip robots with the ability to exhibit socially intelligent behaviour while interacting in a face-to-face context with human partners. An important aspect of face-to-face social interaction includes the efficient recognition of the surroundings, the environment and the objects within it, so as to be able to discuss, describe and provide instructions to assist continuous collaboration between the speaker and the listener. Although humans can efficiently learn from their interlocutors to perceptually ground word meanings of visual objects from just a single example, teaching robots to ground word meanings remains a very challenging, expensive and resource-intensive task. In this paper, we present a novel framework for robot concept acquisition on the fly, by combining few-shot learning with active learning. In this framework, a robot learns new concepts through collaboratively performing tasks with humans. We compared different learning strategies in a task-based evaluation with human participants, and we found that active learning significantly outperforms a non-active learning alternative and is preferred by the participants, while increasing their trust in the social robot's capabilities.
Mind perception is considered to be the ability to attribute mental states to non-human beings. As social robots increasingly become part of our lives, one important question for HRI is to what extent we attribute mental states to these agents and the conditions under which we do so. In the present study, we investigated the effect of appearance and the type of action a robot performs on mind perception. Participants rated videos of two robots in different appearances (one metallic, the other human-like), each of which performed four different actions (manipulating an object, verbal communication, non-verbal communication, and an action that depicts a biological need) on Agency and Experience dimensions. Our results show that the type of action that the robot performs affects the Agency scores. When the robot performs human-specific actions such as communicative actions or an action that depicts a biological need, it is rated to have more agency than when it performs a manipulative action. On the other hand, the appearance of the robot did not have any effect on the Agency or the Experience scores. Overall, our study suggests that the behavioral skills we build into social robots could be quite important in the extent we attribute mental states to them.
For robots to play an increased role in healthcare settings, an understanding of nurse-patient human-centric interactions in their unique scenario-based settings is crucial for Human-Robot Interaction (HRI) design. We aim to investigate HRI design for a robot to perform nursing tasks in hospitals. The robot's personality is designed with animated eyes, a voice with a local accent, and polite contextual phrases to mimic nurses' behavior as they engage with patients. The study was designed with scenario-based use cases focusing on user-centric and task-centric HRI components.
This paper reports on findings based on 2 online surveys completed by 60 participating nurses from a local hospital. In the first survey, the tasks performed by the robot include greeting, patient identification, pain-level checks, and vital-signs measurement. This is to identify the correlation of politeness, friendliness and strictness exhibited by the robot, and the effects on the user's perception of trustworthiness and their willingness to follow the robot's instructions for a given task. In the second survey, a comparison is made between an item delivery task performed by a nurse and the same task performed by a robot, to assess user receptivity and the usability of the robot in assisting with the task, given its ability to communicate its service intent.
The results imply that in the current design, if the robot communicates with the user with politeness and friendliness, it will increase the user's perceived trustworthiness of the robot. When the robot exhibited strictness to get the patient to comply, a decrease in the user-perceived friendliness, politeness, and robot trustworthiness was also revealed in the findings. Finally, participants indicated high receptivity toward interacting with the robot and the feasibility of a robot-assisted nurse workflow. These results support the progression to further development towards user validation studies in an actual hospital ward setting.
In the light of recent trends toward introducing Artificial Intelligence (AI) to enhance Human-Robot Interaction (HRI), intelligent virtual assistants (VAs) driven by Natural Language Processing (NLP) have received ample attention in the manufacturing domain. However, most VAs either bind tightly to a specific robotic system or lack efficient human-robot communication. In this work, we implement a layer of interaction between the robotic system and the human operator. This interaction is achieved using a novel VA, called Max, as an intelligent and robust interface. We expand the research work in three directions. Firstly, we introduce a RESTful-style client-server architecture for Max. Secondly, inspired by studies of human-human conversations, we embed conversation strategies into human-robot dialog policy generation to create a more natural and humanized conversation environment. Finally, we evaluate Max over multiple real-world scenarios, from the exploration of an unknown environment to package delivery, by means of an industrial robot.
When a robot is deployed to learn a new task in a "real-world" environment, there may be multiple teachers and therefore multiple sources of feedback. Furthermore, there may be multiple optimal solutions for a given task and teachers may have preferences among those various solutions. We present an Interactive Reinforcement Learning (I-RL) algorithm, Multi-Teacher Activated Policy Shaping (M-TAPS), which addresses the problem of learning from multiple teachers and leverages differences between them as a means to explore the environment. We show that this algorithm can significantly increase an agent's robustness to the environment and quickly adapt to a teacher's preferences. Finally, we present a formal model for comparing human teachers and constructed oracle teachers and the way that they provide feedback to a robot.
Helper's high is the phenomenon that helping someone or something else can lead to psychological benefits such as mood improvement. This study investigates if a robot pet can, like a real pet, induce helper's high in people interacting with it. A Vector robot was programmed to express the need for daily exercise and attention, and participants were instructed how to help the robot meet those needs. Our within-subjects design had two conditions: with and without emotional behaviour modifiers to the robot's behaviour. Our primary research question is whether behaviours that conveyed emotion as well as needs would lead to empathy in the participants, which would create a stronger helper's high effect than purely functional need-expression behaviours. We present a long-term (4 day) remote study design that not only facilitates the kind of interactions needed for helper's high, but abides by government guidelines on Covid-19 safety (under which a laboratory study is not possible). Preliminary results suggest that Vector was able to improve the mood of some participants, and mood changes tended to be greater when Vector expressed behaviours with emotional components. Our post-study interview data suggests that individual differences in living environment and mood-impacting external factors affected Vector's efficacy in influencing mood.
Humans perform various complex tasks with their upper limbs based on bimanual coordination. A fundamental feature of our bimanual coordination is the natural tendency to synchronize the upper limbs, resulting in preferred symmetrical patterns of interlimb coordination. In this early-stage study, based on the coarse-to-fine human-robot collaboration framework, we investigate the possibility of human-robot collaboration for accurate manipulation under force feedback utilizing the bimanual synchronous mechanism. Preliminary results suggested the effectiveness of the proposed method.
An immersive drone-flying modality known as First-Person View (FPV) flying is increasing in popularity. Nonetheless, due to its recency, there is a lack of research focusing on FPV drones. Specifically, the journey FPV pilots go through when learning how to fly has not yet been studied. Understanding their learning experience allows the development of FPV drones with a gentle learning curve for the pilots. We conducted an online survey with 515 FPV pilots to evaluate how they learned to fly, and how they recommend beginners learn. We found that most pilots learned using a stabilized flight mode, but the majority of them later switched to an acrobatic (non-stabilized) flight mode. Although only half of the surveyed pilots used flight simulators to learn, 90% of all pilots recommend that beginners use simulators. Our results allow further understanding of the emerging FPV pilot culture and the composition of research questions for future studies.
While competitive games have been studied extensively in the AI community for benchmarking purposes, there has only been limited discussion of human interaction with embodied agents under competitive settings. In this work, we aim to motivate research in competitive human-robot interaction (competitive-HRI) by discussing how human users can benefit from robot competitors. We then examine the concepts from game AI that we can adopt for competitive-HRI. Based on these discussions, we propose a robotic system that is designed to support future competitive-HRI research. A human-robot fencing game is also proposed to evaluate a robot's capability in competitive-HRI scenarios. Finally, we present the initial experimental results and discuss possible future research directions.
To encourage visitors to use guiding agents in public spaces, this study adopted a design approach and focused on identifying behavioral factors that would encourage interaction. Six factors of agent behavior were hypothesized, and an experiment was performed in a public space with real people. One or two communication robots were installed near the entrance of a university library. The reactions of library users passing by the robot were observed and recorded under different robot behavior conditions. The results showed that the robots were able to attract attention by uttering guidance information and looking in various directions while waiting for people. When the robots spoke directly to nearby people, the people tended to interact with them. The results of a questionnaire survey suggested that the voice, speech content, and appearance of the robots are also important factors.
Social robots can improve quality of life for children undergoing prolonged hospital stays, both by offering a fun and interactive distraction and by providing practical assistance during procedure support and pain management. In this paper, we present important considerations for robots involved in pediatric contexts. These considerations are based on a need-finding interview conducted with a gaming technology specialist at a children's hospital. By summarizing their experiences, we identify considerations affecting the design of robot morphology and behavior for this unique use case, as well as explore the role of parents, healthcare staff, and child life specialists.
Interpersonal communication and relationship building promote successful collaborations. This study investigated the effect of conversational nonverbal and verbal interactions of a robot on bonding and relationship building with a human partner.
Participants interacted with two robots that differed in their nonverbal and verbal expressiveness. The interactive robot actively engaged the participant in a conversation before, during and after a collaborative task whereas the non-interactive robot remained passive. The robots' nonverbal and verbal interactions increased participants' perception of the robot as a social actor and strengthened bonding and relationship building between human and robot. The results of our study indicate that the evaluation of the collaboration improves when the robot maintains eye contact, the robot is attributed a certain personality, and the robot is perceived as being alive.
Our study could not show that an interactive robot receives more help from the collaboration partner. Future research should investigate additional factors that facilitate helpful behavior among humans, such as similarity, attributional judgement and empathy.
Recently, the restaurant industry has attempted to introduce service robots for efficient management. We conducted a survey study to investigate the expectations customers and employees hold of restaurant robot service. It was found that customers and employees share many common expectations across the service elements that the restaurant industry usually adopts. Our results suggest that personalized service is important for customer-oriented robot service in restaurants.
Although there are existing frameworks for designing robots within the field of HRI, there is not yet a viable, all-encompassing framework that bridges the gap between academic research, industry development, and users in the design process. Through two online workshops and an individual company assignment, we identified industry needs, concerns, and challenges relevant to the development of the Robot Design Canvas (RODECA). We present our preliminary work with seven industry partners and scientists from three research institutions. This research will inform the development of a versatile robot design framework that accounts for user experience early in the design process and that can be validated through systematic investigation across research and industry applications. Such a tool would help bridge the gap between HRI research and commercial robot development.
Autonomous or lively social robots will often exhibit behavior that is surprising to users and calls for explanation. However, it is not clear how such robot behavior should best be explained. Our previous work showed that different types of a robot's self-explanations, citing its actions, intentions, or needs, alone or in causal relations, have different effects on users (Stange & Kopp, 2020). Further analysis of the data from the cited study implies that explanations in terms of robot needs (e.g., for energy or social contact) did not adequately justify the robot's behavior. In this paper we study the effects of a robot citing the user's needs to explain its behavior. Our study is based on the assumption that users may feel more connected to a robot that aims to recognize and incorporate the users' needs in its decision-making, even when the resulting behavior turns out to be undesirable. Results show that explaining robot behavior with user needs neither led to higher gains in understanding or desirability of the behaviors, nor helped to justify them better than explaining it with robot needs. Further, a robot referring to user needs was not perceived as more likable, trustworthy, or mindful, nor were users' contact intentions increased. However, an in-depth analysis showed different effects of explanations for different behaviors. We discuss these differences in order to clarify which factors should inform the content and form of a robot's behavioral self-explanations.
Avoiding dialogue breakdown is important in HRI and HAI. In this paper, we investigated why dialogue breakdown occurs. We hypothesize that confusion between elements and category is one important reason: elements refers to individual strange utterances, while category refers to the overall ability and performance of a robot. We hypothesized that a user who confuses elements and category will tend to lose the motivation to continue interacting with a robot or agent when the robot or agent makes a strange utterance. To verify this hypothesis, we conducted an experiment. We asked participants to perform a memory task cited in a previous work. We separated them into two groups, a no-confusion group and a confusion group, and showed them a movie in which a robot made a mistake on a math problem. Afterwards, we asked them in a questionnaire about their impression of the robot. We conducted a t-test between the two groups for each question. As a result, the participants who confused elements and category tended to brand the robot as having low ability and performance when it made a mistake, whereas those who were not confused showed neither reduced trust in the robot nor reduced motivation to continue interacting when the robot made a mistake. These results support our hypothesis.
The study was part of a confidential pilot project concerning the introduction of Autonomous Mobile Robots (AMRs) in an order-picking process. The study assesses the impact of working with an AMR on logistics workers (hereinafter referred to as 'operators'). Two research questions were investigated. First, does working with an AMR lead to increased psychosocial workload? And second, what is the perception of the operators working with an AMR? Very little research outside a lab context has been done so far on the impact of working with an AMR. This study contributes to the understanding of the impact of working with robots outside the lab, in an industrial setting. The results of this study can be summarized as follows: 1) working with an AMR does not lead to extra psychosocial workload, and 2) there is a positive perception of and attitude toward working with an AMR.
Novel forms of two-player telegame interaction might extend and enhance social connection between physically separated persons. We examine the potential of a Rock-Paper-Scissors game conveyed via an embodied telepresence agent. We compare a game interaction with an autonomous robot and a game interaction with a teleoperated version of the same robot. Both systems are equipped with a perception module that processes and recognizes the hand movement of the human players colocated with the robot. In the classic interaction, the robot acts as the opponent player. In the telegame setting, the robot represents and mirrors the actions of the remote human player as the opponent. We integrate the systems on the tabletop robot platform Haru and evaluate user impressions with respect to game experience and robot sociality. Results show that the telegame is perceived more positively, indicating its potential for physically distant, but socially enhanced interaction in the future.
The practical value of trust in human-robot interaction (HRI) lies in strengthening humans' long-term collaboration and interaction with robots. While much work focuses on determining the various factors influencing trust in HRI, vulnerability as a precondition of trust has not yet been explored from a robot-centered perspective. Based on eight semi-structured interviews with experts, I set out to identify robot vulnerabilities and present a systematic overview that resulted in a total of 13 categories grouped into four different themes. In the discussion, I focus specifically on how the experts interpreted the notion of vulnerability as it relates to robots and how malicious human behavior can be problematic when aiming to ensure mutual trust in HRI.
Older adults with late-life depression often suffer from cognitive impairments, such as dementia. This patient group is not prioritised for psychotherapy and is therefore often medicated with antidepressants. However, in the last 20 years, the evidence base for psychotherapy has increased, and one promising area is technology-based psychotherapy. Investigations of the possibilities in this area are also motivated by the Covid-19 pandemic, during which many older adults are isolated, making it impossible for them to meet with a therapist. Therefore, we have developed a Wizard of Oz system allowing a human therapist to control a humanoid robot through a graphical user interface, including natural speech for natural conversations, which enables the robot to be stationed in, for example, a care home. For future research, we will conduct user-centered studies with both therapists and older adults to further develop the system.
While interacting with a social robot, children have a need to express themselves and have their expressions acknowledged by the robot, a need that often goes unaddressed by the robot due to its limitations in understanding the expressions of children. To keep the child-robot interaction manageable, the robot takes control, undermining children's ability to co-regulate the interaction. Co-regulation is important for having a fulfilling social interaction. We developed a co-creation activity that aims to facilitate more co-regulation. Children are enabled to create sound effects, gestures, and light shows for the robot to use during their conversation. Results from a user study (N = 59 school children, 7-11 y.o.) showed that the co-creation activity successfully facilitated co-regulation by improving children's agency. Co-creation furthermore increases children's acceptance of the robot.
In this work we introduce the concept of Robot Vitals and propose a framework for systematically quantifying the performance degradation experienced by a robot. A performance indicator or parameter can be called a Robot Vital if it can be consistently correlated with a robot's failure, faulty behaviour or malfunction. Robot Health can be quantified as the entropy of observing a set of vitals. Robot vitals and Robot health are intuitive ways to quantify a robot's ability to function autonomously. Robots programmed with multiple levels of autonomy (LOA) do not scale well when a human is in charge of regulating the LOAs. Artificial agents can use robot vitals to assist operators with LOA switches that fix field-repairable non-terminal performance degradation in mobile robots. Robot health can also be used to aid a tele-operator's judgement and promote explainability (e.g. via visual cues), thereby reducing operator workload while promoting trust and engagement with the system. In multi-robot systems, agents can use robot health to prioritise robots most in need of tele-operator attention. The vitals proposed in this paper are: rate of change of signal strength; sliding window average of difference between expected robot velocity and actual velocity; robot acceleration; rate of increase in area coverage and localisation error.
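The abstract's central quantity, Robot Health as the entropy of observing a set of vitals, can be made concrete with a short sketch. The sketch below is an assumed reading of that definition, not the authors' implementation: each vital reading is first mapped to a probability that it indicates degradation (the logistic mapping and the baseline values are hypothetical), and health is then summarized as the total binary entropy of those observations.

```python
# A minimal sketch, assuming vitals are first mapped to anomaly probabilities;
# neither the mapping nor the baseline values below come from the paper.
import numpy as np

def vital_anomaly_probs(readings, baselines, scale=4.0):
    """Map raw vital readings (e.g. the five vitals proposed in the paper)
    to anomaly probabilities via a logistic squash of their deviation."""
    deviation = np.abs(np.asarray(readings) - np.asarray(baselines))
    return 1.0 / (1.0 + np.exp(-scale * (deviation - 0.5)))

def robot_health(anomaly_probs, eps=1e-12):
    """Entropy of observing the set of vitals: the sum of the binary
    entropies of each vital's anomaly observation (higher = less certain)."""
    p = np.clip(np.asarray(anomaly_probs), eps, 1 - eps)
    return float(-np.sum(p * np.log2(p) + (1 - p) * np.log2(1 - p)))

# Vitals: signal-strength rate, velocity mismatch, acceleration,
# coverage rate, localisation error (order as proposed in the paper).
p = vital_anomaly_probs([0.9, 0.3, 0.1, 0.5, 0.8], [0.2, 0.2, 0.1, 0.4, 0.3])
print(f"robot health (bits): {robot_health(p):.2f}")
```

Under this reading, an artificial agent could rank robots by this scalar to prioritise which one most needs tele-operator attention, as the abstract suggests for multi-robot settings.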
Interest in web-based human-robot interaction (HRI) has grown since it was initially introduced. Similarly, interest in social signals such as voice and facial expressions continues to expand. More recently, researchers have also gained interest in the feasibility of using neurophysiological information to enhance HRI. While both social signals and web-based HRI have seen growing interest, there is limited work exploring potential advances at the intersection of these two areas. This paper describes our efforts to investigate this intersection by integrating: 1) web-based social signal interpretation, 2) hybrid block/text scripting interfaces, and 3) ROS integration via rosbridge. We further discuss potential advantages and current challenges concerning web-based platforms for prototyping social robotic applications.
Robots are rapidly gaining acceptance in recent times, where the general public, industry and researchers are starting to understand the utility of robots, for example for delivery to homes or in hospitals. However, it is key to understand how to instil the appropriate amount of trust in the user. One aspect of a trustworthy system is its ability to explain actions and be transparent, especially in the face of potentially serious errors. Here, we study the various aspects of transparency of interaction and its effect in a scenario where a robot is performing triage when a suspected Covid-19 patient arrives at a hospital. Our findings consolidate prior work showing a main effect of robot errors on trust, but also showing that this is dependent on the level of transparency. Furthermore, our findings indicate that high interaction transparency leads to participants making better informed decisions on their health based on their interaction. Such findings on transparency could inform interaction design and thus lead to greater adoption of robots in key areas, such as health and well-being.
In this study, we examined the effect of mindfulness meditation facilitated by human-robot interaction (HRI) on brain activity. EEG signals were collected from two groups of participants: a Meditation group, who practiced mindfulness meditation with a social robot, and a Control group, who only listened to a lecture by the robot. We compared brain functional connectivity between the two groups by computing EEG phase synchrony during the HRI session. The results revealed significantly lower global phase synchrony in the beta frequency band for the Meditation group, which has previously been reported as an indication of reduced cognitive processing and of achieving a mindful state in experienced meditators. Our findings demonstrate the potential of Socially-Assistive Robots (SAR) for integration in mental healthcare and for optimization of intervention effects. Additionally, our study puts forward new measures for objective monitoring of the effect of HRI on the user's neurophysiological responses.
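The global phase synchrony referenced here is commonly computed as the mean pairwise phase-locking value (PLV) across EEG channels in the band of interest. The sketch below shows that standard computation for the beta band; the sampling rate, filter order, and band edges are assumptions, since the abstract does not specify the authors' exact pipeline.

```python
# A hedged sketch of global beta-band phase synchrony via pairwise PLV.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def global_beta_plv(eeg, fs=250.0, band=(13.0, 30.0)):
    """eeg: (channels, samples). Band-pass to beta, take the Hilbert phase,
    and average the phase-locking value over all channel pairs."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phases = np.angle(hilbert(filtfilt(b, a, eeg, axis=1), axis=1))
    n = phases.shape[0]
    plvs = [np.abs(np.mean(np.exp(1j * (phases[i] - phases[j]))))
            for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(plvs))

print(global_beta_plv(np.random.randn(8, 2500)))  # near 0 for random signals
```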
Because robots are perceived as moral agents, they hold significant persuasive power over humans. It is thus crucial for robots to behave in accordance with human systems of morality and to use effective strategies for human-robot moral communication. In this work, we evaluate two moral communication strategies: a norm-based strategy grounded in deontological ethics, and a role-based strategy grounded in role ethics, in order to test the effectiveness of these two strategies in encouraging compliance with norms grounded in role expectations. Our results suggest two major findings: (1) reflective exercises may increase the efficacy of role-based moral language and (2) opportunities for moral practice following robots' use of moral language may facilitate role-centered moral cultivation.
Despite newly developed forms of narration, listening to stories remains a popular leisure activity. However, storytelling is also evolving. With social robots taking on activities initially conducted by humans, they are emerging as a new storytelling medium. Robots are able to extend human storytelling by adding sounds to the narrative to enhance recipients' emotional reaction to, and transportation into, the story. To this end, we conducted an online study comparing a traditional audio book to a robotic storyteller, focusing on the influence of adding sound effects and music in both storytelling approaches. Results show that neither emotion induction nor transportation was significantly affected by the storytelling medium or by additional sounds, but descriptive values indicate a trend towards higher emotion induction and transportation when adding sound to robotic storytelling relative to the audio book condition without additional sounds. Live replications based on this preliminary study might reveal less ambiguous findings.
The users' positive or negative attitude towards robots is a crucial factor in human-robot interaction (HRI). We conducted a preliminary online study comparing a storytelling scenario with personal framing to one with impersonal framing, to investigate the effects of the robot's self-introduction on users' attitude towards and likeability of the robot, their transportation into the story, and their memory of the story told. No significant group differences were found, but transportation correlated significantly with attitude, likeability, and memory. Thus, transportation might be a promising factor which, when increased, can positively affect HRI.
Augmented reality technology can enable robots to visualize their future actions, giving users crucial information to avoid collisions and other conflicting actions. Although a robot's entire action plan could be visualized (such as the output of a navigational planner), how far into the future it is appropriate to display the robot's plan is unknown. We developed a dynamic path visualizer that projects the robot's motion intent at varying lengths depending on the complexity of the upcoming path. We tested our approach in a virtual game where participants were tasked to collect and deliver gems to a robot that moves randomly towards a grid of markers in a confined area. Preliminary results on a small sample indicate no significant effect on task performance; however, open-ended responses reveal participants' preference for visuals that show longer path projections.
When a robot is deployed to learn a new task in a "real-world" environment, there may be multiple teachers and therefore multiple sources of feedback. Furthermore, there may be multiple optimal solutions for a given task, and teachers may have preferences among those various solutions. We present an Interactive Reinforcement Learning (I-RL) algorithm, Multi-Teacher Activated Policy Shaping (M-TAPS), which addresses the problem of learning from multiple teachers and leverages differences between them as a means to explore the environment. We show that this algorithm can significantly increase an agent's robustness to the environment and quickly adapt to a teacher's preferences. Finally, we present a formal model for comparing human teachers and constructed oracle teachers and the way that they provide feedback to a robot.
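M-TAPS itself is not specified in the abstract, but the underlying policy-shaping idea, turning per-teacher feedback tallies into a distribution over actions and combining teachers multiplicatively, can be sketched as below, loosely following Griffith et al.'s Advise formulation; the activation mechanism and exploration strategy that distinguish M-TAPS are omitted.

```python
# A hedged sketch of combining feedback from multiple teachers via policy
# shaping; details specific to M-TAPS are not reproduced here.
import numpy as np

def teacher_action_probs(delta, consistency):
    """P(action is optimal | one teacher's feedback). `delta` is the
    per-action tally of positive minus negative feedback; `consistency`
    is the assumed probability that the teacher's feedback is correct."""
    num = consistency ** delta
    return num / (num + (1.0 - consistency) ** delta)

def combined_policy(deltas_per_teacher, consistencies):
    """Multiply the per-teacher distributions, then renormalize."""
    probs = np.ones(len(deltas_per_teacher[0]))
    for delta, c in zip(deltas_per_teacher, consistencies):
        probs *= teacher_action_probs(np.asarray(delta, dtype=float), c)
    return probs / probs.sum()

# Two teachers with different preferences over three actions:
print(combined_policy([[3, -1, 0], [0, 1, 4]], [0.9, 0.8]))
```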
Especially in medical interactions with robots, estimating people's level of trust is critical. The first human clinical trials with a blood sampling robot have shown promising benefits for both patients and healthcare workers, as a robot provides higher accuracy and quick results. An automated solution for blood drawing is therefore preferable, but it is unclear under what circumstances people would be willing to use a blood sampling robot. This study therefore investigates people's perception of such a robot, and whether speech and gaze have a positive effect on their willingness to interact with the robot. A survey was conducted that shows that the perception of the blood sampling robot was more positive if the robot provided transparency through speech, and that the perception of the robot was more negative if the robot only displayed eye-gaze (without speech). The results also suggest that there generally is a positive attitude towards, and willingness to use a blood sampling robot, at least in the population investigated.
Emotional expression plays a very important role in human-robot interaction and can greatly improve the quality of interaction between humans and robots. In the case of non-verbal communication, emotional expression can greatly shorten the social distance between humans and robots. However, in practical HRI applications, efficiency and robustness are also important considerations. In this paper, we demonstrate the impact of implicit non-verbal communication on the efficiency and robustness of human-robot teamwork. We designed an interactive experiment using a collaborative Sawyer robot, in which the robot plays a tic-tac-toe game with a person, and we judged whether there was a positive or negative effect by comparing the effects of various emotional factors. Our experiments demonstrated that emotional expression through non-verbal communication has a positive impact on efficiency and robustness in a collaborative human-robot task.
In this study, we observed non-verbal behavior in diversely skilled groups participating in a collaborative educational game with a humanoid robot. Research has indicated that a mediating robot gaze can equalize the verbal contributions from each differently skilled participant, promoting inclusion and learning. The experiment results were further analyzed, extending to non-verbal effects. The initial results from two experiments under different robot gaze behaviors indicate that modifications in the robot's gaze can lead to different gaze behavior in participants. It was observed that a gaze-mediating behavior in the robot led to increased gaze-change frequency among participants as well as more time spent mirroring the robot's gaze. These initial results show promise in how a robot can balance attention in a collaborative learning environment.
In this study, we examine whether the data requirements associated with training a system to recognize multiple 'levels' of an internal state can be reduced by training systems on the 'extremes' in a way that allows them to estimate 'intermediate' classes as falling in between the trained extremes. Specifically, this study explores whether a novel recurrent neural network, the Legendre Delay Network, added as a pre-processing step to a Multi-Layer Perceptron, produces an output which can be used to separate an untrained intermediate class of task engagement from the trained extreme classes. The results showed that identifying untrained classes after training on the extremes is feasible, particularly when using the Legendre Delay Network.
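For readers unfamiliar with the Legendre Delay Network, the sketch below shows the standard continuous-time state-space form of the LDN (as in Voelker et al.'s Legendre Memory Unit work) used as a fixed preprocessing step whose memory state would be fed to the MLP; the order, window length, and Euler discretization are assumptions, not the paper's settings.

```python
# A minimal sketch of an LDN memory used as an MLP preprocessing step.
import numpy as np

def ldn_matrices(order, theta):
    """Continuous-time (A, B) for an LDN with window length theta."""
    i = np.arange(order)
    sign = np.where(i[:, None] < i[None, :], -1.0,
                    (-1.0) ** (i[:, None] - i[None, :] + 1))
    A = sign * (2 * i[:, None] + 1)
    B = (2 * i + 1) * (-1.0) ** i
    return A / theta, B / theta

def ldn_encode(signal, order=6, theta=1.0, dt=0.01):
    """Roll a 1-D signal through the LDN with Euler steps; the final
    memory state m is the feature vector handed to the MLP."""
    A, B = ldn_matrices(order, theta)
    m = np.zeros(order)
    for u in signal:
        m = m + dt * (A @ m + B * u)
    return m

features = ldn_encode(np.sin(np.linspace(0, 6 * np.pi, 600)))
print(features)
```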
Child and family care professionals in the Netherlands are facing challenges including high workloads. Technological support could be beneficial in this context, e.g. for education, motivation and guidance of the children. For example, the Dutch Child and Family Center explores the possibilities of social robot assistance in their regular care pathways. To study the use of social robots in this broad context, we started by drafting three example scenarios based on the expertise of child care professionals. During an exploration phase, we are identifying key design and application requirements through focus groups with child care professionals and parents. Later stages of our research, the testing phase, will focus on testing these requirements via scenario-based design and child-robot interaction experiments in real-world contexts to further shape the application of social robots in various child and family care settings.
Robots' spatial positioning is a useful communication modality in social interactions. For example, in the context of group conversations, certain types of positioning signal membership to the group interaction. How does robot embodiment influence these perceptions? To investigate this question, we conducted an online study in which participants observed renderings of several robots in a social environment, and judged whether the robots were positioned to take part in a group conversation with other humans in the scene. Our results suggest that robot embodiment can influence perceptions of conversational group membership. An important factor to consider in this regard is whether robot embodiment leads to a discernible orientation for the agent.
The collaboration between humans and autonomous AI-driven robots in industrial contexts is a promising vision that will have an impact on the sociotechnical system. Taking research from the field of human teamwork as guiding principles, as well as results from human-robot collaboration studies, this study addresses open questions regarding the design and impact of communicative transparency and behavioral autonomy in human-robot collaboration. In an experimental approach, we tested whether an AI narrative and communication panels of a robot arm trigger the attribution of more human-like traits and expectations, going along with a changed attribution of blame and failure in a flawed collaboration.
Hand hygiene has become an important part of our lives. In order to encourage people to sanitize their hands, a hand sanitizer robot was developed and tested. It drove around in the main entrance hall of a university and reminded people to use hand sanitizer using speech. An initial pilot study (N=196) using ethnographic observation and interviews revealed that the robot's speed and its ability to speak may potentially influence people's willingness to use the robot. In the main study (N=351), the robot was therefore tested with two variables, speech and speed, to find out how the robot is most efficient in engaging participants; an efficient robot is a robot that people use. In particular, we studied to what extent the speed of the hand sanitizer robot and the way the robot addresses people verbally affect whether people use it. These research questions were addressed in four conditions. The results show that a robot that moves at a slower speed is regarded as more trustworthy, and that friendly speech can be useful to remind people to use hand sanitizer.
Teleoperation requires both a robust set of controls and the right balance of sensory data to allow task completion without overwhelming the user. Previous work has mostly focused on using depth cameras, which fail to provide situational awareness. We have developed a teleoperation system that integrates 360° camera data in addition to the more standard depth data. We infer depth from the 360° camera data and use that to render it in VR, which allows for six-degree-of-freedom viewing. We use a virtual gantry control mechanism and also provide a menu with which the user can choose which rendering schemes to render the robot's environment with. We hypothesize that this approach will increase the speed and accuracy with which the user can teleoperate the robot.
To enable robots to select between different types of nonverbal behavior when accompanying spatial language, we must first understand the factors that guide human selection between such behaviors. In this work, we argue that to enable appropriate spatial gesture selection, HRI researchers must answer four questions: (1) What are the factors that determine the form of gesture used to accompany spatial language? (2) What parameters of these factors cause speakers to switch between these categories? (3) How do the parameterizations of these factors inform the performance of gestures within these categories? and (4) How does human generation of gestures differ from human expectations of how robots should generate such gestures? In this work, we consider the first three questions and make two key contributions: (1) a human-human interaction experiment investigating how human gestures transition between deictic and non-deictic under changes in contextual factors, and (2) a model of gesture category transition informed by the results of this experiment.
Many supermarket employees, such as shelf and warehouse workers, suffer from musculoskeletal disorders. Exoskeletons, that is, physical assistance systems that are worn on the body, could help. The conditions under which workers would be willing to use this new wearable technology, however, remain largely unclear. In this exploratory field study, 58 supermarket employees tested one or more out of five passive exoskeletons during their regular work. Perceived wearing comfort, extent of strain relief, and task-technology fit (i.e., how well the exoskeleton fit their current task requirements) were found to correlate significantly with post-trial intention to use. Soft exoskeletons were rated as preferable to rigid ones. Trying one of the latter also resulted in lower intention to use, which was revealed to be fully mediated by a better task-technology fit being ascribed to soft exoskeletons. Practical relevance of the results, study limitations and future research directions are discussed.
In this paper we present the results from a study in which participants (n=26, aged 6-9) were exposed to two different educational robotics (ER) systems, one based on tangible tile-based programming and one on visual block-based programming. During the transition from the first to the second system, mediated transfer of knowledge regarding computational concepts was observed. Furthermore, the participants' computational thinking (CT) skills were likewise observed to improve throughout the study, across both ER systems.
Recent research shows, somewhat astonishingly, that people are willing to ascribe moral blame to AI-driven systems when they cause harm. In this paper, we explore the moral-psychological underpinnings of these findings. Our hypothesis was that the reason why people ascribe moral blame to AI systems is that they consider them capable of entertaining inculpating mental states (what is called mens rea in the law). To explore this hypothesis, we created a scenario in which an AI system runs a risk of poisoning people by using a novel type of fertilizer. Manipulating the computational (or quasi-cognitive) abilities of the AI system in a between-subjects design, we tested people's willingness to ascribe knowledge of a substantial risk of harm (i.e., recklessness) and blame to the AI system. Furthermore, we investigated whether the ascription of recklessness and blame to the AI system would influence the perceived blameworthiness of the system's user (or owner). In an experiment with 347 participants, we found (i) that people are willing to ascribe blame to AI systems in contexts of recklessness, (ii) that blame ascriptions depend strongly on the willingness to attribute recklessness, and (iii) that the latter, in turn, depends on the perceived "cognitive" capacities of the system. Furthermore, our results suggest (iv) that the higher the computational sophistication of the AI system, the more blame is shifted from the human user to the AI system.
We present the first experiment analyzing the effectiveness of robot-generated mixed reality gestures using real robotic and mixed reality hardware. Our findings demonstrate how these gestures increase user effectiveness by decreasing user response time during visual search tasks, and show that robots can safely pair longer, more natural referring expressions with mixed reality gestures without worrying about cognitively overloading their interlocutors.
This paper presents preliminary research on whether children will accept a robot as part of their ingroup, and on how a robot's group membership affects trust, closeness, and social support. Trust is important in human-robot interactions because it affects if people will follow robots' advice. In this study, we randomly assigned 11- and 12-year-old participants to a condition such that participants were either on a team with the robot (ingroup) or were opponents of the robot (outgroup) for an online game. Thus far, we have eight participants in the ingroup condition. Our preliminary results showed that children had a low level of trust, closeness, and social support with the robot. Participants had a much more negative response than we anticipated. We speculate that there will be a more positive response with an in-person setting rather than a remote one.
In this paper, we present a study in which a robot initiates interactions with people passing by in an in-the-wild scenario. The robot adapts the loudness of its voice dynamically to the distance of the respective person approached, thus indicating who it is talking to. It furthermore tracks people based on information on body orientation and eye gaze and adapts the text produced based on people's distance autonomously. Our study shows that the adaptation of the loudness of its voice is perceived as personalization by the participants and that the likelihood that they stop by and interact with the robot increases when the robot incrementally adjusts its behavior.
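As a toy illustration of the loudness-adaptation mechanism described above, the mapping below scales speech volume with the tracked distance of the addressed person; the distance bounds and volume range are hypothetical, chosen only to show the shape of such a policy.

```python
# A toy distance-to-volume mapping (all parameter values are assumptions).
def volume_for_distance(d_m, near=0.5, far=4.0, v_min=0.3, v_max=1.0):
    """Linearly interpolate speech volume between v_min (near) and
    v_max (far), clamped outside the [near, far] range."""
    t = min(max((d_m - near) / (far - near), 0.0), 1.0)
    return v_min + t * (v_max - v_min)

for d in (0.4, 1.0, 2.5, 5.0):
    print(d, round(volume_for_distance(d), 2))
```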
The use of robots for the automation of the supply of food and beverages is a commercially attractive and modern application of robotic technologies. Such innovative technologies are seen as helping to renew the image of a service and, in this way, stimulate people's curiosity. The novelty effect linked to the experience of a new technology, however, has a very limited duration, and it is not suitable for guaranteeing user loyalty. Consequently, the need for continuous renewal to keep the commercial proposal attractive becomes very expensive. In this paper, we present the architecture of a new project, called Bartending Robot for Interactive Long-Lasting Operations (BRILLO), which aims to create a long-lasting operational robotic system that is able to have personalised multi-user interactions and to work as a bartender by performing different tasks according to the users' requests and preferences.
In this paper, we proposed rich feedback, which combines multiple types of feedback to allow human teachers to provide a variety of useful information to the learning agent, and modified Policy Shaping to accumulate the effects of rich feedback. We then designed the ALPHA framework to actively request rich feedback and further developed it to use deep learning. The experimental results showed that ALPHA with rich feedback can greatly improve learning and quickly lead the learning agent to the optimal solution.
This experimental study evaluates the effects of coaching people into behavior change with a simulation of the social robot Haru. In order to support participants in their attempts to change their behavior and to create a new habit, a coaching session was created based on the 'Tiny Habits' method developed by BJ Fogg. This coaching session was presented to a total of 41 participants in three conditions. In Condition 1, the dialogue between the participant and the simulated robot was interspersed with emotional expressions and behaviors such as dancing, bowing, and vocalizing. Condition 2 used the same set-up with the robot simulator and provided participants with the same guidance, using the same synthesized voice as Condition 1, but without any emotional elements. The third condition was created to evaluate the effect of using a robot as a session coach by comparing the two conditions with Haru to a condition in which the same content was presented without a robot: the same script as in the two robot conditions was presented as text on a website, divided into sections reflecting the human-robot dialogue in the two robot conditions. Data from a post-session questionnaire were supplemented by another questionnaire administered 10 days later that focused on habit retention. Participants from the session with the robot that used emotional behaviors felt significantly more confident that they would incorporate their behavior change into their lives and thought differently about behavior change. People participating in the session with a robot simulation also had a significantly higher retention rate for their behavior change, revealing a positive effect of the social robot.
Our project explores whether multiple robots can be deployed as a therapeutic tool to help children with Autism Spectrum Disorder (ASD) practice social interaction and collaboration. We implemented a human-robot interaction (HRI) solution using two different assistive social robots; the NAO humanoid robot and the Cozmo robot. We have several modules available through an iOS app that we developed for children with ASD to engage with these robots in scenarios focused on social interaction and collaboration. We are also in the process of refining an intelligent web interface so that we, as well as therapists and caregivers, can access relevant data collected by our HRI software. The web interface is designed to help caregivers and therapists, along with our team, monitor their child's or patient's progress, and understand which methods work best. The overall goal is to conduct an HRI user study with children with ASD to test how our software solution performs.
Advances in the capabilities of technologies like virtual reality (VR), and their rapid proliferation at consumer price points, have made it much easier to integrate them into existing robotic frameworks. VR interfaces are promising for robotics for several reasons, including that they may be suitable for resolving many of the human performance issues associated with traditional robot teleoperation interfaces used for robot manipulation. In this systems-focused paper, we introduce and document the development of a VR-based robot control paradigm with a manipulation-assist control algorithm, which allows human operators to specify larger manipulation goals while leaving the low-level details of positioning, manipulation, and grasping to the robot itself. For the community, we also describe the system design challenges encountered in our progress thus far.
In designing collaborative robots, it is of the utmost importance to do so with safety in mind. Most current commercial collaborative robots have numerous built-in safety features to minimize danger to humans. When such robots are placed in public settings, not only the actual safety mechanisms but also the perception of safety plays a crucial role in the success of their deployment. An interactive robotic art installation is a useful site to explore the perceived safety of a robot. This article presents the initial results of a study on the impact robot faces have on perceived safety in an interactive setting with untrained participants.
As the prevalence of stroke survivors increases, the demand for rehabilitative services will rise. While there has been considerable development in robotics to address this need, few systems consider individual differences in ability, interests, and learning. Robots need to provide personalized interactions and feedback to increase engagement, enhance human motor learning, and ultimately, improve treatment outcomes. In this paper, we present 1) our design process of an embodied, interactive robotic system for post-stroke rehabilitation, 2) design considerations for stroke rehabilitation technology and 3) a prototype to explore how feedback mechanisms and modalities affect human motor learning. The objective of our work is to improve motor rehabilitation outcomes and supplement healthcare providers by reducing the physical and cognitive demands of administering rehabilitation. We hope our work inspires development of human-centered robots to enhance recovery and improve quality of life for stroke survivors.
Although research has demonstrated the potential for social robots to positively impact a person's mood and provide comfort, very little research has yet focused on social robots supporting people living with loneliness. Much of the relevant human-robot interaction work focuses on more serious situations such as living with dementia, or on related areas such as stress, anxiety, or depression, and these works generally target the older adult demographic. Loneliness, however, can affect anyone of any health and age. In this paper we present a summary review of the current research on loneliness and social robots, highlighting the gaps in research and the potential opportunity for more work in the area.
Navigating through an unknown and perhaps resource-constrained environment, such as Lunar terrain or a disaster region, can be both physically and cognitively exhausting. Difficulties during navigation can cost time, operational resources, or even a life. To this end, the interaction with a robotic exploration system in lunar or Martian environments could be key to successful exploration extravehicular activities (X-EVA). Through the use of augmented reality (AR) we can afford an astronaut various capabilities. In particular, we focus on two: (1) the ability to obtain and display information on their current position, on important locations, and on essential objects in an augmented space; and (2) the ability to control an exploratory robot system, or smart robotic tools, using AR interfaces. We present our ongoing development of such AR robot control interfaces and the feedback system being implemented. This work extends the augmented reality robot navigation and audio spatial feedback components presented at the 2020 National Aeronautics and Space Administration (NASA) SUITS Challenge.
Humanoid robot workers that can improve the joy and achievement of workers with intellectual and developmental disabilities (IDD) hold promise in light manufacturing settings. In this paper, we provide details of an architecture to support social scaffolding for workers with IDD and of efforts to adapt it to learn. This architecture is developed for human-robot interaction using the Pepper robot and will support future improvements using machine learning. Additional recommendations based on past experimentation are given for future work.
Communication is integral to knowledge transfer in human-human interaction. To inform effective knowledge transfer in human-robot interaction, we conducted an observational study to better understand how people use gaze and other backchannel signals to ground their mutual understanding of task-oriented instruction during learning interactions. Our results highlight qualitative and quantitative differences in how people exhibit and respond to gaze, depending on motivation and instructional context. The findings of this study inform future research that seeks to improve the efficacy and naturalness of robots as they communicate with people as both learners and instructors.
In this work, we explore whether robots can exert their persuasive influence to encourage others to follow new proxemic norms (i.e., COVID-19 social distancing guidelines). Our results suggest that social robots are not effective for this purpose, and, in fact, when some persuasive strategies are used, this approach might backfire due to novelty effects that encourage pedestrians to approach and cluster around such robots.
Emotions are reactions that can be expressed through a variety of social signals. For example, anger can be expressed through a scowl, narrowed eyes, a long stare, or many other expressions. This complexity is problematic when attempting to recognize a human's expression in a human-robot interaction: categorical emotion models used in HRI typically use only a few prototypical classes, and do not cover the wide array of expressions in the wild. We propose a data-driven method towards increasing the number of known emotion classes present in human-robot interactions, to 28 classes or more. The method includes the use of automatic segmentation of video streams into short (<10s) videos, and annotation using the large set of widely-understood emojis as categories. In this work, we showcase our initial results using a large in-the-wild HRI dataset (UE-HRI), with 61 clips randomly sampled from the dataset, labeled with 28 different emojis. In particular, our results showed that the "skeptical" emoji was a common expression in our dataset, which is not often considered in typical emotion taxonomies. This is the first step in developing a rich taxonomy of emotional expressions that can be used in the future as labels for training machine learning models, towards more accurate perception of humans by robots.
In healthcare settings, social robots have demonstrated positive effects on adherence to procedures and on cognitive skills development. This paper explores the effects of a social robot during an introductory phase of Cerebral Palsy rehabilitation. A human-robot interface was deployed to promote interaction with the children through 10 activities, including a presentation stage and imitation games. The interface also aims to make the social robot easier for therapists to use and to let them control the robot's verbal and non-verbal gestures. The interaction was measured through joint attention, attitudes, and the following of instructions. A total of 10 children participated in this study. The results suggest that 80% of the participants have a joint attention rate of 70%, and that they largely accomplish the requests given by the robot. These preliminary findings show a positive effect of the robot on the children.
Recent advances in robot capabilities have led to a growing consensus that robots will eventually be deployed at scale across numerous application domains. An important open question is how humans and robots will adapt to one another over time. In this paper, we introduce the model-based Theoretical Human-Robot Scenarios (THuS) framework, capable of elucidating the interactions between large groups of humans and learning robots. We formally establish THuS, and consider its application to a human-robot variant of the n-player coordination game, demonstrating the power of the theoretical framework as a tool to qualitatively understand and quantitatively compare HRI scenarios that involve different agent types. We also discuss the framework's limitations and potential. Our work provides the HRI community with a versatile tool that permits first-cut insights into large-scale HRI scenarios that are too costly or challenging to carry out in simulations or in the real-world.
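The abstract does not spell out the payoff structure of its human-robot coordination variant, but a plain n-player coordination game, the setting THuS is applied to, can be sketched as follows; treating each agent's reward as the share of other agents who matched its chosen convention is an illustrative assumption, not the THuS formalism.

```python
# An illustrative n-player coordination payoff; not the THuS formalism.
from collections import Counter

def coordination_payoffs(choices):
    """choices: agent id -> chosen convention. Each agent's payoff is the
    fraction of *other* agents (human or robot) who made the same choice."""
    counts = Counter(choices.values())
    n = len(choices)
    return {agent: (counts[c] - 1) / (n - 1) for agent, c in choices.items()}

print(coordination_payoffs({"h1": "A", "h2": "A", "r1": "B", "r2": "A"}))
```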
One focus of augmented reality (AR) in robotics has been on enriching the interface for human-robot interaction. While such an interface is often made intuitive to interact with, it invariably introduces novel objects into the environment. In situations where the human already has a focus, such as in a human-robot teaming task, these objects can potentially overload our senses and lead to degraded teaming performance. In this paper, we propose using AR objects solely to augment natural objects, to avoid disrupting our natural senses while adding critical information about the current situation. In particular, our case study focuses on addressing the limited field of view of humans by incorporating persistent virtual shadows of robots for maintaining situation awareness in proximal human-robot teaming tasks.
Socially-aware navigation seeks to codify the rules of human-human and human-robot proxemics using formal planning algorithms. However, the rules that define these proxemic systems are highly sensitive to a variety of contextual factors. Recently, human proxemic norms have been heavily influenced by the COVID-19 pandemic, and the guidelines put forth by the CDC and WHO encouraging people to maintain six feet of social distance. In this paper, we present a study of observer perceptions of a robot that not only follows this social distancing norm, but also leverages it to implicitly communicate disapproval of norm-violating behavior. Our results show that people can relate a robot's social navigation behavior to COVID safety protocols, and view robots that navigate in this way as more socially intelligent and safe.
While recent work on gesture synthesis in the agent and robot literature has treated gesture as co-speech and thus dependent on verbal utterances, we present evidence that gesture may leverage model context (i.e., the navigational task) and is not solely dependent on the verbal utterance. This effect is particularly evident within ambiguous verbal utterances. Decoupling this dependency may allow future systems to synthesize clarifying gestures that disambiguate the verbal utterance, while enabling research into better understanding the semantics of gesture. We bring together evidence from our own experiences in this domain that allows us to see, for the first time, what kinds of end-to-end models need to be developed to synthesize gesture for one-shot interactions while still preserving user outcomes and allowing for ambiguous utterances by the robot. We discuss these issues within the context of "cardinal direction gesture plans", which represent instructions that refer to the actions the human must follow in the future.
Accurate pain assessment and management is particularly important in children exposed to prolonged or repeated acute pain, including procedural pain, because of the elevated risk for adverse outcomes such as traumatic medical stress, an intensified pain response to subsequent pain, and the development of chronic pain. Our work in progress aims to support pain management in children by developing intelligent, adaptive humanoid robots as a multi-modal non-pharmacological intervention. It extends the interactive capabilities of Nao humanoid robots by using the camera and microphone to assess pain and emotion in children undergoing procedural treatment, combining detection models for facial expression and voice quality, and adapting the robot's verbal and non-verbal interactive responses accordingly for optimal distraction through adaptive behavioral models. By combining two different methods of obtaining emotion-predictive probabilities, from facial expression and stressed-speech data, we predict an emotion label. This label is then used as an environment input to a reinforcement learning model, with the robot as the agent, to choose the best action out of a set of entertaining and distracting verbal and non-verbal actions to cheer up the child and distract them from the pain and fear of the medical procedure.
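The final step described, using the fused emotion label as the state of a reinforcement learning agent that picks a distracting action, can be sketched with tabular Q-learning; the emotion and action sets, reward signal, and learning parameters below are all assumptions standing in for the authors' adaptive behavioral models.

```python
# A hedged sketch: emotion label as RL state, distraction actions as choices.
import random
from collections import defaultdict

EMOTIONS = ["calm", "anxious", "distressed"]                 # assumed labels
ACTIONS = ["tell_joke", "play_song", "dance", "encourage"]   # assumed actions

Q = defaultdict(float)                # Q[(emotion, action)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def choose_action(emotion):
    """Epsilon-greedy selection of a distracting behavior."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(emotion, a)])

def update(emotion, action, reward, next_emotion):
    """Reward would come from the child's affect improving after the action."""
    best_next = max(Q[(next_emotion, a)] for a in ACTIONS)
    Q[(emotion, action)] += alpha * (reward + gamma * best_next
                                     - Q[(emotion, action)])
```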
This work presents the ideation and preliminary results of using contextual information, together with information about the objects present in the scene, to query applicable social navigation rules for the sensed context. Prior work in Socially-Aware Navigation (SAN) shows its importance in human-robot interaction, as it improves interaction quality, safety, and the comfort of the interacting partner. In this work, we are interested in the automatic detection of social rules in SAN, and we present the three major components of our method: a Convolutional Neural Network-based context classifier that can autonomously perceive contextual information using camera input; YOLO-based object detection to localize objects within a scene; and a knowledge base relating social rules to these concepts, queried using both the context and the objects detected in the scene. Our preliminary results suggest that our approach can observe an ongoing interaction, given an image input, and use that information to query the social navigation rules required in that particular context.
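The querying component lends itself to a small sketch: the context classifier's label and the detector's object list act as keys into a knowledge base of rules. All labels and rules below are illustrative placeholders, not the paper's knowledge base.

```python
# A sketch of querying social navigation rules from context + objects.
CONTEXT_RULES = {
    "hallway": ["keep right", "do not block the passage"],
    "waiting_room": ["keep low speed", "do not cut through groups"],
}
OBJECT_RULES = {
    "wheelchair": ["leave extra clearance"],
    "tv": ["avoid crossing the line of sight to the screen"],
}

def applicable_rules(context_label, detected_objects):
    """Union of rules triggered by the sensed context and detected objects."""
    rules = list(CONTEXT_RULES.get(context_label, []))
    for obj in detected_objects:
        rules.extend(OBJECT_RULES.get(obj, []))
    return rules

print(applicable_rules("hallway", ["wheelchair", "tv"]))
```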
As human-robot interaction becomes more sophisticated, measuring the performance of a social robot is crucial to gauging the effectiveness of its behavior. However, social behavior does not necessarily have the strict performance metrics that other autonomous behavior can have. Indeed, when considering robot navigation, a socially appropriate action may be one that is sub-optimal, resulting in longer paths or longer times to reach a goal. Instead, we can rely on subjective assessments of the robot's social performance by a participant in a robot interaction or by a bystander. In this paper, we use the newly validated Perceived Social Intelligence (PSI) scale to examine the perception of non-humanoid robots in non-verbal social scenarios. We show that there are significant differences between the perceived social intelligence of robots exhibiting SAN behavior and those using a traditional navigation planner in scenarios such as waiting in a queue and group behavior.
This paper describes the design of an interactive game between humans and a robot that makes it possible to observe, analyze, and model competitive strategies and affective interactions, with the aim of dynamically generating appropriate responses or initiations by the robot. We followed an iterative design process with several pilot evaluations to define the requirements for a game with a theme, mechanics, and rules that motivate a choice between competition and cooperation and provoke emotional reactions even after repeated games. The game is also designed to be easily understood by humans and unambiguously interpreted by machines. Overall, we aim to make the Chef's Hat card game a standard platform for the development of cooperative/competitive and emotionally aware agents, enabling embodied interaction between multiple humans and robots.
Social robots have started taking on storytelling, an age-old human tradition. However, the narrator's voice is central to storytelling, and a robot cannot match the capabilities of human voice modulation, which might affect the listener's perception of the robot. Using a robot with a gendered voice as a medium for storytelling, we take a first step towards identifying the effects of a narrator's voice on anthropomorphism. We examine the robot's perceived anthropomorphism and the influence of its voice (female, male, or neutral) on recipients' attitude towards the robotic storyteller, including gender and cross-gender effects. In addition, transportation, as an indicator of storytelling quality, is investigated. We found no significant effects, either for attitudes toward the robot or for transportation. Our gender-based voice manipulation did not affect the storytelling process. A lack of anthropomorphism of the robot may explain these findings and should be investigated in further studies.
In this work, we present a novel human-robot interaction (HRI) method to detect and engage passive subjects in multiparty conversations using a humanoid robot. Voice activity detection and speaker localization are combined with facial recognition to detect and identify non-participating subjects. Once a non-participating individual is identified, the robot addresses the subject with a fact related to the topic of the conversation, with the goal of promoting the subject to join the conversation. To prompt sentences related to the topic of the conversation, automatic speech recognition and natural language processing techniques are employed. Preliminary experiments demonstrate that the method successfully identifies and engages passive subjects in a conversation.
This article presents the use of a multi-layer perceptron neural network to predict whether one person in a group is being interactive or not, based on the social signals of the other group members. Interactivity state (as manually annotated post hoc) was correctly predicted with 60% accuracy when using the person's own social signals (self state), but with a higher accuracy of 65% when instead using social signals from the surrounding group members, excluding the target person (group members state). These results are preliminary due to the limits of our dataset (a micro-dataset of 6 participants, of which 3 are in frame, playing the social game Mafia, with 734 frames). A post-hoc factor analysis reveals that facial action units and the distance between the target person and the group members are the key features to consider when estimating interactivity state from surrounding social peers.
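The classifier itself is a standard supervised setup and can be sketched as below; the feature layout (per-frame facial action units plus interpersonal distances of the surrounding group members) follows the factor analysis mentioned above, but the dimensions, network size, and the random data are placeholders.

```python
# A stand-in for the described MLP: predict a target person's interactivity
# state from the other group members' social signals (placeholder data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(734, 20))    # assumed: peers' AUs + distances per frame
y = rng.integers(0, 2, size=734)  # annotated interactivity of target person

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")  # ~chance on noise
```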
The complexity of the tasks autonomous robots can tackle is constantly increasing, yet we seldom see robots interacting with humans to perform tasks. Indeed, humans are either asked for occasional help or given the lead on the whole task. We propose a human-aware task planning approach allowing the robot to plan for a task while also considering and emulating human decision, action, and reaction processes. Our approach is based on the exploration of multiple hierarchical task networks, handled differently depending on whether the agent is considered controllable (the robot) or uncontrollable (the human(s)). We present the rationale of our approach along with a formalization and show its potential on an illustrative example involving the assembly of a table by a robot and a human.
The objective of this paper is to develop a real-time, depth-sensing surveillance method for factories that require human operators to complete tasks alongside collaborative robots. Traditionally, collision detection and analysis have been achieved with extra sensors attached to the robot to detect torque or current. In this study, a novel method using 3D object detection and raw 3D point-cloud data is proposed to ensure safety by deriving the change in distance between humans and robots from depth maps. By avoiding the potential delay associated with extra sensor-based data, both the likelihood and the severity of collaborative-robot-induced injuries are expected to decrease.
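The core safety check, deriving human-robot separation directly from depth-derived point clouds, can be sketched as a nearest-neighbor query between the two clouds; the segmentation into human and robot points (via the 3D object detector) and the threshold value are assumed here.

```python
# A sketch of per-frame human-robot separation from 3D point clouds.
import numpy as np
from scipy.spatial import cKDTree

def min_separation(human_pts, robot_pts):
    """Minimum nearest-neighbor distance between (N,3) and (M,3) clouds."""
    dists, _ = cKDTree(robot_pts).query(human_pts, k=1)
    return float(dists.min())

def collision_risk(human_pts, robot_pts, threshold_m=0.5):
    """Flag the frame if the clouds come closer than the threshold."""
    return min_separation(human_pts, robot_pts) < threshold_m

human = np.random.rand(1000, 3)
robot = np.random.rand(500, 3) + np.array([1.5, 0.0, 0.0])
print(min_separation(human, robot), collision_risk(human, robot))
```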
The level of social cognition in artificial agents depends on their ability to identify and interpret the world surrounding them. Therefore, one design objective when creating artificial systems, such as a social robot, is to give them the ability to identify and interpret the transaction of social signals during social interactions. This is, however, an open research problem, often seen as a wicked problem. One of the key difficulties is to properly frame the problem, as social interactions are complex, highly dynamic, and not easily formalised. In this short article, we present one situation (extracted from the small dataset of social interactions that we recorded) that illustrates in one snapshot the complexity of social situation assessment and might help frame the problem appropriately.
Relationships are crucial for human existence. People form relationships with other humans, pets, objects, and places. We argue that the nature of the human-SAR (socially assistive robot) relationship changes with the context of use and the level of interaction. Therefore, context and interaction must be incorporated into the design requirements. Earlier studies identified design-related preference differences among users, depending on their personal characteristics and on their role in specific contexts. To align the robot's visual qualities (VQs) with users' expectations, we propose two human-SAR relationship models: a context-based (Situational) model and an interaction-based (Dynamic) model. Together with the evaluation of VQs, these models aim to guide industrial designers in the design process of new SARs. An evaluation method and preliminary findings are presented.
As industrial robots and social robots become prevalent in commercial and home settings it is crucial to improve forms of communication with human collaborators and companions. In this work, I describe the use of musical improvisation to generate emotional musical prosody for improved human-robot interaction. This aims to develop a canny approach, where robots perform in a mechanomorphic manner improving collaboration opportunities with humans. I have currently collected a new 12-hour dataset and developed a Conditional Variational Autoencoder to generate new phrases. Generations have then been used to compare the impact of prosody on anthropomorphism, animacy, likeability, perceived intelligence, and trust. Future work will incorporate prosody into groups of robots and humans, using personality to drive emotional decisions and emotion contagion.
Through a critical design approach, I suggest new perspectives on social drones, particularly companion drones. Supported by philosophies such as slow technology, I propose the design of anti-solutionist ritual drones and the study of their impact on the lives of users, particularly in domestic contexts. I intend to fill some of the methodological gaps identified, such as longitudinal studies in drone user experience through ethnography and auto-ethnography. I propose a "Research through Design" process of custom domestic probes for children and their families.
11% of adults report experiencing cognitive decline which can impact memory, behavior, and physical abilities. Robots have great potential to support people with cognitive impairments, their caregivers, and clinicians by facilitating treatments such as cognitive neurorehabilitation. Personalizing these treatments to individual preferences and goals is critical to improving engagement and adherence, which helps improve treatment efficacy. In our work, we explore the efficacy of robot-assisted neurorehabilitation and aim to enable robots to adapt their behavior to people with cognitive impairments, a unique population whose preferences and abilities may change dramatically during treatment. Our work aims to enable more engaging and personalized interactions between people and robots, which can profoundly impact robot-assisted treatment, how people receive care, and improve their everyday lives.
Most previous work on enabling robots' moral competence has used norm-based systems of moral reasoning. However, a number of limitations to norm-based ethical theories have been widely acknowledged. These limitations may be addressed by role-based ethical theories, which have been extensively discussed in the philosophy of technology literature but have received little attention within robotics. My work proposes a hybrid role/norm-based model of robot cognitive processes including moral cognition.
Visually impaired children are increasingly educated in mainstream schools following an inclusive educational approach. However, even though visually impaired (VI) and sighted peers sit side by side in the classroom, previous research showed a lack of participation of VI children in classroom dynamics and group activities. That leads to reduced engagement between VI children and their sighted peers and a missed opportunity to value and explore class members' differences. Robots, thanks to their physicality and their ability to perceive the world, behave socially, and act across a wide range of interactive modalities, can leverage mixed-visual-ability children's access to group activities while fostering their mutual understanding and social engagement. With this work, we aim to use social robots as facilitators to boost inclusive activities in mixed-visual-ability classrooms.
A team develops competency by progressive mutual adaptation and learning, a process we call co-learning. In human teams, partners naturally adapt to each other and learn while collaborating. This is not self-evident in human-robot teams. There is a need for methods and models for describing and enabling co-learning in human-robot partnerships. The presented project aims to study human-robot co-learning as a process that stimulates fluent collaborations. First, it is studied how interactions develop in a context where a human and a robot both have to implicitly adapt to each other and have to learn a task to improve the collaboration and performance. The observed interaction patterns and learning outcomes will be used to (1) investigate how to design learning interactions that support human-robot teams to sustain implicitly learned behavior over time and context, and (2) to develop a mental model of the learning human partner, to investigate whether this supports the robot in its own learning as well as in adapting effectively to the human partner.
Robots can use information from people to improve learning speed or quality. However, people can have short attention spans and misunderstand tasks. Our work addresses these issues with algorithms for learning from inattentive teachers that take advantage of feedback when people are present, and an algorithm for learning from inaccurate teachers that estimates which state-action pairs receive incorrect feedback. These advances will enhance robots' ability to take advantage of imperfect feedback from human teachers.
As robots enter our homes and work places, one of the roles they will have to fulfill is being a teammate. Prior approaches in human-robot teamwork enabled robots to reason about intent, decide when and how to help, and allocate tasks to achieve efficiency. However, these existing algorithms mostly focused on understanding intent and providing help and assumed that teamwork is always present. Overall, effective robotic teammates must be able to reason about the multi-dimensional aspects of teamwork. Working towards this challenge, we present empirical findings and an algorithm that enables robots to understand the human's intent, communicate their own intent, display effortful behavior, and provide help to optimize the team's task performance. In addition to task performance, people also care about being treated fairly. As part of future work, we propose an algorithm that reasons about task performance and fairness to achieve lasting human-robot partnerships.
Children with autism and their families could greatly benefit from increased support resources. While robots are already being introduced into autism therapy and care, we propose that these robots could better understand the child's needs and provide enriched interaction if they utilize touch. We present our plans, both completed and ongoing, for a touch-perceiving robot companion for children with autism. We established and validated touch-perception requirements for an ideal robot companion through interviews with 11 autism specialists. Currently, we are evaluating custom fabric-based tactile sensors that enable the robot to detect and identify various touch communication gestures. Finally, our robot companion will react to the child's touches through an emotion response system that will be customizable by a therapist or caretaker.
Play is a vital part of childhood; however, children with physical special needs face many obstacles in traditional play scenarios. We have developed MyJay, a robotic system that enables such children to play with their peers via a robot proxy in a basketball-like game. This semi-autonomous robot will feature adaptable controllers to allow children of any physical ability to play the proposed game, with a focus on fostering better child-child interaction.
Advances in perception and artificial intelligence technology are expected to lead to seamless interaction between humans and robots. Trust in robots has been evolving from the theory on trust in automation, with a fundamental difference: unlike traditional automation, robots could adjust their behaviors depending on how their human counterparts appear to be trusting them or how humans appear to be trustworthy. In this extended abstract I present my research on methods for processing trust in the particular context of interactions between a driver and an automated vehicle, which has the goal of achieving higher safety and performance standards for the team formed by those human and robotic agents.
Our aim is to advance the reliability of autonomous social navigation. We have researched how simulation may advance this goal via crowdsourcing. We recently proposed the Simulation Environment for Autonomous Navigation (SEAN) and deployed it at scale on the web to quickly collect data via the SEAN Experimental Platform (SEAN-EP). Using this platform, we studied participants' perceptions of a robot when seen in a video versus interacting with it in simulation. Our current research builds on this prior work to make autonomous social navigation more reliable by classifying and automatically detecting navigation errors.
Interactive Task Learning (ITL) is an approach to teaching robots new tasks through language and demonstration. It relies on the fact that people have experience teaching each other. However, this can be challenging if the human instructor does not have an accurate mental model of a robot. This mental model consists of the robot's knowledge, capabilities, shortcomings, goals, and intentions. The research question that we investigate is "How can the robot help the human build a better mental model of the robot?" We study human-robot interaction failures to understand the role of mental models in resolving them. We also discuss a human-centred interaction model design that is informed by human subject studies and plan-based theories of dialogue, specifically Collaborative Discourse Theory.
Fairness plays an important role in decision-making within teams, and its perception has been shown to drive performance and individual behavior among team members. Robots deployed within human teams are consistently faced with decisions on how to optimally allocate resources (e.g., tools, attention, gaze), but current solutions often ignore key aspects of fairness. In this work, we leverage laboratory experiments to identify key performance and behavioral metrics to further develop algorithmic solutions that include fairness considerations. We look to the well-established multi-armed bandit algorithms to frame our problem and establish constraints on how resources are distributed amongst team members.
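To make the framing concrete, a toy allocator in this spirit (an illustration under our own assumptions, not the authors' algorithm) can bolt a hard fairness floor onto the standard UCB1 index, guaranteeing each member a minimum share of the robot's attention:

import math, random

def choose_member(rewards, counts, t, min_share=0.15):
    # rewards[i]: summed reward observed for member i; counts[i]: times served so far.
    n = len(counts)
    # Fairness floor: serve anyone who has fallen below the guaranteed share.
    starved = [i for i in range(n) if counts[i] < min_share * t]
    if starved:
        return random.choice(starved)
    # Otherwise allocate to the member with the highest UCB1 index.
    return max(range(n), key=lambda i: rewards[i] / counts[i]
               + math.sqrt(2 * math.log(t) / counts[i]))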
Ergonomics and human comfort are essential concerns in physical human-robot interaction (p-HRI) applications such as teleoperation. We introduce a novel framework for posture estimation and optimization for ergonomically intelligent teleoperation systems that estimates the human operator's posture solely from the leader robot's trajectory and provides online postural correction and offline initial posture correction according to the type of teleoperation task. Although our framework is developed for teleoperation, it can be extended to other p-HRI applications with minimal modifications.
Robot learning from demonstration (LfD) is a common approach that allows robots to perform tasks after observing a teacher's demonstrations. Thus, users without a robotics background can use LfD to teach robots. However, such users may provide low-quality demonstrations, and demonstration quality plays a crucial role in robot learning and generalization. Hence, it is important to ensure quality demonstrations before using them for robot learning. This abstract proposes an approach for quantifying demonstration quality, which in turn enhances robot learning and generalization.
For many real-world robotics applications, robots need to continually adapt and learn new concepts. Further, robots need to learn from limited data because of the scarcity of labeled data in real-world environments. To this end, my research focuses on developing robots that continually learn in dynamic unseen environments/scenarios, learn from limited human supervision, remember previously learned knowledge, and use that knowledge to learn new concepts. I develop machine learning models that not only produce state-of-the-art results on benchmark datasets but also allow robots to learn new objects and scenes in unconstrained environments, which leads to a variety of novel robotics applications.
Audrey is a flower-like socially assistive robot (SAR) that aims to support older adults living alone in times of social isolation. Audrey aims to: first, reduce boredom and overcome loneliness through intellectual stimulation and games matched to the user's hobbies, preferences, and interests; second, facilitate staying in touch with loved ones, using video chats and active reminders for family members and friends to interact; third, promote healthy living by encouraging a healthy lifestyle and enabling early detection of strokes or falls for emergency aid.
We present the design of a mobile robot that delivers hand sanitizer on the Oregon State University campus. The goal is to encourage people to follow the best health practices under COVID-19. The current hardware involves a hands-free hand sanitizer dispenser mounted atop a TurtleBot base. A wizard teleoperates the robot to approach bystanders, communicating via its approach that it would like them to participate. Future work will evaluate what communication modes best serve this goal of distributing hand sanitizer in particular contexts, and consider distributing services to where there is the most human demand.
What do we want the most of in this COVID-19 social isolation period? - Companionship, entertainment, and popcorn. We present 'Crunchy': a Human-Popcorn Interaction (HPI) based social robotic movie-companion. Crunchy is designed with the intent of minimizing feelings of isolation and loneliness. Crunchy's personality and reactions are imagined to provide popping interactions, making the worst-rated comedies enjoyable or the scariest movies even more intense. We discuss interaction scenarios, initial concepts for Crunchy's design, and future plans to either pursue developing Crunchy or using it to inform the development of prospective desktop social robots.
The COVID pandemic has impacted our lives in ways that we could not have imagined, introducing us to a new normalcy. Older adults, especially those in care facilities, are often discussed as one of the populations most vulnerable to the challenges that the pandemic poses, including social isolation and loneliness. We hence present our robot, M.A.R (Music Assistive Robot) i/o, intended to bring some joy and entertainment through a universal language - music. Our prototype attempts to facilitate music participation among older adults by playing the music of their choice and inviting them to enjoy listening through its expressive movements and adaptive drumming skills.
During lockdown, it's tempting to order takeout or heat up a frozen pizza. We naturally turn to food for comfort, but as lockdown continues it is easy to lose track of healthy habits. Our robot will encourage users to try new recipes with a mix of healthy ingredients while removing the hassle of searching for a recipe. The robot will gather user-input from motion sensors to learn the user's preferences for core components of the meal (protein, carbs, etc.). After selecting a recipe, the user will use gesture commands to scroll through the steps, avoiding sticky fingers on the robot.
Due to a spike in COVID-19 cases, many individuals have needed to quarantine and monitor themselves for symptoms. Through our design process, we developed the idea for a COVID-19 symptom checker. The interactive robot is designed as a way for people in lockdown to report their COVID-19 symptoms easily and is simple to use for everybody, from children to the elderly at home. It will incorporate an Adafruit Bluefruit LE module to return collected data, an LCD display to ask the user questions about their symptoms, a thermistor to take a temperature, and a joystick to answer questions.
For the purpose of assistance and companionship, we have designed a robot that is capable of imitating human movements. As seen with pets such as dogs and cats, attachment grows between a human and a pet because of the pet's ability to reciprocate and react. Consequently, we thought of designing a robot that reciprocates the person's movements. The design uses a Raspberry Pi 3 board to interface the microcontroller with a webcam and DC motors, along with recycled home products such as a food tray and an empty container, to build an autonomous robot that imitates humans using image processing.
Social touch can be established in human-robot touch conditions according to previous studies in HRI. To elicit the numerous benefits of touch interaction between human and robot, we propose a doll-like inflatable haptic device that can be worn around the hand. Touch interaction with the device is provided by a pneumatic airbag system, which produces a grasping feedback sensation when the device is grasped by a human. We expect this mutual touching to have a beneficial effect on alleviating human pain.
This project began with the question, "What if the feeling of being in or close to nature could be sensed within indoors but without the presence of other living beings?" Ivy Curtain is a robotic design proposal that promotes better mental health for indoor residents by simulating the natural behaviors of vegetation. The plant-robot's organic movement generates the rustling sounds of leaves and changes its colors, just as real plants do. Thus, Ivy Curtain functions as an "alternative-plant" robot that moves in unexpected directions, changing its shapes according to both a user's physical interaction and its surrounding temperature.
The COVID-19 pandemic has had a drastic impact on our day-to-day lives, prompting us to make changes to our daily routines for the greater good. One such change is wearing a face mask in public. Yet, it is easy to forget to take a mask with us as we venture out of our houses because of how acclimated we still are to pre-pandemic life. MaskUp! addresses this common problem and supports us through lockdown by reminding us to do the right thing and wear a mask.
Bada is a social robot that can interact with deaf individuals. It resembles a robotic vacuum cleaner, and its signaling of abnormal circumstances at home was modeled after the behavior of hearing dogs. Bada effectively reduces the loss of information during delivery by relaying messages in various ways, including a web service, text messages, visual representation, and a haptic interface. We have developed Bada's interaction process through several tests. Its behavior, interface, and interaction model can contribute meaningfully to robotic accessibility technology.
COVID-19 has disrupted the way we spend time with our friends and family. Cara is a robot designed to connect isolated persons to their loved ones during the pandemic through three different modes: daily podcasts of audio messages, real-time music sharing, and LED lights that show someone they are being remembered. Cara is a rechargeable, soft stuffed animal that lights up and talks. Individuals can interact with it through a web application.
This project suggests how human-robot interaction might be of service to public institutions during the pandemic crisis. Alice in Bookland aims to ensure a safe environment for children to continue their reading experiences during COVID-19. Due to the lockdown throughout the country, lots of public organizations such as libraries are having a hard time providing their resources, with limited user entrance and open hours. Here, Alice in Bookland complements the situation via providing a safe space outside the library, encouraging readers' healthy reading habits without any undue stress during the pandemic, and raising community awareness of local library resources.
COVID-19 has affected many people's mental health, especially that of students whose usual campus life got replaced by virtual learning platforms. Here, Moody Study Buddy, a robotic lamp that keeps students company during the pandemic, aims to support their learning experience by reminding them that they are not alone. As they study with Moody Study Buddy, it smiles over at them brightly, and an astronaut figure climbs up and plants a flag on top, helping students feel content and de-stressed during use. Its target users extend to students, work-from-home workers, and lonely elderly people during the pandemic.
According to a USDA study [Cates 2018], only 4% of people correctly wash their hands, with most consumers not washing long enough. Mews is a helpful cat that will time hand washing for the user to ensure hands are sufficiently clean. Once a person signals to Mews, the cat plays a 20-second song of your choice, thus ensuring your hands are washed long enough. As an option for children, we offer a guided sound file of the steps to handwashing so they may practice proper handwashing as a habit.
In this paper, we discuss our solution towards social issues such as theft, sexual assault, and work-related stress in the context of delivery in densely populated urban areas, especially with the rising concerns for covid-19 (Muschert & Budd, 2020). In doing so, we will address our robot, TUTUS' design, social relevance, and practicality, and how it can be implemented in real situations. We aim to achieve this by having our robot capable of efficiently navigating through apartment complexes, carrying parcels of varying sizes and shapes while keeping cost low and practicality high.
We integrate a low-cost, open-source system to create a human-machine interface to control a robot by tracking human motion. Controlling a robot with human motion is intuitive and can readily be used to perform tasks that require precise control. We envision using this system will provide an interactive learning experience to students during the stay-at-home period. Existing solutions require an expensive and precise setup which is not portable. We use open-source components such as Robotics Operating System (ROS) and OpenMANIPULATOR-X and low cost commercially available Azure Kinect DK to create an accessible and portable motion-controlled human-machine interface.
Distance learning lacks many crucial communication tools that support learning in physical classrooms. The Distance Learning Companion Robot expresses the climate of digital classrooms using nonverbal modalities. The system allows students to express their confidence in curricular material, enables instructors to better understand student needs, and supports the virtual classroom community.
Every Hanukkah, the Jewish Festival of Lights, our team gathers with our grandparents, Fred and Marlene, to celebrate by lighting candles together. This is an interactive experience, where participants pass the Shamash candle to each other to light the next of 8 candles. During the COVID-19 pandemic, social distancing prohibits the close contact required for this interaction; therefore, we designed a pair of robotic menorahs (candelabras with 9 candle-holders) connected via Bluetooth to enable candle lighting across a distance during Hanukkah 2020. Our design incorporates existing menorahs and candles with electronic components to enable a traditional, yet socially distant experience.
We all experienced quarantine in 2020 and this experience is especially challenging for visually impaired individuals. I aim to develop a robot to enhance their lives during their isolation from society. Around Us is a human interactive robot that uses sensor input to inform them of surrounding information at home through touch and sound, so users can be in a safe, entertaining, and comfortable living environment.
Library GO is an interactive robot swarm group carrying books around in the hope of spreading humane care and delivering warmth. Library GO carts will arrive at users' front doors by request. Carts can express delight and excitement through designed motion patterns when they meet users. When users approach and greet the carts, the greeting actions will trigger the carts to open their cabin and raise the bookshelves. Users will then be able to scan through the shelves and select their favorite books just as before COVID-19.
The words and phrases we utilize in our daily lives reflect our social context, namely its hierarchies and inequalities (e.g., racial, gender). Furthermore, the usage of specific forms of expression can be harmful to vulnerable populations. Here, we propose the development of a robotic agent that will help users seeking to change their speaking habits (i.e., using words that could be harmful to vulnerable populations) fulfill their goal. We provide an overview of our project as well as its tentative design and potential features to be included, such as voice recognition to identify when the user uses a specific word, and the incorporation of a dictionary to inform users of the historical background of certain terms.
This video seeks to use robot prosocial behaviors to incentivize passersby to use hand sanitizer. Teleoperated by a wizard, the robot is capable of expressing itself via LED lights, gestures, and speech as communication modes for human-robot interaction. Future work will explore which communication modes and behaviors are the most effective in encouraging people to use hand sanitizer.
As robots and autonomous systems become more adept at handling complex scenarios, their underlying mechanisms also become increasingly complex and opaque. This lack of transparency can give rise to unverifiable behaviours, limiting the use of robots in a number of applications including high-stakes scenarios, e.g. self-driving cars or first responders. In this paper and accompanying video, we present a system that learns from demonstrations to inspect areas in a remote environment and to explain robot behaviour. Using semi-supervised learning, the robot is able to inspect an offshore platform autonomously, whilst explaining its decision process through both image-based and natural language-based interfaces.
Human-robot collaboration is increasingly applied to industrial assembly sequences due to the growing need for flexibility in manufacturing. Assistant systems are able to help support shared assembly sequences and facilitate collaboration. This contribution shows a workplace installation of a collaborative robot (Cobot) and a spatial augmented reality (SAR) assistant system applied to an assembly use case. We demonstrate a methodology for distributing the assembly sequence between the worker, the Cobot, and the SAR.
Future intelligent systems will involve various artificial agents, including mobile robots, smart home infrastructure, and personal devices, which share data and collaborate with each other to serve users. Designing efficient interactions that support users in expressing their needs to such intelligent environments, supervising the collaboration of the different entities, and evaluating the outcomes will be challenging. This paper presents the design and implementation of the human-machine interface of the Intelligent Cyber-Physical System (ICPS), a multi-entity coordination system of robots and other smart agents in a workplace (Honda Research Institute). ICPS gathers sensory data from the entities and receives users' inputs, then optimizes plans to utilize the capabilities of the different entities to serve people.
The goal of the EU H2020-ICT funded SPRING project is to develop a socially pertinent robot to carry out tasks in a gerontological healthcare unit. In this context, being able to perceive its environment and to hold coherent and relevant conversations about the surrounding world is critical. In this paper, we describe current progress towards developing the necessary integrated visual and conversational capabilities for a robot to operate in such environments. Concretely, we introduce an architecture for conversing about objects and other entities present in the environment. The work described in this paper has applications that extend well beyond healthcare and can be used on any robot that needs to interact with its visual and spatial environment in order to perform its duties.
Sign language is the primary form of communication of the deaf community. It is one of the only channels of communication between the hearing-impaired community and the rest of society. While there are many existing sign language recognition (SLR) systems that focus on the recognition of manual gestures, few consider the non-manual components of sign language, such as facial expressions. The role of these non-manual features is comparable to pitch, intonation, and the nuances observed in spoken language. Our prototype combines manual SLR with emotion recognition to form a single system that can recognise both facial expressions and sign language gestures.
An HRI study with 31 expert robot operators established that an external viewpoint from an assisting robot could increase teleoperation performance by 14% to 58% while reducing human error by 87% to 100%. This video illustrates those findings with a side-by-side comparison of the best and worst viewpoints for the passability and traversability affordances. The passability scenario uses a small unmanned aerial system as a visual assistant that can reach any viewpoint on the idealized hemisphere surrounding the task action. The traversability scenario uses a small ground robot that is restricted to a subset of viewpoints that are reachable.
This demo is the result of a two-day evaluation of a humanoid robot in a Danish citizens' service centre. Due to COVID-19, the goal was to reduce the number of personal contacts between staff and visitors in the centre using verbal interaction with the robot. The robot was pre-programmed with a number of typical questions and answers related to the centre. A total of 263 citizens attended the centre during the two days. Visitors had to pass the robot to enter the centre, and it was estimated that 5 percent of the visitors interacted with the robot. The most common interaction patterns were greetings and casual chatting, although questions about the facilities at the centre were also observed. However, most visitors ignored the robot and focused on their scheduled appointment.
This video presents a remote user interface (UI) for controlling a multi-robot furniture system intended to enable human-robot interaction studies safely during the COVID-19 pandemic. The three primary features of the system are detailed. The first, a web-based architecture, allows the operation of our chair robots (ChairBots) over the internet. Second, multiple ChairBots are simultaneously operable. Third, variable levels of autonomy allow an operator to send both high-level, with robots autonomously moving to goals, or low-level motion commands. This work presents advances in the technical capabilities of our ChairBot system, representing progress towards a viable multi-robot furniture system.
This virtual half-day workshop will explore human-machine communication (HMC) and communication studies/science theories as used in HRI studies. Submitted papers can include a 1) discussion of theories from the disciplines of media and communication that can guide HRI studies and vice versa, 2) analysis of related constructs (variables) that intersect both fields but with different nomenclatures (e.g., credibility vs. trust, presence vs. immediacy), or 3) exploration of quantitative and qualitative study designs from media and communication and their application to HRI. For more information and submission information, go to https://www.combotlabs.org/hmchri2021.html
This workshop sets out to bring researchers across the Human-Robot Interaction (HRI), Human-Computer Interaction, and Design fields that use, or are interested in using Research through Design (RtD) in their work. RtD is a research approach that uses design practices, such as ideation and prototyping, to generate knowledge. RtD focuses on understanding what is the right thing to design, and has the potential to bring new perspectives that break through fixation within a field. In our workshop, we will attempt to classify current HRI practices of RtD, identify under-explored topics and methods, and discuss challenges on conducting this type of work in the HRI field. The workshop will result in defining next steps for better integration of RtD approaches in the HRI community and guidelines to include more researchers in RtD practice.
The purpose of this workshop is to help researchers develop methodological skills, especially in areas that are relatively new to them. With HRI researchers coming from diverse backgrounds in computer science, engineering, informatics, philosophy, psychology, and more, we can't be experts in everything. In this workshop, participants will be grouped with a mentor to enhance their study design and interdisciplinary work.
Participants will submit 4-page papers with a small introduction and detailed method section for a project currently in the design process. In small groups led by a mentor in the area, they will discuss their method and obtain feedback. The workshop will include time to edit and improve the study. Workshop mentors include Drs. Cindy Bethel, Hung Hsuan Huang, Selma Sabanović, Brian Scassellati, Megan Strait, Komatsu Takanori, Leila Takayama, and Ewart de Visser, with expertise in areas of real-world study, empirical lab study, questionnaire design, interview, participatory design, and statistics.
Games have been used extensively to study human behavior. Researchers in the field of human-robot interaction (HRI) are becoming more aware of the importance of designing compelling and playful games to study interrelationships among players. Despite the growing interest, the use of game design techniques in the creation of playful experiences for HRI experiments is still in its infancy and more multidisciplinary activities should be promoted to foster the convergence between game research and HRI. This workshop aims at discussing the value of using iterative game design techniques to integrate playful experiences using social robots for HRI experiments. More concretely, we want to explore tools, approaches and methods used in previous experiences for appropriate design of interactive games in HRI. Furthermore, based on previous research, a taxonomy for game design using social robots will be presented and attendees will have access to hands-on material created to facilitate the design of interactive games considering important aspects of the robotic systems to maximize the fun experience. We hope this workshop will bring to HRI researchers, game designers, roboticists, and technology enthusiasts enlightening thoughts and ideas to confront the often complicated and time-demanding process of designing compelling games for HRI experiments.
Today it seems ever more evident that social robots will have an integral role to play in real-world scenarios and will need to participate in the full richness of human society. Central to the success of robots as socially intelligent agents is ensuring effective interactions between humans and robots. In order to achieve that goal, researchers and engineers from both industry and academia need to come together to share ideas, trials, failures, and successes. This workshop aims at building a bridge between industry and academia, and thereby creating a community to tackle the current and future challenges of socially intelligent human-robot interaction in real-world scenarios by finding solutions for them.
The sound a robot or automated system makes, and the sounds it listens for in our shared acoustic environment, can greatly expand its contextual understanding and help shape its behaviors to suit the interactions it is trying to perform.
People convey significant information with sound in interpersonal communication in social contexts. Paralinguistic information about where we are, how loud we're speaking, or whether we sound happy, sad, or upset is relevant for a robot to understand if it is to adapt its interactions to be socially appropriate.
Similarly, the qualities of the sound an object makes can change how people perceive that object and can alter whether or not it attracts attention, interrupts other interactions, reinforces or contradicts an emotional expression, and as such should be aligned with the designer's intention for the object. In this tutorial, we will introduce the participants to software and design methods to help robots recognize and generate sound for human-robot interaction (HRI). Using open-source tools and methods designers can apply to their own robots, we seek to increase the application of sound to robot design and stimulate HRI research in robot sound.
As the field of child-robot interaction (CRI) research matures, and in light of the recent replication crisis in psychology, it is timely to tackle several important methodological challenges. Notably, studies on child-robot relationship formation face issues regarding the conceptualization and operationalization of this complex, comprehensive construct. In addressing these challenges, increased interdisciplinary collaboration is of vital importance. As such, this workshop aims to facilitate ongoing discussion between interdisciplinary experts on the topic of child-robot relationship formation to identify common issues and corresponding solutions (e.g., consistent definitions and rigorous measurement techniques). The workshop will begin with a keynote talk from Dr. Iolanda Leite, followed by discussion surrounding identified challenges. These discussions will be accompanied by intensive break-out groups moderated by senior researchers in the field (i.e., Mark Neerincx and Vanessa Evers). We hope this workshop will set the baseline for standardised methodologies that can later be expanded to other CRI constructs.
We believe that we can design robot clothes to help robots become better robots-help them to be useful in a wider array of contexts, or to better adapt or function in the contexts they are already in. We propose that robot clothing should avoid mere mimicry of human apparel, and instead be motivated by what robots need. While we have seen robots dressed in clothes in the last few decades, we believe that robot clothes can be designed with thoughtful intention and should be studied as its own field. In this workshop, we explore this new area within human robot interaction by bringing together HRI researchers, designers, fashion and costume designers, and artists. We will focus on potential functions of robot clothes, discuss potential trends, and design clothes for robots together in an interactive prototyping session. Through this workshop, we hope to build a community of people who will push forward the field of robot clothing design.
Robot sound spans a wide continuum, from subtle motor hums, through music, bleeps and bloops, to human-inspired vocalizations, and can be an important means of communication for robotic agents. This first workshop on sound in HRI aims to bring together interdisciplinary perspectives on sound, including design, conversation analysis, (computational) linguistics, music, engineering and psychology. The goal of the workshop is to stimulate interdisciplinary exchange and to form a more coherent overview of perspectives on how sound can facilitate human-robot interaction. During the half-day workshop, we will explore (1) the diverse application opportunities of sound in human-robot interaction, (2) strategies for designing sonic human-robot interactions, and (3) methodologies for the evaluation of robot sound. Workshop outcomes will be documented on a dedicated website and are planned to be collected in a special issue.
The aim of this workshop is to give researchers from academia and industry the possibility to discuss the inter- and multi-disciplinary nature of the relationships between people and robots towards effective and long-lasting collaborations. This workshop will provide a forum for the HRI and robotics communities to explore successful human-robot interaction (HRI) and to analyse the different aspects of HRI that impact its success. Particular foci are the AI algorithms required to implement autonomous interactions, and the factors that enhance, undermine, or recover humans' trust in robots. Finally, potential ethical and legal concerns, and how they can be addressed, will be considered. Website: https://sites.google.com/view/traits-hri
The Robots for Learning workshop series aims at advancing research topics related to the use of social robots in educational contexts. This year's half-day workshop follows on from previous events at Human-Robot Interaction conferences focused on efforts to discuss potential benchmarks in the design, methodology, and evaluation of new robotic systems that help learners. In this 6th edition of the workshop, we will investigate in particular methods from educational technologies and online learning. Over the past few months, online and remote learning has been put in place in several countries to cope with the health and safety measures due to the Covid-19 pandemic. In this workshop, we aim to discuss strategies to design robotic systems able to provide embodied assistance to remote learners and to demonstrate long-term learning effects.
Interactive robots are becoming more commonplace and complex, but their identity has not yet been a key point of investigation. Identity is an overarching concept that combines traits like personality or a backstory (among other aspects) that people readily attribute to a robot to individuate it as a unique entity. Given people's tendency to anthropomorphize social robots, "who is a robot?" should be a guiding question above and beyond "what is a robot?" Hence, we open up a discussion on artificial identity through this workshop in a multi-disciplinary manner; we welcome perspectives on challenges and opportunities from fields of ethics, design, and engineering. For instance, dynamic embodiment, e.g., an agent that dynamically moves across one's smartwatch, smart speaker, and laptop, is a technical and theoretical problem, with ethical ramifications. Another consideration is whether multiple bodies may warrant multiple identities instead of an "all-in-one" identity. Who "lives" in which devices or bodies? Should their identity travel across different forms, and how can that be achieved in an ethically mindful manner? We bring together philosophical, ethical, technical, and designerly perspectives on exploring artificial identity.
The 4th International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI) will bring together HRI, robotics, and mixed reality researchers to address challenges in mixed reality interactions between humans and robots. Topics relevant to the workshop include development of robots that can interact with humans in mixed reality, use of virtual reality for developing interactive robots, the design of augmented reality interfaces that mediate communication between humans and robots, the investigations of mixed reality interfaces for robot learning, comparisons of the capabilities and perceptions of robots and virtual agents, and best design practices. Special topics of interest this year include VAM-HRI research during the COVID-19 pandemic as well as the ethical implications of VAM-HRI research. VAM-HRI 2021 will follow on the success of VAM-HRI 2018-20 and advance the cause of this nascent research community.
While most of the research in Human-Robot Interaction (HRI) focuses on short-term interactions, long-term interactions require bolder developments and a substantial amount of resources, especially if the robots are deployed in the wild. Robots need to incrementally learn new concepts or abilities in a lifelong fashion to adapt their behaviors within new situations and personalize their interactions with users to maintain their interest and engagement. The "Lifelong Learning and Personalization in Long-Term Human-Robot Interaction (LEAP-HRI)" Workshop aims to take a leap from the traditional HRI approaches towards addressing the developments and challenges in these areas and create a medium for researchers to share their work in progress, present preliminary results, learn from the experience of invited researchers and discuss relevant topics. The workshop extends the topics covered in the "Personalization in Long-Term Human-Robot Interaction (PLOT-HRI)" Workshop at the 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI) and "Lifelong Learning for Long-term Human-Robot Interaction (LL4LHRI)" Workshop at the 29th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), and focuses on studies on lifelong learning and adaptivity to users, context, environment, and tasks in long-term interactions in a variety of fields (e.g., education, rehabilitation, elderly care, collaborative tasks, customer-oriented service and companion robots).
Non-verbal Human-Robot Interaction (nHRI) encompasses the study of the exchange of human-robot gaze, gesture, touch, body language, paralinguistic, facial and affect expression. nHRI has advanced beyond theoretical and computational contributions. Progress has been made through a variety of user studies and laboratory experiments as well as practical efforts such as integration of nonverbal inputs with other HRI modalities including domain specific implementations. This workshop seeks to promote collaboration between two threads of research: experimental nHRI, and application domains that can benefit from its use.
The workshop will link researchers working on new approaches to nHRI in the laboratory to applied roboticists who present challenges in specific domains, such as: service robots, field robotics, socially-assistive robotics, and human-robot collaborative work that could benefit from richer nHRI. This workshop will draw participation from diverse areas to evaluate best practices and integration efforts across different research domains. We will target a broad, cross-disciplinary audience, and provide a venue for recent efforts related to multimodal interaction, system integration, data collection, and user studies.
This is the third annual, full-day workshop that aims to explore the state of practice in the metrology necessary for repeatably and independently assessing the performance of robotic systems in real-world human-robot interaction (HRI) scenarios. This workshop continues the aims of shortening the lead time between the theory and applications of HRI, enabling reproducible studies, and accelerating the adoption of cutting-edge technologies as the industry state of practice. This third installment of the annual workshop, "Test Methods and Metrics for Effective HRI," seeks to identify novel and emerging test methods and metrics for the holistic assessment and assurance of HRI performance. The focus is on identifying innovative metrics and test methods for the evaluation of HRI performance, and on advancing the growth of the HRI community based on the principles of collaboration, data sharing, and repeatability. The goal of this workshop is to aid the advancement of HRI technologies through the development of experimental design, test methods, and metrics for assessing interaction and interface designs.
https://download.cnet.com/Google-Maps-with-GPS-Tracker/3000-12940_4-10494227.html?tag=keyword.feed&part=rss&subj=dl.gps
Golenfound's Google Maps with GPS Tracker is a small, free application that uploads your GPS position regularly via GPRS or 3G and then automatically updates your position on a Google Map display. You need a GPS device to use it, but you can download and try the GpsGate simulator software free for 14 days. Once you're familiar with Google Maps with GPS Tracker, you can buy a GPS device that suits your needs and configure the program to accept it.
A small, square, tabbed interface opened on the GPS tab, which showed blank fields. The program's Web-based documentation and assistance made the setup a snap. We clicked the Settings button and selected the GPS tab, which let us configure GPS Type, the COM Port on our PC we use to connect our GPS device, the baud rate, and kilometers or miles for distance, as well as choose to log our input. The Garmin Protocol tab lets Garmin GPS users enter their product ID, software version, and other data as well as set the protocol. The Share tab contained a single button that let us share our current location via a Web site. Activating this feature called up a slider that let us configure the time out in minutes. The first time we ran the program, we had to permit access through the Windows firewall and our other security tools, but it opened normally after that, displaying our current location with the usual indicator on a Google Map. Since our movements were within the default GPS radius, we couldn't track them, but Google Maps with GPS Tracker will get a workout on our next trip.
We could also configure Google Maps with GPS Tracker to send GPS data to a wide range of outputs, including system ports, COM protocols, and even Google Earth. We tried this last option, which included a setup dialog for configuring Google Earth's settings.
https://www.frankysnotes.com/2018/05/reading-notes-327.html
- Create Azure Functions using Visual Studio and deploy it to Azure (Abhijit Jana) - Nice little Visual Studio tutorial to create an Azure Function.
- Why you should use Durable Functions sub-orchestrations (Mark Heath) - Nice post. A lot of very useful pieces of information.
- Yes you can! develop on AWS using .NET (Dror Helper) - This post is a great invitation for .Net developers to try AWS.
- Announcing .NET Core 2.1 RC 1 Go Live AND .NET Core 3.0 Futures (Scott Hanselman) - The future with .Net Core looks very promising, learn why in this post.
- Using LazyCache for clean and simple .NET Core in-memory caching (Scott Hanselman) - New post in the series where Scott shares things he tried while refactoring his website. I loved it.
- Web Performance Optimization is Simple - Can Your Developers Execute? (Chris Love) - Interesting post; learn from true stories. Simple things can make a big difference.
- Azure SQL Database Table Partitioning Example (John Miner) - Wow, this post is a very complete tutorial that provides all the steps and explanations.
- Ask Tim - Should I say "I Don't Know" to an Interviewer? (Tim Corey) - Great post. I apply this no-bullshit rule to all my conversations.
- Five Things That Every Leader Could Use to be a Better Leader (Joan Pepin) - Interesting post about leadership and how we grow with our challenges.
- The Subtle Art of Not Giving a F*ck (Mark Manson) - Damn, it's good!
The title of the book led me to think it would be very negative. Not giving a fu#*... But it's really not. Quite the opposite, in fact. I really liked the book and I'm planning to read/listen to it another time in... one year. To see what changed.
https://www.tindie.com/products/dustinwattsnl/esp32-touchdown/
An ESP32 with a 3.5" (480*320) TFT with capacitive touch. Designed by Dustin Watts in the Netherlands.
The ESP32 TouchDown
The ESP32 TouchDown is a complete solution for anyone who wants or needs an ESP32 with a capacitive touchscreen. It also has battery management onboard, a piezo speaker, and an SD card reader. The ESP32 TouchDown works out of the box with the Arduino IDE, provided you have installed the ESP32 Arduino Core. Pins that are not used by the onboard peripherals are broken out.
The ESP32 TouchDown is designed with FreeTouchDeck (https://github.com/DustinWatts/FreeTouchDeck) in mind. It is the one-stop solution to get a FreeTouchDeck up and running with capacitive touch without the need to buy separate modules. You have the option to get your ESP32 TouchDown shipped to you with the most current release of FreeTouchDeck installed!
Write your own software
The ESP32 TouchDown can be used for more than FreeTouchDeck alone. It is a fully featured DevKit. In the GitHub repository you will find some examples to get you started:
The ESP32 TouchDown can run off a Li-Po battery. Combined with WiFi and BLE, this makes the ESP32 TouchDown very portable. It uses an MCP73831 charge management controller set to a charge current of 330mA. Charging is done through the USB-C port (using 5V). The on/off switch doesn't affect battery charging, so even when the ESP32 TouchDown is off, you can still charge the battery. When the ESP32 TouchDown is plugged in to USB, it will use USB power over battery power and will switch to battery power when USB is disconnected, without interrupting the ESP32.
Note: Always use a protected cell!
ESP32 TouchDown uses a 3.5" TFT screen with a resolution of 480x320. The driver is an ILI9488. Pins used by the TFT screen are:
The TFT backlight anode (positive supply) is selectable via a jumper on the back. You can either power it directly from 3.3V or use GPIO32. By default, the positive source is 3.3V. You can change this and use PWM to control the backlight brightness. If you choose to have your ESP32 TouchDown shipped with FreeTouchDeck, the jumper will already be set so you can dim the backlight from the software.
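If the jumper is set to GPIO32, the backlight can be dimmed from a sketch via the ESP32 Arduino core's LEDC peripheral. A minimal example (the channel and frequency are arbitrary choices, and the calls assume the 2.x core's LEDC API):

const int BACKLIGHT_PIN = 32;   // jumper set to GPIO32, per the paragraph above
const int LEDC_CHANNEL  = 0;    // any free LEDC channel
const int LEDC_FREQ_HZ  = 5000; // comfortably above visible flicker
const int LEDC_RES_BITS = 8;    // duty range 0..255

void setup() {
  ledcSetup(LEDC_CHANNEL, LEDC_FREQ_HZ, LEDC_RES_BITS);
  ledcAttachPin(BACKLIGHT_PIN, LEDC_CHANNEL);
  ledcWrite(LEDC_CHANNEL, 128); // roughly 50% brightness
}

void loop() {}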
The capacitive touch controller is a FocalTech FT6236 (datasheet). The FT6236 uses I2C and has address 0x38. I made an Arduino IDE library available here: https://github.com/DustinWatts/FT6236.
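For a quick smoke test without the library, the controller can also be polled over raw I2C with the Wire library. The register offsets below follow the FT6x06-family datasheet, and the sketch leaves the I2C pins at the core defaults, so verify both against your board:

#include <Wire.h>

const uint8_t FT6236_ADDR = 0x38;

void setup() {
  Serial.begin(115200);
  Wire.begin(); // assumes the board's default I2C pins
}

void loop() {
  Wire.beginTransmission(FT6236_ADDR);
  Wire.write(0x02); // TD_STATUS register: number of active touches
  Wire.endTransmission(false);
  Wire.requestFrom(FT6236_ADDR, (uint8_t)5); // status + P1 X/Y registers
  uint8_t touches = Wire.read() & 0x0F;
  uint16_t x = ((Wire.read() & 0x0F) << 8) | Wire.read();
  uint16_t y = ((Wire.read() & 0x0F) << 8) | Wire.read();
  if (touches) Serial.printf("touch at %u,%u\n", x, y);
  delay(50);
}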
Pins used by the FT6236 are:
The following GPIOs are broken out on the header:
Why is it so special?
The ESP32 TouchDown comes ready to use. It includes all the features that you would otherwise need additional modules for when using a development board. Besides that, the ESP32 TouchDown is fully open source. All hardware designs can be found in the GitHub repository. The ESP32 TouchDown is also OSHWA certified. You can find more information here: https://certification.oshwa.org/nl000004.html
Batch 2 (27th of February 2021): Currently being produced...
Batch 1 (16th of February 2021): PCB manufacturer: PCBWay, Parts source: LCSC, Hand assembled.
http://www.actionscript.org/forums/showthread.php3?t=80023
Accessing MC's in a for loop: a syntax question
This must be a really silly question but here goes: I am trying to access properties of multiple MCs on the stage. I would like to use a for loop and its index to access these MCs. Here is what I'm thinking:
mcs_set = 0;
for (i = 0; i < MC_COUNT_TOTAL; i++)
{
    mc = i;
    if (mc_holder_i.am_i_set == true)
    {
        trace("A MC has been set!");
    }
}
Obviously the problem lies with using "i" to access the MCs: mc_holder_0, mc_holder_1, mc_holder_2, etc.
So my question: What is the proper syntax for accessing multiple MCs via a for loop?
Cheers and thanks!
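For anyone finding this thread later: the usual answer is bracket notation on the timeline that holds the clips. A minimal sketch, assuming the clips are named mc_holder_0, mc_holder_1, ... and sit on the same timeline as the script:

mcs_set = 0;
for (i = 0; i < MC_COUNT_TOTAL; i++) {
    var mc = this["mc_holder_" + i]; // build the instance name, then look it up
    if (mc.am_i_set == true) {
        trace("mc_holder_" + i + " has been set!");
        mcs_set++;
    }
}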
http://fixunix.com/sco/537796-running-cpx-openserver-6-anyone-done-print.html
Running cpx on OpenServer 6? Anyone done it?
-----BEGIN PGP SIGNED MESSAGE-----
I have a customer that has an application that runs under cpx. The last
update of cpx was in 1995. They have a programmer that modifies their
application. They are starting to have HW problems and I need to get them
to OpenServer 6 if possible. They have cpn also running on their Novell
Network. I would like to move everything to OpenServer 6.
Boyd Gerber <[email protected]>
ZENEZ 1042 East Fort Union #135, Midvale Utah 84047
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.9 (GNU/Linux)
-----END PGP SIGNATURE----- | s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982298551.3/warc/CC-MAIN-20160823195818-00005-ip-10-153-172-175.ec2.internal.warc.gz | CC-MAIN-2016-36 | 602 | 12 |
http://rcspace.xyz/archives/13180 | code | Novel–Guild Wars–Guild Wars
Chapter 457 – Impossible Odds
At the very least, Rina had been in control of the skill at the moment since it was cast from her Legendary weapon, so it had only harmed her targets and left the area relatively untouched otherwise.
This wasn't a technique skill, but one arising from his Serpent God Inheritance. His affinity was with an Aurora Serpent essentially, which was a God Serpent of the lesser variety, and one that had not been recorded down in mythology.
As for the Dark Angel Inheritance, there was no fancy variety there. Everyone with the bloodline basically had the same thing, only with different specialties and degrees of strength. Using the angelic wings allowed Essence to gain rough flight capabilities, though his speed and maneuverability were not even close to Draco's.
Did you really expect a space dragon to hang out on Earth???
Sublime Notion didn't have any buffs that could assist Uno's skill, despite the Tome of Healing in her hands, so she opted to watch from the side.
Looking at all this, one could see why Eva sensed this coming battle would be a shitfest of epic proportions. It didn't matter how OP they were, what Divine or Legendary Classes they had, or what bloodline this or that person had.
Some also lay in the lava, being melted down in the most painful way possible as they tried to crawl away despite the extreme pain.
It was quite silly to think that Aether-Imbuing a weapon enabled a player or NPC to fight void monsters of all Ranks, negating whatever made these monsters fearsome.
That's right, you needed to spam an attack worth 20 million damage 750 times to kill just one of these TEN fellows. Not only that, you had to survive their clearly buffed and OP attacks while dealing steady damage.
His shield negated 60% of damage and it had 8,000,000 HP. However, the damage these meteors were dealing was not a joke. Even with almost all of it dispersed, Uno sensed that the 8,000,000 HP would be drained by the fifth second.
Duration: 5 minutes
Star Storm was no different from a rain of death and destruction. Many people from Earth had often wondered what it would be like if the world was bombarded with atomic bombs.
So, what was it like to watch the end of the world falling from above your head?
But 15 billion HP with, at the least, 80% physical and magical damage resistance… frightening. Since they had Aether-Imbued items, they could probably reduce that number to 40%, about half.
All things were equal though. Because of that, his consumption of bloodline energy was also far lower, meaning he could keep it up for far longer than Draco.
All the eyes of Umbra's members, along with the rest of Meiren, Kamisuo, and the others, fell on the brown-haired fellow who held a staff and a strange cube-like item in hand.
In this case, it was the latter.
But Fitter Cleric had literally run a lottery and lucked out with the skill, yet since it was basically the System casting the skill for him, it meant he had no control over it and it would attack allies and enemies alike. However, the System still treated him as the caster of the skill, so he himself would not be harmed by it, but…
This left the saved players speechless, and some even vomited. Many had witnessed Rina's slaughter of everybody with Supernova back during the Emergency Quest, but they had all died at the same time then.
「Light of Prayer – Active skill
Essence Stalker's face became dark as he cricked his neck menacingly. "It seems like you are the kind of kid who needs plenty of discipline in order to behave properly before his daddy."
Hey boss, that was YOUR skill. Even if anyone would be harmed, you would be exempt. Are you so distrusting of the system that you needed to hide out here still?!
Even Eva and the Rank 3 beauties were feeling suffocated. If it was a typical Rank 3 enemy, Umbra alone could easily beat the other party to a pulp, especially with the help of such a great lineup.
Just as the two best buds were getting ready to pummel each other to pieces, a scene that had occurred so often in the past six months of their seclusion that it had become their daily bread, Eva appeared from below the ground.
And things didn't stop there. The duration of the spell was 10 seconds, so for that time, meteors kept coming down and smashing into their location. The members of Umbra simply watched quietly, even as Uno began to shake.
Effect: Send out a wave of pure benevolent energy that buffs all allies nearby, increasing their damage, defense, and stats by 50%. It also allows all targets to regain 4% of their HP per second.
That’s appropriate, you necessary to spam an attack truly worth 20 million injury 750 times to get rid of only one of these TEN fellows. Not only this, you have to survive their clearly buffed and OP assaults while coping steady damage.
A minimum of, Rina had been in charge above the competency at the moment as it was cast from her Mythical tool, so that it got only harmed her is targeted on and left the location relatively untouched if not.
Novel–Guild Wars–Guild Wars | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710698.62/warc/CC-MAIN-20221129132340-20221129162340-00697.warc.gz | CC-MAIN-2022-49 | 6,436 | 36 |
http://massfinder.com/wiki/MassFinder_Tutorial | code | |MassFinder 4: Tutorial|
Welcome to the world of MassFinder!
We believe that MassFinder is one of the most convenient software systems to do GC/MS with and we hope you will become an enthusiastic user as well. We are confident that with these step-by-step tutorials you will become an expert MassFinder user in a very short time.
We offer all customers intensive support by email or phone. If you have got any problems, inquiries or even suggestions on how to further improve MassFinder, please do not hesitate to contact us. We will do our best to solve all issues reported and to implement new features we consider to be of interest for the majority of users. It is our policy to provide all updates of MassFinder 4.x free of charge to all registered users, so every user will benefit from all modifications and enhancements.
If you have very special requirements of additional MassFinder features, we will gladly give you a free quotation for a customised, individual MassFinder version that solves your institute's or company's challenges.
About the MassFinder 4 tutorial!
This tutorial uses a step-by-step approach to teach the fundamental aspects of MassFinder. If you follow each step, you will swiftly and easily learn how to use all important MassFinder features. Please refrain from using your own data files at this moment; the tutorial data files are selected to offer the best and most rapid access to all important features. Later on in this tutorial you will learn how to analyse your own data files.
Please note that the tutorial does not cover all features and options of MassFinder. A complete explanation of all details of each dialog can be found in the MassFinder Reference Manual.
This first lesson will teach you step-by-step to open, visualize and navigate GC/MS data files.
Step 1: Starting MassFinder 4
Please ensure the dongle is connected to your computer. Start MassFinder 4 by double-clicking on MassFinder4.exe. Alternatively, you might have added a shortcut on your desktop, in the quick launch bar or in your start menu. If you have any problems with starting MassFinder, please read the installation instructions carefully.
Step 2: Opening a data file
Please click on the button "open GC/MS" as indicated in the picture above. If you hover with the mouse pointer over a button, a short help message will appear describing what the button does. This hover help will work almost for all buttons, dialogs and input boxes in MassFinder.
MassFinder will now display the open dialog box. For this tutorial, please select the file oil2122.mfg and click the OK button. Alternatively, you can double-click the file in order to open it.
Step 3: The first look on the user interface
The main MassFinder window is divided in three vertical panels:
- left: chromatogram profile displayed vertically from top to bottom
- center: three consecutive mass spectra of the GC/MS data file
- right: up to three mass spectra of the library (search hits)
The name of the data file, oil2122.mfg, is displayed in the title bar of the window.
Step 4: Navigation
Please note first, that you always see three consecutive scans of your GC/MS run on top of each other. The center scan is highlighted (light yellow background), the upper and lower scans have grey background. The center, highlighted scan should usually have your attention, because MassFinder applies all commands you will learn later (e.g. library search, add to library, export as graphic) to that highlighted scan.
The blue line in the center of the chromatogram indicates which point of the chromatogram represents the highlighted scan. The scan on top is exactly one scan before that, the scan on bottom is exactly one scan after the blue line (and after the highlighted one in the center).
There are several methods to navigate the GC/MS and to select exactly the scans you are interested in. You will learn all methods in this part of the tutorial.
Please use the mouse pointer to click on the chromatogram peak as shown in the picture on the right. MassFinder will instantly select exactly that scan you pointed to as the new highlighted scan displayed in the center.
Without knowing, you also just performed your first library search with MassFinder. In the moment you selected the new scan in the chromatogram, MassFinder executed a library search and now displays the best library hit on the right side, also highlighted with yellow background. Please feel free to click to other GC/MS positions and see how fast MassFinder displays the according library matches. If the right panel (where the library hits are shown) remains empty, MassFinder was not able to find a match.
The screen should now look similar to the following screen shot (red text added for this tutorial):
Please select a range of the chromatogram as indicated on the right. Point the mouse to the upper left corner of the desired range, press the left mouse button down (and hold it down), then drag the mouse to the lower right corner of the desired range and release the mouse button. Please note, that the first click needs to be pretty far left in the area of the chromatogram's base line.
Hint: If you want to cancel a zoom you already started, drag the mouse to a position higher than the starting point and release the button.
Again, MassFinder not only zooms into the desired range, but also performs a library search of the new highlighted scan.
Please note, how convenient it is to always see three consecutive scan after each other. This makes detecting co-eluting peaks, overloading effects, and all other kinds of slightly varying mass spectra very easy. In many cases, you would like to exactly move one scan up or down or to move scan-by-scan through a range of the chromatogram: Position the mouse pointer anywhere on one of the three GC/MS spectra and turn the mouse wheel up or down to move scan-by-scan. You may turn the wheel fast to scroll smoothly.
Adjusting to the peak
You can press the space key to adjust your active scan to the top of the closest peak.
Shifting the displayed range
Press down the left mouse button anywhere on top of the chromatogram (and hold it down), then drag the mouse to shift the displayed range of the chromatogram. Release the mouse button when you reached the desired position. Please note, that the first click must not be near the base line, since that would indicate mouse-zoom operation.
Stretching the displayed range
Press down the right mouse button on the lower half of the chromatogram (and hold it down), drag the mouse to stretch or compress the chromatogram, release the button when ready. Note, that MassFinder does not change the highlighted, active scan!
Hint: Do not start with this mouse operation in the baseline area of the profile. Just click anywhere on the main profile area. The baseline area and the upper half are reserved for creating average spectra, which you will learn in a later section.
Display the complete chromatogram
Double-click anywhere near the base line of the chromatogram to show the complete chromatogram. Directly after opening a GC/MS, MassFinder always automatically displays the complete chromatogram.
Modifying the intensity
You may easily change the intensity axis of the chromatogram by positioning the mouse pointer anywhere on top of the chromatogram profile and turning the mouse wheel up or down.
Later you will learn that there is an option to decide whether MassFinder will auto-adjust the intensity when zooming into the chromatogram with the mouse or maintain the intensity scale.
Please play around with all these navigation methods before proceeding. You should feel comfortable with these mouse operations since they are basic for successfully employing MassFinder.
Continue with the second lesson of this tutorial: Lesson 2: Identifying peaks | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119642.3/warc/CC-MAIN-20170423031159-00276-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 7,807 | 45 |
http://linuxsoft.cern.ch/cern/devtoolset/slc6X/x86_64/RPMS/repoview/devtoolset-2-strace.html | code | devtoolset-2-strace - Tracks and displays system calls associated with a running process
Vendor: Scientific Linux CERN, http://cern.ch/linux
The strace program intercepts and records the system calls called and received by a running process. Strace can print a record of each system call, its arguments and its return value. Strace is useful for diagnosing problems and debugging, as well as for instructional purposes. Install strace if you need a tool to track the system calls made and received by a process.
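For example (generic strace usage, not specific to this package build):

# Summarize the system calls made by a command:
strace -c /bin/ls

# Attach to a running process (PID is illustrative) and log file-open calls:
strace -p 1234 -e trace=open -o trace.log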
devtoolset-2-strace-4.7-13.el6.x86_64 [221 KiB]
by Jeff Law (2013-12-18):
- Don't pass NULL pointers to strcmp when sorting by syscall name (#1044044)
devtoolset-2-strace-4.7-11.el6.x86_64 [218 KiB]
by Jeff Law (2013-06-13):
- Unconditionally define MADV_DUMP and MADV_DONTDUMP as DTS is built on RHEL 6.2 which doesn't define them (#921550). | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679103464.86/warc/CC-MAIN-20231211013452-20231211043452-00486.warc.gz | CC-MAIN-2023-50 | 863 | 9 |
http://joeganley.com/oldsite/labels/c%2B%2B.html | code | I ran into a bug the other day that I've run into enough times in my career that I thought it was worth sharing.
One of the perils of C++ is that it does certain things behind your back, and if you don't understand what those things are, they can bite you. One way this manifests is that apparently meaning-preserving changes can have surprising consequences.
In this recent case, I inherited some code that I was cleaning up. One of the changes was to change this:
foo = factory.getSomeClass();
Simple enough, right? But after making this change, a later access to
SomeClass foo = factory.getSomeClass();
Can you guess why? Here's a hint:
SomeClass uses a reference-counted shared-data implementation, similar to
Here's what happened:
SomeClass had defined an assignment operator, but not a copy constructor. The old code used the former, and the new code used the latter. In the absence of a user-defined copy constructor, the compiler generates one that just does a memberwise copy. As a result, the reference count is copied and not incremented, and so later it becomes 0 before it should, and the data is deleted. A subsequent access to that data produces the segfault.
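Here is a minimal sketch of a correct version (the names are illustrative; the post doesn't show SomeClass's internals), including the declare-but-don't-define trick described below:

class SomeClass {
public:
    SomeClass() : data_(new Data) {}
    ~SomeClass() { release(); }

    // Copy constructor: share the data and bump the reference count.
    SomeClass(const SomeClass& other) : data_(other.data_) { ++data_->refs; }

    // Assignment: bump the other side first (handles self-assignment),
    // then release our old data.
    SomeClass& operator=(const SomeClass& other) {
        ++other.data_->refs;
        release();
        data_ = other.data_;
        return *this;
    }

private:
    struct Data {
        Data() : refs(1) {}
        int refs;
        // ... the shared payload ...
    };

    void release() {
        if (--data_->refs == 0) delete data_;
    }

    Data* data_;
};

// The "never rely on compiler-generated copies" policy: declare them
// private and leave them unimplemented, so any use fails to build.
class NoCopies {
public:
    NoCopies() {}
private:
    NoCopies(const NoCopies&);            // not implemented
    NoCopies& operator=(const NoCopies&); // not implemented
};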
The solution is simple: Define a correct copy constructor. My own policy is never to rely on the compiler-generated copy constructor and assignment operator. When I write a new class, if I think I don't need these, I immediately add method prototypes for them that are private and have no implementations. Then, if you ever invoke them (deliberately or not), the compile will fail, and you'll know that you have to implement them for real (or change the calling code). | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359093.97/warc/CC-MAIN-20211201052655-20211201082655-00484.warc.gz | CC-MAIN-2021-49 | 1,662 | 12
http://blog.leomartins.org/2010/06/more-secure-secure-shell.html | code | When we connect to a remote machine using Secure Shell (SSH), we routinely must type the password that we use on this remote machine. But SSH allows for public key authentication, where the remote machine has a public key, and it grants access to whoever has the corresponding private key (broadly speaking; if you want a proper explanation please refer to e.g. this book's section or the "Authentication" section of the ssh(1) man page). A possible analogy is a lock-and-key pair, where the lock may be accessible to everybody as long as you keep the key with yourself. It is also similar to PGP cryptography.
But with SSH it is also possible to encrypt the private key with a passphrase, such that the file with the private key is not enough for authentication. In the lock-and-key analogy, this would be equivalent to those modern cars where the owner's fingerprint is necessary to unlock the doors, besides the key. So far I have always used empty passphrases (that is, the private key was unencrypted), but now that we're migrating to a more secure server I'm trying a distinct passphrase for each remote machine I connect to regularly.
Creating the public/private key pair is relatively simple; this is how I did it:
Generating public/private rsa key pair.
Enter file in which to save the key (/home/leo/.ssh/id_rsa): /home/leo/.ssh/id_rsa-theirPC
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/leo/.ssh/id_rsa-theirPC.
Your public key has been saved in /home/leo/.ssh/id_rsa-theirPC.pub.
The key fingerprint is:
The key's randomart image is:
+--[ RSA 2048]----+
| .=.E. |
| o o. o S |
| o o . |
+-----------------+
Here I have chosen the standard RSA algorithm, but gave a custom name to my private (id_rsa-theirPC) and public (id_rsa-theirPC.pub) files. This is because I want to use this pair for connecting only to theirPC machine (any name would do, BTW). You cannot see, but I chose a fancy passphrase for that too. The next step is to send the public file id_rsa-theirPC.pub to the remote machine and there, to include it into the list of public keys:
23:35[theirPC:~] cat id_rsa-theirPC.pub >> /home/leo/.ssh/authorized_keys
To test if this configuration is working, we can do the following:
23:37[myPC:~] ssh -i ~/.ssh/id_rsa-theirPC theirPC.bigcorp.com
(notice how we must explicitly tell SSH about the location of the private key, otherwise it would fall back to your password at the remote host). If everything went fine, we can automatize the process by creating a configuration file for the SSH client with our customizations:
23:39[myPC:~] cat ~/.ssh/config
This way we can easily include more and more remote machines, and in the example above I could connect with the command "ssh theirPC". But the problem is that the more machines we connect to, the more often we need to type long passphrases. To solve this you can use ssh-agent to keep the authentication keys (the unencrypted private keys) in memory for all commands spawned from it. That is, ssh-agent acts over a command (which can be a terminal, or startx), and all commands called from it will inherit the authorization keys. But sometimes this is not enough, you might want authentication keys shared between sessions (even after logout). For me the solution was keychain (not the Mac OSX application!), which is a wrapper around ssh-agent (and its low-level friend ssh-add).
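For reference, a minimal ~/.ssh/config entry for the example above (the values are assumed from the commands shown earlier) would be:

Host theirPC
    HostName theirPC.bigcorp.com
    User leo
    IdentityFile ~/.ssh/id_rsa-theirPC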
Both keychain and ssh-add can read from the terminal prompt, but normally they will use a program that provides the passphrase dialog window. This program is generally called ssh-askpass, but each window manager offers its own implementation as well. In my case I'm using ksshaskpass, that can be integrated with the KDE Wallet system (which is installed under the name kwalletmanager). The kwallet is an application that manages passwords and passphrases, storing them in encrypted "wallets". That is, you only need to type one master password per wallet.
I will skip the details of kwallet configuration - it suffices to say that you should create at least one wallet and it will prompt you when creating or retrieving passwords - and comment on the keychain program. The installation instructions always tell us to call keychain from .bash_profile (like here), but this didn't work for me since I must make sure it is called only after KDE is up. When following the instructions, I had kwallet asking for my wallet's password (or ssh-askpass asking for the passphrase for id_rsa-theirPC, when I deactivated kwallet) before startkde being called, and thus the keyboard was unresponsive. Maybe because I have an exotic keyboard configuration. Anyway, the solution to that was to create an executable to be called after startkde, and the place to put these custom "daemons" is in ~/.kde/Autostart:
23:39[myPC:~] cat ~/.kde/Autostart/keychain.sh
# initialize SSH key management ("keychain" wrapper for ssh-agent)
# depends on a ssh_askpass (which for KDE is ksshaskpass)
keychain ~/.ssh/id_rsa-theirPC ~/.ssh/id_rsa-anotherPC
Keychain can also handle GnuPG signatures and several SSH private keys, like in the id_rsa-anotherPC example. Now, when I start a KDE session I am asked for my wallet password, which then offers my SSH passphrases whenever requested. To be precise, it is actually more nuanced: the program requesting the passphrase will trigger the "upper-level" program to either offer the passphrase or ask another program for it (like in the hierarchy ssh -> ssh-agent -> keychain -> ssh-askpass -> kwallet).
Some general comments:
- your private keys should not be accessible or readable by anybody but you ("chmod 600 /home/leo/ssh/id_rsa-theirPC");
- the passphrase can have spaces, punctuation marks etc. So you can mix passwords, but avoid literature quotes! The larger the better;
- avoid leaving private keys on your remote hosts, that you are unlikely to use to access other machines;
- do not share private keys among machines, these should be present only in the machines you physically access (in my case these are my desktop and my laptop, which have different configurations);
- you can change the passphrase later with "ssh-keygen -p id_rsa" (I haven't tried this, but I believe in the man page ;)
- within kwallet, you can see the passwords and passphrases of open wallets in plain text. I usually panic, but these wallets are themselves encrypted...
- I haven't tried, but it should be also possible to a) work with ssh-agent and ssh-add without keychain (launching them before the KDM or GDM display managers, or even in Autostart); b) do not use kwallet/ksshaskpass, by using the original x11-ssh-askpass or on the terminal; c) some other killer-app that I don't know about yet. | s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510754.1/warc/CC-MAIN-20181016134654-20181016160154-00510.warc.gz | CC-MAIN-2018-43 | 6,723 | 33 |
http://linuxaudio.org/mailarchive/lau/2009/10/15/160717 | code | On Mon, Oct 12, 2009 at 12:42 PM, Carlos Sanchiavedraz
Yes, triggering recording of loops was impossible in some contexts.
At that time I did not require a very tight loop sync so it was good
> Anyway, really nice. Do you have the project available.
I am not sure what you mean by "project". Although I try to archive
as many performances as possible, I could not pinpoint today which
employed that gizmo. So I do not have any specific recordings. Else,
if you mean DIY project instructions, I never really compiled any. It
was just a spur of a moment thing when I ripped a joystick open,
bent-circuit it and fastened to the guitar with elastics. Not much
thought put into it, as long as it worked. The only documentation is
the set of pictures you saw on flickr.
> Talking about wiimote and MIDI, keep track of a project I'm involved that I
That's cool. I have been using the [cwiidmote] in Pd but it is good
to know that other alternatives exist, especially since I hack python
occasionally as well.
Linux-audio-user mailing list | s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657135080.9/warc/CC-MAIN-20140914011215-00341-ip-10-234-18-248.ec2.internal.warc.gz | CC-MAIN-2014-41 | 1,031 | 17 |
https://help.perfectfit.net/knowledgebase/parts-used-on-cut-tickets/ | code | Cut Tickets: Parts Used Button
Opens the “Part Used for Cut Ticket # –” window.
This window allows you to adjust allocations and enter raw material usage.
You do not have to receive an entire cut ticket to adjust Parts Used.
This window will calculate the percentage of the total units received to date.
Insert Total Used button
Allows you to remove Allocations and insert quantities Actually Used for this cut ticket. Enter a positive number in Remove from Allocation.
Click on the Load Used button to enter all allocations as Actually Used This Time. If you have only received part of the cut ticket, enter the quantity received in the box in the lower left part of window. The Load Used button will now enter this percentage of allocations as Actually Used.
Tracking partially completed Cut Tickets is much more complex than waiting until the Cut is Complete.
Reports > Parts ..Used
Shows Number of Units Received and Average Cost per Unit.
Parts are subtotaled by Part Category
Search by Season: Season as defined on the Cut Ticket window.
Cut Ticket > Parts Used > Print
This report has same information as above report, but also shows margin and percent profit, based on the Level 1 Sell Price of the first Style on the Cut Ticket.
Use the Parts button and Description button at the top of the window to change the sorted order of the list. | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100602.36/warc/CC-MAIN-20231206162528-20231206192528-00497.warc.gz | CC-MAIN-2023-50 | 1,352 | 16 |
http://sysadmin3.blogspot.com/2015/ | code | Step 1: Create public and private keys using ssh-key-gen on local-host
goldenjohn@local-host$ [Note: You are on local-host here]
goldenjohn@local-host$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/goldenjohn/.ssh/id_rsa):[Enter key]
Enter passphrase (empty for no passphrase): [Press enter key]
Enter same passphrase again: [Press enter key]
Your identification has been saved in /home/goldenjohn/.ssh/id_rsa.
Your public key has been saved in /home/goldenjohn/.ssh/id_rsa.pub.
The key fingerprint is:
Step 2: Copy the public key to remote-host using ssh-copy-id
goldenjohn@local-host$ ssh-copy-id -i ~/.ssh/id_rsa.pub remote-host
Now try logging into the machine, with "ssh 'remote-host'", and check in:
to make sure we haven't added extra keys that you weren't expecting.
Note: ssh-copy-id appends the keys to the remote-host’s .ssh/authorized_key.
Step 3: Login to remote-host without entering the password
goldenjohn@local-host$ ssh remote-host
Last login: Sun Nov 16 17:22:33 2008 from 192.168.1.2
[Note: SSH did not ask for password.]
goldenjohn@remote-host$ [Note: You are on remote-host here]
The above 3 simple steps should get the job done in most cases.
That's it, please carry on... with smiles. | s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251728207.68/warc/CC-MAIN-20200127205148-20200127235148-00099.warc.gz | CC-MAIN-2020-05 | 1,222 | 21
https://forum.chirpstack.io/t/lora-gateway-os-joinaccept-is-not-received/4282 | code | I have only been experimenting with LoRa for a few days and still have a few problems. Thanks to the good documentation I was able to get the gateway up and running. Now I would like to connect my first device for testing…
- Gateway: Raspberry Pi 3 Model B+ with iC880 and LoRa Gateway OS
- Device:Arduino Wan MKR 1300
I used a simple Arduino example script (“hello world”) to activate the device.
In the Live Frames tab of the LoRaServer for the Device you can see the Join Request in the UPLINK. Subsequently, the gateway also sends a JoinAccept in the downlink. However, the JoinAccept is not received by my device and it continuously sends new Join Requests.
I am still very inexperienced with LoRa and can’t get any further at this point. I hope you can help me. In the following I have added my current gateway config.
Thanks a lot!
Current Gateway Configuration:
- Device-status request frequency: 0
- Minimum allowed data-rate *: 0
- Maximum allowed data-rate *: 0
- LoRaWAN MAC version: 1.1.0
- LoRaWAN Regional Parameters revision *: A
- Max EIRP *: 0
Device supports OTAA: true
Device supports Class-B: false
Device supports Class-C: false
- device name: MKR1300
Device EUI: a8610************* (–>from MKR 1300)
Disable frame-counter validation: true
Network key (LoRaWAN 1.1) *: dc a5 ****************** (MSB)
Application key (LoRaWAN 1.1) : 2a 32 *************** (MSB)
This device has not (yet) been activated.
LIVE LORAWAN FRAMES: | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644907.31/warc/CC-MAIN-20230529173312-20230529203312-00422.warc.gz | CC-MAIN-2023-23 | 1,461 | 24 |
https://developers.google.com/style/cross-references?hl=en-GB | code | In general, cross-references link to nonessential information that adds to the reader's understanding.
References to other documents
- Give an explanation as to why you are referring the reader to this information.
Recommended: For more information, see the auth guide.
- Use meaningful link text.
Recommended: To begin coding right away, read Building Your First App.
- If a link downloads a file, the link text needs to indicate this action as well as the file type.
Cross-references within generated reference documents
When linking from one reference topic to another in generated reference documents, use the reference generator's standard linking syntax rather than hard-coding links within the reference, so that the links will change appropriately when the reference docs change.
When a cross-reference is a link, don't put the link text in quotation marks.
Recommended: For more information, see Meet Android Studio.
Recommended: Learn about what's new in Android Wear 2.0.
In the rare case when a cross-reference isn't a link, use quotation marks.
Recommended: For more information, see "Describing system versions," below.
For cross-references that are titles of published works such as books or movies, use italics and title caps but no quotation marks.
Use an external link icon to indicate that the link opens in a new window or tab. For more information, see External link icons.
Recommended: For more information, see Make link text meaningful. | s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575751.84/warc/CC-MAIN-20190922221623-20190923003623-00196.warc.gz | CC-MAIN-2019-39 | 1,447 | 17 |
https://vevurka.github.io/categories/dsp17 | code | Some thoughts on my participation in DSP17 contest
Overview on SAT and QBF
Short overview on P, NP and PSPACE sets of decision problems
Setting up Travis CI for github repo in Python
Quick overview of Travis CI possibilities
What to do if you want to stop kafka consumer properly?
About some issues you may have with Django test framework
One of my git stories
About filtering and managing multiple third party libraries in Django
They happen all the time…
Pagination in Django works great!
How to start with code review?
I’ve almost reinvented a wheel again, because tagging libraries for Django suck!
About resources from which you can learn how to hack.
Why should I use ModelForms for models in Django?
How to represent graph structure as matrix in Python?
About finding the best way to use bootstrap in Django Forms.
How to represent graph structure in Python?
About adding first django template to my urls box - frisor.
TDD or not TDD
Django application set up.
About my dsp17 project | s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247486936.35/warc/CC-MAIN-20190218135032-20190218161032-00022.warc.gz | CC-MAIN-2019-09 | 994 | 22 |
https://forums.larian.com/ubbthreads.php?ubb=showthreaded&Number=714205 | code | So how ARE you people embedding images? (Some great designs so far. )
Once you have a screenshot of your character, you go to something like Imgur, an image uploader. Upload your image and then "open image in new window" to get the .png or .jpg URL. Then on the forums here you use the "image tag" in the full editor and put in the URL. Boom, picture. | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945144.17/warc/CC-MAIN-20230323100829-20230323130829-00515.warc.gz | CC-MAIN-2023-14 | 348 | 2
https://community.wd.com/t/firmware-problem-or-issues-with-wd30efrx-68euzn0-on-mac/238788 | code | I support the SoftRAID for Mac software and am trying to help a mutual customer, with lots of WD drives, but a set of 4 that do not work in the Mac.
SoftRAID creates RAID volumes in Mac OS X, and a limited version of the SoftRAID driver is bundled with all OS X installs, so the driver passes all Apple QA testing as a kernel extension. There are no compatibility issues with SoftRAID and OS X. We have thousands of users who use various WD drives with SoftRAID.
The only significant differences between OS X (disk utility) scanning the drive and SoftRAID, is SoftRAID sends multi-threaded queries to the drive and does a SMART query. I had the user disable the SMART query, so that should not be the issue.
The problem is specific to WD30EFRX-68EUZN0 drives.
The firmware version on the problematic drives (all 4 of them) is 07.01E.8
I have isolated the problem to the specific firmware on one of his sets of drives, version 07.01E.8.
Drive models with this firmware do not work in the Mac with SoftRAID. The disks hang when queried.
He has several 2014 models of this drive that work fine and several 2018 versions that work.
WD30EFRX-68EUZN0 drives manufactured in 2018 have multiple firmware versions that work:
The firmware versions are 82.00A82 (2015) and 80.00A80 (2015). No problems at all.
The user is in Italy, so it is not easy for me to obtain the specific drive to debug this.
My questions are:
Is there a firmware updater for this model drive that may resolve this problem?
(needs to run on a Mac, or an ISO thumb-drive image that can start up his Mac.)
Is there anywhere I could purchase this specific drive AND firmware version to test it?
(or get a loaner from WD)?
Once we obtain the drive, we will figure out the problem and report it to WD, so a contact would be useful, or I could post it to this thread at that point. | s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038921860.72/warc/CC-MAIN-20210419235235-20210420025235-00418.warc.gz | CC-MAIN-2021-17 | 1,831 | 17 |
https://aakjaer.co/archive/printix/ | code | In an industry where change and innovation is scarce, Danish start-up Printix is on a mission to replace traditionel business print infrastructure with a cloud-based SaaS solution.
I was initially hired to give Printix's public website a make-over, but Printix was in dire need of UI, UX and templating experience, so I ended up focusing on redesigning all of their existing web applications: An administrative system to manage the print environment in, an app for the end-user to handle and release their printed documents from and a couple of minor apps for technicians etc.
The old client software installation process turned out to be one of the main causes of low conversion rates, and was therefore heavily improved, focusing on giving the user clear indications of the process and remaining time, and adding a familiar app look'n'feel to the design.
The process of designing and implementing dashboards is often challenging, because of the complexity and the many use-cases it has to cover.
This too was a great exercise in user-needs vs. actual available data and involved people from all areas of the company.
Keeping things clean
A common problem when designing applications that grow rapidly in complexity is to maintain a UI that is simple and easy on the eye.
I purposely kept the design very light with only a few colors while minimising the number of elements on each page to allow for an inevitable growing feature set.
Describing heavy technical concepts like virtual print environments in a UI eventually takes a lot of forms to tweak the many settings and options.
The forms were kept clean with scalabillity in mind and in the process I wrote a couple of custom components (fx. checkbox/switch) to improve the experience on mobile.
Get in touch
Whether you have ideas for a project or just feedback, feel free to send me an e-mail. | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662577757.82/warc/CC-MAIN-20220524233716-20220525023716-00148.warc.gz | CC-MAIN-2022-21 | 1,855 | 12 |
http://stackoverflow.com/questions/14358425/how-can-i-execute-a-native-sql-script-in-jpa-hibernate | code | I have a SQL script with database dump. How can I execute it using Hibernate's
I tried it this way:
EntityManager manager = getEntityManager();
Query q = manager.createNativeQuery(sqlScript);
q.executeUpdate();
but it works only when sqlScript contains a single SQL query, while I need to run multiple inserts and other complex stuff.
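One possible workaround (a sketch, not from the original question): since createNativeQuery() expects a single statement, split the script and execute the pieces one by one inside a transaction:

EntityManager manager = getEntityManager();
manager.getTransaction().begin();
for (String statement : sqlScript.split(";")) {
    if (!statement.trim().isEmpty()) {
        // each piece must be a single SQL statement
        manager.createNativeQuery(statement).executeUpdate();
    }
}
manager.getTransaction().commit();

(Naive splitting on ';' breaks on semicolons inside string literals or PL/SQL blocks, so it only suits simple dump scripts.)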
RDBMS: Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064869.18/warc/CC-MAIN-20150827025424-00013-ip-10-171-96-226.ec2.internal.warc.gz | CC-MAIN-2015-35 | 421 | 6
https://speechcentral.net/2023/01/13/can-text-to-speech-apps-help-in-cases-of-dyslexia-or-adhd/ | code | As someone who has been the developer of one such app for more than a decade I’ll try to provide the answer based on my experience which was acquired in contact with many such users and also by reading various materials on this topic.
Text-to-speech apps may help those groups in two ways:
- by providing the text in the audio form which may be more accessible to those users
- by providing the text in a more readable form than the original.
The first thing to note in drawing the conclusion is that both Dyslexia and ADHD are syndromes. As such they don't work the same way for everyone. There is also no universal answer for everyone, and in some cases text-to-speech apps might not be helpful at all, but in my experience the success rate is fairly high and it is reasonable to take some time to see if this method helps.
An even more important conclusion is that one setup of the text-to-speech app may work for some people with those syndromes, but it may not work for others. As such, finding the right solution is a process of testing different setups. It may take some time to find the one that works for you, but when you find it, it is more than rewarding.
However not all text-to-speech apps are customizable to the same extent. As such it is a good idea to first find the app that offers the widest range of such options and then to try the find the best configuration in it.
Speech Central was always made with this in mind, and I think that it offers generally the widest set of configuration options. You can see more on this fact sheet that compares features across popular text-to-speech apps (Speech Central, Voice Dream Reader, Speechify). | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500126.0/warc/CC-MAIN-20230204110651-20230204140651-00603.warc.gz | CC-MAIN-2023-06 | 1,645 | 8 |
http://www.wakeworld.com/forum/showthread.php?t=801145&page=999 | code | Has anyone else noticed that full length releases or full length web releases are back in a big way? I feel like I've noticed a small drop in banger web edits over the past season and catch myself watching older web edits from 11 and 12.... But with that said things may have slowed down because we have coming up.... Prime, drop the gun, Waffle House, quiet please, and Al Sur all on deck.
What do you guys think. Is the full length flick back to stay? Are you happy or upset about this? Is a video part every year more important than 2 minutes of clips?
Last edited by simplej; 02-11-2014 at 7:01 AM. | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120206.98/warc/CC-MAIN-20170423031200-00207-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 602 | 3 |
https://www.nativewind.dev/core-concepts/quirks | code | NativeWind aligns CSS and React Native into a common language. However the two style engines do have their differences. We refer to these differences as quirks.
React Native has various issues when conditionally applying styles. To prevent these issues it's best to declare all styles.
For example, instead of only applying a text color for dark mode, provide both a light and dark mode text color.
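For example, a minimal sketch (the component name is illustrative):

import { Text } from "react-native";

// Declare both variants up front instead of only the dark-mode color.
export function Greeting() {
  return <Text className="text-black dark:text-white">Hello</Text>;
}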
dp vs px
React Native's default unit is density-independent pixels (dp) while the web's default is pixels (px). These two units are different, however NativeWind treats them as if they are equivalent. Additionally, NativeWind's compiler requires a unit for most numeric values, forcing some styles to use a px unit where the web would not require one.
React Native uses a different base flex definition to the web. This can be fixed by adding flex-1 to your classes, which forces the platforms to align.
React Native uses a different default flex-direction to the web. This can be fixed by explicitly setting a flex-direction.
http://developer.blackberry.com/cascades/reference/device_and_communication_phone.html | code | Provide support for phone communication in your apps.
You can initiate calls and perform other phone-related operations by using the classes in this category. You can listen for incoming calls, place outgoing calls, and retrieve detailed information about a call, such as the ID, call state, and call type.
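For instance, placing an outgoing call takes only a couple of lines (a sketch; the phone number is a placeholder):

#include <bb/system/phone/Phone>

bb::system::phone::Phone phone;
// Initiate a cellular call to the given number.
phone.initiateCellularCall("519 555 0100");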
Libraries and permissions
To link against these classes, add the following line to your .pro file:
LIBS += -lbbsystem
You must also specify the following permissions in your bar-descriptor.xml file:
To download a code sample that demonstrates how to support phone communication in your apps, visit the Cascades Samples repository in Github. | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705284037/warc/CC-MAIN-20130516115444-00005-ip-10-60-113-184.ec2.internal.warc.gz | CC-MAIN-2013-20 | 647 | 7 |
https://www.ibizsoftinc.com/blog/setting-up-custom-qualifiers-for-istore/ | code | Setting up Custom Qualifiers for iStore
Advanced Pricing in Oracle Ebusiness Suite has provided the functionality to allow us to set up custom qualifiers based on the attribute values in an active cart. Cart header information resides in the aso_quote_headers_all table and the line information resides in the aso_quote_lines_all table.
Based on any of the 15 attribute values in these tables, one can manipulate the price that shows up in a cart. For example, if the price is driven by a value derived from a hard-copy catalog (one of our customers had this requirement), we provided a text box for the user to enter the catalog code value and click on 'Update Cart'; this would display a new price based on the attribute mapping set up for this catalog code. This setup is done via the Oracle Pricing Manager responsibility using the Attribute Management functionality, as discussed below. Note: We have used Minisite Site Id as our qualifier in this example. | s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999817.30/warc/CC-MAIN-20190625092324-20190625114324-00090.warc.gz | CC-MAIN-2019-26 | 936 | 3
https://github.com/elastic/kibana-docker | code | To build Kibana docker images for pre-6.6 releases, switch branches in this repo to the matching release.
This repository has been archived by the owner. It is now read-only.
| s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655888561.21/warc/CC-MAIN-20200705184325-20200705214325-00242.warc.gz | CC-MAIN-2020-29 | 332 | 4
https://www.subatomicglue.com/secret/eventhorizon/readme.html | code | event horizon - lua/c++ event system
- an event system useful for games (especially with a lua scripting component)
- event notification mechanism compatible with both lua and c/c++ method callbacks
- each event notification can deliver a packet of information (a data payload)
that may be used as contextual data for the event.
- this payload mechanism brings much variability to each individual event type
- event types are defined as fast memory-aligned short-strings (128 bit), and is up to the user to uniqify
the eventsystem operates with a command metaphor.
for each event, there is a data packet payload that travels alongside.
we can think of the event as the "command", and the payload as the
"arguments" to the command.
For our examples here, our payload is called "TestStruct", which is simply
a c++ struct containing the arguments for the command:
struct TestStruct
{
   // we'll fill in the details of this struct later...
   arg1, arg2, my3rdArg;
};
in c++, events are triggered like this:
trigger( "fire", TestStruct( 112358, 3.14159265f, "pi is good to taste, please use enough cornstarch" ) );
in lua, events are triggered like this:
trigger( "fire", 987654321, 0.112358, "cake is for feasting, 33 times this morning" )
To recieve event notification upon an event trigger, we register a callback:
c++ callbacks have this signature, and may be global or member functions:
void trig( const void* arg )
{
   // NOTE: inside, you then cast the "arg" to the payload
   // struct containing the list of args:
   const TestStruct* args = (const TestStruct*)arg;
}
lua callbacks have this signature, and deals with real arguments directly:
function trig( arg1, arg2, my3rdArg )
Registration of a callback is supplied with:
1.) a function
2.) a description of the arguments.
c++ callbacks are registered like this:
reg( "fire", EventCallback( &trig, TestStruct::getdef() ) );
lua callbacks are registered like this:
reg( "fire", trig ) --description supplied by TestStruct in c++
we can also register [obj,member] callbacks in both c++ and lua:
reg( "fire", EventCallback( &obj, &MyClass::trig, TestStruct::getdef() ) );
reg( "fire", trig, obj )
finally... you're probably wondering about getdef()
getdef() is what enables the magic to happen to translate arguments from lua->c++ and c++->lua
Only basic types (int, bool, float, string) are supported as members.
struct TestStruct
{
   TestStruct( int b, float f, const char* o )
   {
      beans = b;
      frys = f;
      safeStrCpy( oompas, o, sizeof( oompas ) );
   }

   /// describe the data packet
   static packet& getdef()
   {
      static packet o =
         packet_( "TestStruct" )
            .add( "beans", &TestStruct::beans )
            .add( "frys", &TestStruct::frys )
            .add( "oompas", &TestStruct::oompas );
      return o;
   }

   int beans;
   float frys;
   char oompas[128]; // fixed-size string buffer (size here is illustrative)
};
Thanks to Don Clugston's awesome FastDelegate code for genericizing
function pointers of any kind.
event horizon - lua/c++ event system demonstration
Copyright (c) 2006 kevin meinert all rights reserved
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA | s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573476.67/warc/CC-MAIN-20190919101533-20190919123533-00361.warc.gz | CC-MAIN-2019-39 | 3,528 | 66 |
https://www.standblog.org/blog/post/2013/01/29/Firefox-OS-App-Days-in-Paris | code | Over the week-end, More than 150 people gathered in an engineering school classroom to learn, hack and celebrate:
- Learn about HTML5 and Web applications.
- Hack such applications for Firefox OS (and Firefox for Android) with the assistance of Mozilla hackers.
- Celebrate that we're changing the world with what could become a universal mobile application platform.
Mozillians (paid staff and volunteers) gave talks about mobile development with HTML5 on Firefox OS, then the hacking session started, with Mozillians helping those who wanted. In the meantime, updates on Twitter connected us with the 25 or so other App Days events taking place around the world.
At the end of the day, 36 applications were ready to be demoed, and the authors of the best demos have received a voucher for the upcoming and very cool Firefox OS developer preview phones. Of course, everyone has been handed a T-shirt!
I would like to thank all the people who helped making this amazing event possible. I won't name names because I would surely forget someone, but you know who you are. The event was a blast and it demonstrated the hunger for an Open Web mobile platform like Firefox OS! | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100227.61/warc/CC-MAIN-20231130130218-20231130160218-00663.warc.gz | CC-MAIN-2023-50 | 1,171 | 7 |
https://www.justinmind.com/community/topic/usernote_browser_compatability | code | Usernote browser compatability
I'm trying to run my prototype in Usernote but I can't seem to select from my "select lists" in Internet Explorer and Firefox. The values for the lists populate just fine but I just can't select one. I've tried running the prototype from Safari and from an iPhone and it seems to work just fine. It seems to either be a browser issue with IE and Firefox or a Windows problem. I've tried it on multiple computers with and without firewall/antivirus. Thanks for any help.
https://forums.meteor.com/t/local-build-run-works-galaxy-deploy-issue-with-specific-package/34280 | code | Hi, I need some help please. I am running Meteor [email protected]
I am using a 3rd party NPM package https://www.npmjs.com/package/personality-sunburst-chart
It runs fine locally, I can render the Sunburst charts. Meteor build locally and it succeeds.
When I try and deploy to Galaxy, I get the following error https://github.com/personality-insights/sunburst-chart/issues/20
I see issues like https://github.com/meteor/meteor/issues/5517 (phew, long thread…) but I'm trying to work out where the issue is.
Any help appreciated! I'll log it with Galaxy Support too, but I expect they'd say this issue is relevant to the package. | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662541747.38/warc/CC-MAIN-20220521205757-20220521235757-00738.warc.gz | CC-MAIN-2022-21 | 639 | 6
https://support.esri.com/en/technical-article/000011730 | code | FAQ: Under what circumstances would the Spatial Join tool choose the wrong features to satisfy a spatial relationship?
Under what circumstances would the Spatial Join tool choose the wrong features to satisfy a spatial relationship?
There are several factors that could contribute to the Spatial Join tool producing unexpected output.
Tolerance
A likely cause for unexpected output of a spatial join is that the tolerance of the data is too large for the resolution. This causes features to be considered 'touching,' 'within,' 'identical to,' 'intersecting,' or to have some other relationship that is not actually true when the data are viewed at a very large scale, such as 1:10, 1:1, or 1:0.1.
A recommended best practice is to set the tolerance to ten times the resolution. Tolerance is the distance within which vertices are considered coincident.
If the spatial join is done with ArcMap's Join command, joining data from another layer based on spatial location, examine the coordinate system of the two layers and of the map dataframe to be sure they are compatible as described above, and that any necessary geographic transformation has been set in the dataframe properties. Data Conversion Factors such as tolerance and resolution cannot be changed in an existing dataset. Create a new feature class with the desired resolution and tolerance. There are a number of tools that can load data from the original feature class into a new one: - The Object Loader in ArcMap - The Simple Data Loader in ArcCatalog or the Catalog window in ArcMap - ArcToolbox > Data Management Tools > General > Append
For comparing spatial properties such as distance and position, it is recommended that a projected coordinate system (with linear units of feet, meters, etc.) be used, rather than a geographic coordinate system (with angular units of degrees).
Last Published: 5/5/2016
Article ID: 000011730
Software: ArcGIS for Desktop Advanced 10.1 ArcGIS for Desktop Basic 10.1 ArcGIS for Desktop Standard 10.1 ArcGIS-ArcEditor 9.3.1, 9.3, 9.2, 9.1, 9.0, 10 ArcGIS-ArcInfo 9.3.1, 9.3, 9.2, 9.1, 9.0, 10 ArcGIS-ArcView 9.3.1, 9.3, 9.2, 9.1, 9.0, 10 | s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741764.35/warc/CC-MAIN-20181114082713-20181114104713-00541.warc.gz | CC-MAIN-2018-47 | 2,137 | 10 |
http://helmersblog.nl/tag/windows-8/ | code | Windows 8 Archive
The Surface Pro is available in several countries at the moment, in a couple of days
On November 7th 2012 Microsoft Netherlands had invited 8 Dutch experts to share their experiences with
Windows 8 doesn’t only bring the Windows 8 Interface (formerly known as Metro), but also a
When you use Windows 8 in the Enterprise I can imagine that you (the IT Administrator)
Update (Oct 17th) : Around 14:00 (CET) today I read on Tweakers that all the beta-slots are filled up.
On October 5th Steven Sinofsky announced that the built-in Apps for Windows 8 will receive an | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705953421/warc/CC-MAIN-20130516120553-00006-ip-10-60-113-184.ec2.internal.warc.gz | CC-MAIN-2013-20 | 582 | 7 |
https://support.squarespace.com/hc/en-us/articles/360025606831-How-blog-tags-and-categories-appear-in-search | code | Understand how search results display filtered pages of the tags and categories added to blog posts.
Each tag and category you add to a blog page automatically creates a page of filtered results. Links to these filtered pages can appear in search results.
Filtered page URLs
When you click a tag or category link on your site, the filtered page that appears has a static URL, like this:
examplesite.com/blog/tag/tag-name
- Tags and categories are case-sensitive. For example, if your category is Test, its URL would be examplesite.com/blog/category/Test.
- This URL appears in your site map, and can appear in search engine results, depending on what visitors search for.
- You can hide static URLs from search and your sitemap.
- Currently, only blog pages have static URLs. All other collections, and blog pages in discontinued templates, have dynamic URLs.
Old links may have dynamic URLs
In the past, filtered blog pages had dynamic URLs, like this:
examplesite.com/blog?tag=Test
- Dynamic URLs are less likely to rank highly in search results.
- If you linked to dynamic URLs in the past, the links will continue to work. The filtered page of results is the same as the static URL. The only difference visitors see is the URL in the browser bar.
- If you're linking to a tag or category filter on your site or another platform, we recommend using the static URL. This version is easier for visitors to understand, and linking to it may have a small positive impact on SEO.
Hide static URLs
To hide filtered pages' static URLs from search engines and remove them from your sitemap:
- Open page settings for that blog page.
- Click the SEO tab.
- Use the Hide from search engines section to choose your settings. To hide the blog landing page and blog posts from search engines as well, check All pages in this collection.
By default, tags and categories are visible.
Are filtered pages seen as duplicate content?
While search engines sometimes flag duplicate content that looks malicious or deceptive, like identical pages on different domains, filtered views of tags and categories are a regular feature on the web, and aren't seen as suspicious. We also add special code (a canonical metadata tag) to tell search engines that the static URL is the one they should index.
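If you want to check this yourself, a tiny script can fetch a filtered page and print the canonical URL it declares. This is just a sketch; the domain and category are placeholders:

    import re
    import urllib.request

    url = "https://examplesite.com/blog/category/Test"
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")

    # Look for the canonical metadata tag; attribute order can vary between pages.
    match = re.search(r'<link[^>]*rel="canonical"[^>]*href="([^"]+)"', html)
    print(match.group(1) if match else "no canonical tag found")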
For more information on duplicate content, visit Google's documentation. | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473824.13/warc/CC-MAIN-20240222161802-20240222191802-00231.warc.gz | CC-MAIN-2024-10 | 2,285 | 22 |
https://carterbancroft.com/review-your-own-pull-requests/ | code | Review Your Own Pull Requests
Here's what I mean:
Before you request a review from someone else you should go over it yourself and fix as much as you can. While you're at it, add clarifying comments wherever necessary to help your guardian angel out.
I'll elaborate, but first... Why do we do code reviews?
To me, it's mostly a gut check. It's so someone who hasn't been staring at this nonsense for a week straight can tell you if you're doing something insane (because you probably are). Someone who has different (possibly more) perspective may notice things you never would, and often a review is the last line of defense before pushing the glowing embers of a dumpster fire to production.
You may be too close to this code to judge it fairly, but you aren't so close that you can't catch the stupid mistakes. The amount of other people's time I've wasted due to typos, copy pasta and leaving commented code lying around is... well, it's a lot. This stuff is easy to spot, and if you give your PR a good once-over yourself you'll be surprised at how much you catch.
So what exactly am I advocating here?
Step 1: Scan the code in your PR exactly as a reviewer who isn't you would.
I mean, do all this in the GitHub (or Bitbucket or GitLab or whatever) review flow, just like your reviewer will. This allows you to see changes as they will see them, which makes you more capable of filling the role yourself.
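If it helps to do that first pass with fresh eyes outside the web UI, the hosted services expose the same diff over their APIs. Here's a minimal sketch against GitHub's REST API (OWNER, REPO, and 123 are placeholders; a private repo would also need an Authorization header):

    import urllib.request

    req = urllib.request.Request(
        "https://api.github.com/repos/OWNER/REPO/pulls/123",
        headers={"Accept": "application/vnd.github.v3.diff"},  # ask for the raw diff
    )
    print(urllib.request.urlopen(req).read().decode("utf-8"))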
Step 2: Commit fixes for any problems you find.
Step 3: Add comments to specific changes clarifying why you did what you did.
You don't have to do this for every single line but it's not difficult to guess what might need some explainin'.
I picked up this commenting habit from a long time co-worker of mine. I loved reviewing his thoughtful, well commented PRs and I'm ashamed to admit that it was years before I even once tried it myself.
And, that's it.
Code review isn't easy and time is precious; it's awful nice of your colleagues to donate it. Be kind to them. Before you hand over that PR, review it yourself.
https://www.arashahmadi.ca/wp/ | code | I am an Electrical Engineering graduate from the University of New Brunswick with a focus on instrumentation and controls. I have experience in designing and developing electrical systems.
This website showcases my skills, experiences, interests, and activities.
You can get in touch with me using the Contact link at the top of this page. I will be happy to hear from you. | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585507.26/warc/CC-MAIN-20211022114748-20211022144748-00245.warc.gz | CC-MAIN-2021-43 | 373 | 3 |
http://alho.online-officejob.de/simple-webdav-server.html | code | Mozilla Thunderbird supports CalDAV with a specific add-on called Ligntning that will need to be added. OpenKM is an Enterprise Content Management Software, often referred to as Document Management Systems (DMS). WebDAV servers can be difficult to set up. sabre/dav supports a wide range of internet standards related to these protocols. Create a virtual directory in IIS that points to the filesystem directory. This update enables administrators to configure the IIS 7. host uri specifies the host name or IP address of the host or target running the WebDAV server. iCal is a personal calendar application made by Apple Inc. It can be run as a daemon. Support for WebDAV Add a feature. If your request handling code forks you need to make sure you reset this or unexpected things will happen if somebody sends a HUP to all running processes spawned by your app (e. There are a few other settings as well if you want to configure them. 50 thoughts on “ WebDAV Detection, Vulnerability Checking and Exploitation ” Reply. com account as a WebDAV share, so you can mount your Box. Simple HTTP Server with Python. For Advanced Users: Using FTP, SFTP and WebDAV to access your files within other apps 10th June 2011 by Robert Category: News , Tips and Tutorials Comments Livedrive have always believed that one of our tasks is to make your online data as accessible as possible, so that you can use it in any way you like. The phone system does not allow us to add credentials for authentication purposes, only an url. If you never heard about WebDAV, this extension to the HTTP protocol allows creating, moving, copying, and deleting resources and collections on a remote web server. It's time to get the WebDAV out! The attack is fairly simple. With this app you can use WebDav for accessing files from multiple clouds in one WebDav view. WebDAV allows you to securely access files over the internet using SSL encryption from a remote server that is configured as a mapped drive on a local PC using Windows Explorer. simple webdav server. Connecting to WebDAV for working with portal themes and skins The portal contains the WebDAV service and enablement layer. The term "file share" in Windows Server is a bit of a misnomer. Don’t worry, the following article will show you a quite easy way to connect OneDrive and WebDAV no matter on your computers or phones, no matter in Windows or Mac, etc. Originally designed for the creation of tools which can be installed by users in content management systems and which are portable between servers from different vendors. If you're using Apache 2. Whatever i type, i get a blank page as return and an exception in the server:. Open Internet Information Services (IIS) Manager: If you are using Windows Server 2012 or Windows Server 2012 R2: On the taskbar, click Server Manager, click Tools, and then click Internet Information Services (IIS) Manager. Create a virtual directory in IIS that points to the filesystem directory. The WebDAV extensions should be designed to allow client implementations to be simple. WebDAV can be used to create, delete, change, and search items and folders in the Exchange Web Store. It contains a set of concepts and accompanying extension methods to allow read and write across the HTTP 1. CuteFTP from Globalscape does it all! Schedule transfers, regularly back up or synch your sites, monitor changes, easily drag & drop files for fast & easy file transfers. Here is the scenario. Compare WebDAV Hosting. 
With Time Machine, you can back up all the Mac computers on your network to OS X Server and protect valuable data. I use the same Python WebDav package in the post, but Dockerized it: The run. I looked at the list of opened ports (using nmap) and couldn't find which corresponds to WebDAV. Known issues and troubleshooting: Passwords with characters like @ will not work, try to install this using a simple account and password (A-Za-z0-9). It's the real cloud file server you've been looking for!. It’s written in PHP and can use SQLite or MySQL databases. > > Want to setup a webdav server using node. Figure 1, Create a UNC share to use with WebDAV. As an alternative to using the request() method described above, you can also send your request step by step, by using the four functions below. Windows PC and Windows Server 2012 use features built into Internet Information Services (IIS) to enable WebDAV. While there is a WebDAV client package for PHP, I ended up using a framework called sabre/dav. WebDAV CGI Setup The WebDAV CGI can be easier upgraded if you use a configuration file instead of changing the setup section of webdav. After selecting the Server click on "Next". Let's walk through the steps necessary to have WebDAV enabled on an IIS machine and then show how to publish and remotely modify content using WebDAV. After that, you can create a folder in your account, e. Our simple and secure software will ensure that you never lose your files. py generic WebDAV server, adds WebDAV methods to web_server above. You can also access your files from anywhere on any device. Few weeks ago I was looking how I can connect SAP XI/PI with WebDAV server. Activate it by putting @DEV_EXTENSIONS or 'Localize' to your @EXTENSIONS setup (see extensions doc) in your webdav. Answer, Resolution. EasyDAV is an easy-to-install WebDAV server. Watch Queue Queue. For more information on ftp synchronization or WebDAV synchronization, please visit our Synchronize Applications page. With the WebDAVPro application, you configure the access data for your WebDAV shares. Learn more download licensing. A Simple Java FTP Client Package. It’s reliable and easy to use, but there are plenty of potential problems if you’re starting from scratch. It does not matter whether you are a power user managing many WebDAV servers or just a beginner creating his first web site. 6 New TclHttpd 3. WebDrive includes a simple backup utility which allows you to backup the files on your workstation to any remote server that WebDrive is connected to. In order to develop a fully compliant and secure free WebDAV server, developers must thoroughly follow the WebDAV RFC 4918 Specifications. WebDAV Apps. With WebDAV File you can access your files on a WebDAV enabled server where ever you are. py abstract data types, named constants for all WebDAV properties, and definitions for all exceptions used in this program. WebDAV Server Examples in C# & VB. An FTP client program initiates a connection to a remote computer running FTP server software. Simple HTTP Server with Python. Configure the Samba daemon. pl for further information. Stack Overflow for Teams is a private, secure spot for you and your coworkers to find and share information. See CHANGELOG in your installation path and take a look into webdav. CentreStack approaches file server sync-and-share differently, by preserving NTFS permissions, folder structures and Active Directory user identities, with mapped drives and file locking. 5 to act as a simple WebDAV server pointing to a local share. 
The programs I'm suggesting are server software, just like Filezilla Server, but this is a web server what you're asking for (remember WebDav isn't a protocol on its own like FTP, but an application layer on top of the existing HTTP protocol). WebDAV and Nginx - Setting Up a Bulletproof WebDAV Server Isn't Easy. WebDAV for Exchange has been extended by Microsoft to accommodate working with messaging data. Webdav over http was always. Regardless of what you want to use, setting up a server for either of them is extremely simple, and we’ll walk through how to start either an FTP or SFTP server in OS X. Configuring Windows Server 2008 IIS7 for Avaya IP Phone HTTP File Server for Firmware, Settings and Backup/Restore Files Created by: Alex Morales Setup Local User for IP Telephone Basic Authentication (used for webdav write permission). Download Moon+ Reader Pro 3. WebDAV Server lets you run the HTTP / WebDAV service on your Mac computer and you can access the files from other computers / devices with WebDAV. Tiny HTTPd tinyhttpd is a relatively simple webserver I wrote for a school project. It enables you to map DriveHQ Cloud Storage as a network drive using the WebDAV protocol. It's 100% portable and just has you choose a username, password, port, and root path. This option also influences readLock=changed to control whether it performs a fast check to update file information or not. WebDAV Clients. Windows Server uses the Server Message Block (SMB. On CRX level, the CRX webapp needs to be reconfigured. This is a fully functional Class 2 WebDAV server that stores all data in file system. This app cost you one time $ 5. Read the full. It is also able to read GoogleDocs or IMAP. Simple but effective. This morning I heard (from the security-basics mailing list, of all places) that there's a zero-day vulnerability going around for WebDAV on Windows 2003. This application is extremely easy. Once mounted, users can create, access, edit, and collaborate on files using. The goal of this project is to provide a secure, efficient and extensible server that provides HTTP services in sync with the current HTTP standards. py Utilties for travsersing. Rumpus makes installing a full-featured file transfer server stunningly simple. From a simple file server to a. Please remember these are the minimal requirements and we don't recommend them for companies with over 30 users. Configuration Sample. First go to the Places menu and select Connect to Server… Select the Service Type - in my case SecureWebDAV (HTTPS) - and type the required information: After pressing the Connect button the window will close. I just applied the upgrade and enabled WebDAV for my share. Files on the remote server appear as if they were stored locally. A simple, secure and complete WebDAV Server! The app supports adding multiple users, has WebDAV over SSL/TLS (HTTPS) support and can be set to automatically start a WebDAV Server when your device is connected to a specific WIFI network!. However, they can be switched off either though the Control Panel or at the command line (net stop server at DOS prompt). A suitable sub title for this post might be, 'Tim, taking a tiny step forward after several days of misery is now a happy camper. Tiny HTTPd tinyhttpd is a relatively simple webserver I wrote for a school project. RFC 2518 WEBDAV February 1999 6. File Transfer Protocol (FTP) is a simple network protocol based on IP, which allows users to transfer files between computers on the Internet. Even though DSM 6. 
Since some FTP server may not support to list the file directly, if the option is false, camel-\ftp will use the old way to list the directory and check if the file exists. Windows Server uses the Server Message Block (SMB. You can find the instructions from Microsoft here. Test File Opened from WebDAV Server via File Explorer. This quick guide will get you set up with a secured simple WebDAV server using Ruby and Rack. Upload and download large files for easy sharing. This protocol permits a WebDAV client to discover what principal (user) the server thinks is currently authenticated. WebDAV is a distributed web authoring implementation built into HTTP that allows you to easily share files and work collaboratively with others. 1 (webdavsystem. On your file share server, share the folder you wish to make accessible using WebDAV. BitKinex FTP Client Internet & Networking - FTP, Freeware, $0. Q&A for Work. Free WebDAV Server – What you need to know first. Now, long ago we’d call Transmit an “FTP client”, but today, with Transmit 5, we connect to lots of different server types and cloud services. Simple Webdav Server. Mac OS X has excellent support for WebDAV built right into the operating system. WebDav Data Integration. TLS support - if needed. GitHub Gist: instantly share code, notes, and snippets. SimpleFTP is a Java FTP client package that lets you connect to FTP servers and upload files. For example, a WebDAV Compliance Test is a unit test used to confirm if a WebDAV server, for example WebDAV server 110, is compliant with the WebDAV standard. It enables me to manage medias, file synchronization, and remote desktop/files access. Free SMTP Server - Communications/E-Mail Clients Free SMTP Server is a SMTP Server program for Windows that lets you send email messages directly from your computer. Moon+ Reader Pro Innovative book reader with powerful controls full functions: – Read thousands of ebooks for free, supports online ebook libraries – Read local books with smooth scroll and tons of innovation Support epub, mobi, chm, cbr, cbz, umd, fb2, txt, html, rar, zip or OPDS, key features:. For businesses, it works like a Cloud file server. ' Isn't it amazing in the world of software and development how: 1. I just spent the past 2 days attempting to find an inexpensive, simple solution for handling the Outlook calendar publish feature. If your request handling code forks you need to make sure you reset this or unexpected things will happen if somebody sends a HUP to all running processes spawned by your app (e. webdav - Simple Go WebDAV server. It provides Extended SQL, SSL, basic authentication, prepared statement, query cache, WebDAV and table level access control. OS X Server also takes advantage of Time Machine to back up your server data — including shared files, calendars, mail, wikis, and more — to another hard drive, so you can easily restore. Enter the appropriate username and password in the authentication box that shows up, and you're done. 4 Web Application Server is started. Microsoft Windows 7 through 10 and Server 2008 through 2016 allow for the use and mapping of storage through WebDav. For MS Exchange interoperability, WebDAV can be used for reading/updating/deleting items in a mailbox or public folder. But it's for certain no easy to use solution for universities with thousands of teachers. On the Select Features page, click Next. WebDAV (Web Distributed Authoring and Versioning) allows clients to perform remote Web content authoring operations. 
This application is extremely easy. The server configuration properties for Spotfire Statistics Services are contained in the file spserver. On the Select Role Services page, expand Web Server (IIS), expand Web Server, expand Common HTTP Features, and then select WebDAV Publishing. The Web Distributed Authoring and Versioning (WebDAV) protocol is currently the main method for accessing SAS Content Server. It helps to schedule uploads, downloads, backups, and synchronize safely with easy, no matter whether you are a power user managing many WebDav servers or just a beginner creating his first site. Simple camera application for iOS that uploads pictures to WebDAV server or Dropbox quickly. Stack Overflow for Teams is a private, secure spot for you and your coworkers to find and share information. It is possible to mount a drive from one of these operating systems to one or several remote computers. The file server uses WebDAV which is a set of extensions to the HTTP(S) protocol that allows a web server to appear as a standard network drive. For example, you can share Word or Excel documents with your colleagues by uploading them to your WebDAV server. WebDAV Server lets you run the HTTP / WebDAV service on your Mac computer and you can access the files from other computers / devices with WebDAV. 0, enabling mod_dav support (the module used to provide WebDAV functionality) is as simple as compiling Apache 2. FREE! ISyncU Simple File Synchronization BETA! Sync files between your local hard drive and a mapped path, FTP Server of WebDAV Server. 5 WebDAV module to store WebDAV-based properties in NTFS alternate data streams instead of properties. Enter the appropriate username and password in the authentication box that shows up, and you're done. This value only represents a request and the server can either ignore the request, or inform the client of an arbitrary value. Non-system processes like webdav_simple_prop. Our simple and secure software will ensure that you never lose your files. How to Set Up a Quick, Simple WebDAV Server for Remote File Sharing Step 1: Get wsgidav. If the server was not a WebDAV server, the response will be empty. DriveHQ is one of the largest FTP Server Hosting service providers; we have offered WebDAV Cloud Drive Mapping service for over a decade. that allows anonymous access. Multi-user file lock for Microsoft Office files. This article pertains to MDaemon version 15. by Robert McMurray. WebDAV folders can be mounted as virtual filesystems under Windows, Linux and Mac OS X, as well as accessed through separate clients (such as cadaver and CyberDuck) or through the built-in web browser interface. Due to the fact that CQ5. The first step to syncing your Zotero library is to create a Zotero account (which is also used for the Zotero Forums). The WebDAV server integrates with NI-Auth, the authentication service used by your target to validate login credentials. WebDAV is widely deployed in many enterprise file sharing solutions. We’ll assume that your main web site was built in the default location on your Mac: /Library/WebServer/Documents. It's a set of extensions to the HTTP protocol which allows users to collaboratively edit and manage files on remote web servers. Easily enable WebDAV compatibility on your IDriveSync by following these simple steps: Open Pages, Numbers, or Keynote on your iPhone/ iPad/ iPod touch, select your desired file, then tap on and select Copy to WebDAV. 
Apache Jackrabbit JCR WebDAV Server was basically designed to support remote JCR API calls via underlying WebDAV protocol. 2's URL version matches the working URL, users can hardly find it unless they pause mouse cursor under Location in the screenshot. Create a server in a minute with Gandi Cloud and get root access to your own server. Add the NuGet package of the WebDAV server in your application. One is a very minimal server that's simple to understand and easy to set up in about a minute. There are various ways to set-up your Workamajig site for maximum efficiency. There are lots of solutions - using a local copy, using a combination of HTTP and FTP tools to download the original and upload the changes etc. Based on Samba and SambaDAV. Mapping Drives with WebDav. Here we are using Let’s Encrypt certificate for our website. Setup Server. See CHANGELOG in your installation path and take a look into webdav. 5 ? I searched the redbook but couldn't find IBM mention anything about it. webdav is a simple tool that creates a WebDAV server for you. IT Hit WebDAV Server Engine. WebDAV Server lets you run the HTTP / WebDAV service on your Mac computer and you can access the files from other computers / devices with WebDAV. Screenshot below. Open Internet Information Services (IIS) Manager: If you are using Windows Server 2012 or Windows Server 2012 R2: On the taskbar, click Server Manager, click Tools, and then click Internet Information Services (IIS) Manager. Abyss Web Server in the media Some of the books, reviews, and publications featuring Abyss Web Server. What looks to be the simplest thing in the world can trip you up, kick you in the googlies while you're. The following WebDAV clients make it possible to use a remote FuguHub server as a network drive. The repository is much like an ordinary file server, except that it remembers every change ever made to files and. MyWorkDrive allows companies to convert an ordinary Windows File Server to a WebDAV Server. So it runs under Windows, Linux, etc. With WebDAV File you can access your files on a WebDAV enabled server where ever you are. ownCloud is an open source, simple cloud server, providing WebDAV, CalDAV, CardDAV, etc. fixed in 0. All the basic HTTP requests are handled by the DefaultServlet. On your file share server, share the folder you wish to make accessible using WebDAV. WebDAV and Nginx - Setting Up a Bulletproof WebDAV Server Isn’t Easy. ) to a WebDav/Web Folder on a NAS box. Setting up external access is outside the scope of this write-up, but let me know if you have questions about it. By tricking ya to click on any data type files in the hacker's directory ( it can be USB, WebDAV, remote server or even just a local harddisk directory!!!) , the hacker can make your program load. Installing WebDAV on IIS 8. Note: This page outlines secure methods for transferring files between a Macintosh client and a Unix server. txt, which can be modified at will from any location where the drive is mounted. The WebDav drive can easily be setup from the SME Control Panel or if zero install is required it can be setup at the command line or in a script as below: NET USE * \\webdav. Web Disk (also known as WebDAV) is a drag-and-drop interface in cPanel which allows you to access your website's files as if it were a local drive on your computer. username is the username with which to access the WebDAV server, if required. FTP, WebDAV, CalDAV & CardDAV. 
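As an illustration of how small a basic server can be, here is a minimal WsgiDAV sketch. It assumes WsgiDAV 3.x and cheroot installed via pip; the share path and credentials are placeholders:

    from cheroot import wsgi
    from wsgidav.wsgidav_app import WsgiDAVApp

    config = {
        "host": "0.0.0.0",
        "port": 8080,
        "provider_mapping": {"/": "/srv/dav"},  # directory to expose over WebDAV
        "simple_dc": {"user_mapping": {"*": {"alice": {"password": "secret"}}}},
        "verbose": 1,
    }

    app = WsgiDAVApp(config)
    server = wsgi.Server((config["host"], config["port"]), app)
    try:
        server.start()
    except KeyboardInterrupt:
        server.stop()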
On the client side, support is built into all the major operating systems. Windows 8 and Windows 10 ship with a WebDAV client called the "WebDAV Redirector," so a share can be mapped from Explorer or from a script (for example, NET USE * \\server\share). Mac OS X has excellent support for WebDAV built right into the operating system, and on Linux a share can be mounted as a filesystem with davfs2 (sudo apt-get install davfs2). Dedicated clients are available as well, including cadaver (a simple command-line client that is very handy for testing), CarotDAV (a free WebDAV/FTP/SFTP client for Windows), BitKinex, and Cyberduck, and services such as Box.com and DriveHQ expose accounts over WebDAV so they can be mounted as network drives. Applications build on the protocol too: Zotero can synchronize attachments to any WebDAV server, and BaseX offers a WebDAV service that lets users store, modify, and organize resources with an ordinary WebDAV-enabled file manager.

Many products are built around the protocol. ownCloud is an open-source cloud server providing WebDAV, CalDAV, and CardDAV; OpenKM is an enterprise content management system, often referred to as a document management system; Rumpus makes installing a full-featured file transfer server stunningly simple; MyWorkDrive and CentreStack turn an ordinary Windows file server into a WebDAV server while preserving NTFS permissions, folder structures, and Active Directory identities; Apache Jackrabbit's JCR WebDAV server supports remote JCR API calls over WebDAV; Abyss Web Server is a compact web server for Windows, macOS, and Linux; and MiniWeb is a small-footprint HTTP server implementation in C. Finally, keep security in mind: WebDAV has seen serious vulnerabilities (including a zero-day in WebDAV on Windows 2003), and tools such as Metasploit's webdav_scanner module can scan a server or range of servers to determine whether WebDAV is enabled, so running an unprotected WebDAV server that allows anonymous access is a bad idea.
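Because WebDAV is just HTTP with extra verbs, even a generic HTTP library can talk to a server. A minimal sketch that lists a collection with a PROPFIND request (assuming the requests package; the URL and credentials are placeholders):

    import requests

    resp = requests.request(
        "PROPFIND",
        "https://dav.example.com/files/",
        auth=("alice", "secret"),
        headers={"Depth": "1"},  # immediate children only
    )
    print(resp.status_code)  # a WebDAV server answers 207 Multi-Status
    print(resp.text[:500])   # XML describing the collection's members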
https://www.techtaffy.com/ubuntu-17-10-released/ | code | Canonical announced the release of Ubuntu 17.10 featuring a new GNOME desktop on Wayland, and new versions of KDE, MATE and Budgie. On the cloud, 17.10 brings Kubernetes 1.8 for hyper-elastic container operations, and minimal base images for containers.
This is the 27th release of Ubuntu, and forms the baseline for features in the upcoming Long Term Support enterprise-class release in April 2018.
Let’s take a quick look at what is new with this release (based on a statement from Canonical):
- The Atom editor and Microsoft Visual Studio Code are both available across all supported releases of Ubuntu including 16.04 LTS and 17.10.
- The new default desktop features the latest version of GNOME, with extensions developed in collaboration with the GNOME Shell team for Ubuntu users. 17.10 will run Wayland as the default display server on compatible hardware, with the option of Xorg where required.
- Connecting to WiFi in public areas is simplified with support for captive portals.
- Firefox 56 and Thunderbird 52 both come as standard, together with the latest LibreOffice 5.4.1 suite.
- Ubuntu 17.10 supports driverless printing with IPP Everywhere, Apple AirPrint, Mopria, and WiFi Direct.
- The release enables simple switching between built-in audio devices and Bluetooth.
- Ubuntu 17.10 features platform snaps for GNOME and KDE. Hiri, Wavebox, and the Heroku CLI are notable snaps published during this cycle.
- The catkin Snapcraft plugin enables Robot Operating System (ROS) snaps for secure, easily updated robots and drones.
- There are new mediated secure interfaces available to snap developers, including the ability to use Amazon Greengrass and Password Manager.
- Ubuntu 17.10 ships with the 4.13 based Linux kernel. The 17.10 kernel adds support for OPAL disk drives and numerous improvements to disk I/O. Namespaced file capabilities and Linux Security Module stacking reinforce Ubuntu’s leadership in container capabilities for cloud and bare-metal Kubernetes, Docker and LXD operations.
- Canonical’s Distribution of Kubernetes, CDK, supports the 1.8 series of Kubernetes.
- 17.10 introduces netplan as the standard declarative YAML syntax for configuring interfaces in Ubuntu.
[Image courtesy: Canonical] | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510179.22/warc/CC-MAIN-20230926075508-20230926105508-00290.warc.gz | CC-MAIN-2023-40 | 2,230 | 16 |
https://xcp-ng.org/forum/topic/5200/secureboot-certs-install-fails/2?lang=en-US | code | secureboot-certs install fails
apz last edited by
This is a fresh 8.2 system with the Secure Boot features added per the XCP-ng documentation. The secureboot-certs install, however, fails:
# secureboot-certs install
No arguments provided to command install, default arguments will be used:
- PK: default
- KEK: default
- db: default
- dbx: latest
Downloading https://www.microsoft.com/pkiops/certs/MicCorKEKCA2011_2011-06-24.crt...
Downloading https://www.microsoft.com/pkiops/certs/MicCorUEFCA2011_2011-06-27.crt...
Downloading https://www.microsoft.com/pkiops/certs/MicWinProPCA2011_2011-10-19.crt...
Downloading https://uefi.org/sites/default/files/resources/dbxupdate_x64.bin...
error: unable to retrieve certificate from URL: https://uefi.org/sites/default/files/resources/dbxupdate_x64.bin.
Error message: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)>.
If the failure can't be fixed at the network configuration level, consider downloading the certificates manually and then loading one or more of them with secureboot-certs install <PK-filename>|default <KEK-filename>|default <db-filename>|default <dbx-filename>|latest.
Check secureboot-certs install -h for usage details as well as a list of the download links used by secureboot-certs install.
The system's clock is correct and the uefi.org certificate seems fine to me:
* Server certificate:
*   subject: CN=uefi.org
*   start date: Oct 19 13:50:03 2021 GMT
*   expire date: Jan 17 13:50:02 2022 GMT
*   common name: uefi.org
*   issuer: CN=R3,O=Let's Encrypt,C=US
wget on the same system says the certificate is expired. A desktop browser was fine with it and allowed me to download the file. Is there something I missed here? I did the same kind of installation a couple of months back and had no issues with that.
It's related to the expiry of a root certificate and sub-optimal behaviour of the version of openssl we have in XCP-ng.
You can fix it by updating the ca-certificates RPM (it's been validated internally already and will be released to everyone with our next updates):
yum update ca-certificates --enablerepo=xcp-ng-testing
@stormi I know this is quite an old topic, but this is still happening on XCP-ng 8.2.1. Do we need to update the RPM manually still or should this have been dealt with in the newer versions?
@planedrop Are you sure it's the same error? There's another known issue that is a 403 forbidden error received from microsoft's download site. You will find a workaround for this in another thread that mentions it. We're working on an update but it's not an easy situation: we don't choose which user agents Microsoft will decide to block.
antonseitz last edited by
Hi, I just made a quick and dirty workaround by editing the secureboot-certs script: I replaced the existing User-Agent line with this one:
req.add_header("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.67 Safari/537.36")
Afterwards, the install went through without a 403 error:
[18:10 xs ~]# secureboot-certs install default default default latest
Successfully installed certificates to the XAPI DB for pool.
[18:26 xs ~]#
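For anyone who prefers not to patch the script, the same manual download can be done standalone. A minimal Python 3 sketch (the User-Agent string is just an example; on the dom0 itself you may need the Python 2 urllib2 equivalent):

    import urllib.request

    url = "https://uefi.org/sites/default/files/resources/dbxupdate_x64.bin"
    req = urllib.request.Request(url, headers={
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                      "(KHTML, like Gecko) Chrome/101.0.4951.67 Safari/537.36",
    })
    # Save the dbx update locally, then load it with:
    #   secureboot-certs install default default default dbxupdate_x64.bin
    with urllib.request.urlopen(req) as resp, open("dbxupdate_x64.bin", "wb") as out:
        out.write(resp.read())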
@antonseitz This is indeed the workaround that is mentioned in the other thread.
@stormi Figured I'd ask, is this something that is going to be fixed in a future release? Seems like it should be default in there if a modification is needed.
Makes me a little hesitant to get this going in a production environment, which I was planning on doing after some more lab testing.
@planedrop Yes, as I wrote above, we're working on an update.
@planedrop Why does it make you hesitant? Are you going to install certificates a lot? Usually one only needs to run it once, on the first host of the pool. If you're running Windows VMs, you will get certificate updates through Windows update for each VM anyway. The pool certs are just here to bootstrap the VMs (this holds true for linux VMs too).
By the way, we aren't the ones making Microsoft's policy, which is "we prefer that users get their certificates through their server's firmware, or download them manually from our website". As a convenience, we provided secureboot-certs install to automate things for you (Citrix doesn't in Citrix Hypervisor, for example... they get the certs from the host, and if it's in BIOS mode, too bad for you: no Secure Boot in VMs...), but you're not in a dead end if the automated way doesn't work.
Lastly, even if they were to block downloads again:
- the new secureboot-certs install will both have a different default user agent and an option to let you use the user agent you want
- you can always install certificates after downloading them manually, if the script can't download for some reason.
@stormi OK, this is a good point. My thing is that I generally don't like to modify things away from the defaults in a prod environment, because some update comes along later and breaks it.
To be fair though, I've never had this happen with XCP-ng in my lab and I also doubt something related to certs and secure boot would cause that, so maybe I'm paranoid and incorrect here.
Also, my apologies, I missed the reply about working on a fix.
This all makes sense to me now though, and the fix is pretty easy to get the automated install to work.
Thanks for all the help!
Update candidate available to fix certificate download: https://xcp-ng.org/forum/post/49373 | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572063.65/warc/CC-MAIN-20220814173832-20220814203832-00530.warc.gz | CC-MAIN-2022-33 | 5,351 | 38 |
https://history.louisiana.edu/node/29 | code | History is an excellent minor to add to any major degree. The history minor provides you with a context for your work and study.
For example, engineers would benefit from a larger understanding of the history of engineering and the environmental influence of their profession on the region. A future doctor could benefit from the context of previous medical practices and the history of medical discovery.
Our digital and public history courses can also be used in computer science, communication, journalism, business, geosciences and many other fields. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817206.54/warc/CC-MAIN-20240418124808-20240418154808-00530.warc.gz | CC-MAIN-2024-18 | 554 | 3 |
https://www.caa.org.au/leverage-pepperstone-forex/ | code | Have been using it for couple of years. Leverage Pepperstone Forex is covered in this article …
Client service was excellent.
No issue with deposit.
Withdrawal: no issue.
Pepperstone offers clients the most complete trading experience in the online forex broker community. The broker's lightning-fast execution systems, several account types, competitive rates, and multiple platforms (MT4 and MT5, plus full cTrader functionality) outshine the large majority of forex brokers worldwide.

Being FCA-regulated lends the company credibility, but the inconsistency of offering negative balance protection while lacking guaranteed stop losses is a bit befuddling. Substandard site maintenance speaks to a lack of attention to detail. Customer service is somewhat above average, and the education catalogue is adequate.

Overall, Pepperstone offers an impressive trading experience for all kinds of traders, whether that means low spreads for the cost-conscious trader or advanced interface functionality for the more technologically sophisticated trader.
Konstantinos from support helped me with my application status request efficiently, through email and online chat.

It is a great broker. No issues with withdrawals. They provide a great platform: fast and easy to use (there is room for improvement, and hopefully they will continue the development).

When I came across an obstacle, the response was outstanding. I was new and the team guided me on what I needed to do. Keep up that spirit!

The broker has to pay me swap if my trades are held overnight. After a month, I noticed that my equity was continuously decreasing at Pepperstone while the equity in my other 3 accounts doubled in the same time. Due to their stealing of my money, my positions were stopped out for lack of funds & my account is now almost ZERO, whereas my other accounts at other brokers more than doubled in the same time with the same parameters/settings.

Thanks to TradingView I found the most competitive broker. Their products are incredible, but the service and CRM need improvement. Apart from Becca, the other agents are trained like chatbots. Thanks Becca, you saved my day and the brand's image.

Pepperstone was my very first broker when I started trading years back, and now I have a professional account there. I appreciate many things they offer, including the kind and prompt customer service, the pro leverage (probably the best around, especially for indices), the reasonable spreads, the execution, and the choice of platforms. It truly is a great trading environment.

Alberto is amazing. Very helpful, and he has connected me to lots of useful resources for a new trader. This took away many worries and doubts when it comes to entering positions with confidence. A professional and personable person.
After evaluating each broker based on their number of held licenses, years in business, and a variety of other data-driven variables, we have determined that Interactive Brokers (99) earned a higher Trust Score than Pepperstone.

Pepperstone provides uncomplicated access to the markets, which allows the client to focus on the complex task of trying to trade the markets effectively. Pepperstone is ideally suited to traders who want a manageable range of low-cost offerings, multiple choices of interface and account types, and effective customer support. Investopedia's ranking algorithm factored in these attributes in declaring Pepperstone the Best Forex Broker for Trading Experience in 2020.

Website maintenance leaves a lot to be desired. One of the hallmarks of an efficient organization, especially in 2020, is its web presence. While Pepperstone's site has an intuitive feel, there are a few pages with either incorrect, out-of-date, and/or incomplete information.

Pepperstone does not accept U.S. customers due to regulatory constraints, which prevents it from truly being considered a global broker. This would be a red flag were it not for the fact that the company is regulated by the FCA which, in addition to the U.S. regulatory agencies (NFA, CFTC), is widely considered to be a preeminent regulatory body.

Pepperstone does not offer "negative balance protection" for non-U.K./E.U. clients. This means that a client can lose more than their account balance and end up owing money to the broker.

Pepperstone does not offer guaranteed stop loss orders (GSLO) for anyone. GSLOs protect the trader from market gap risk, and many

Pepperstone offers a broad range of platforms to suit every investment and trading style. The platforms are third-party, white-label offerings, as Pepperstone has avoided building a proprietary interface. Customers can choose between MetaTrader (MT) 4/5 and cTrader, a higher-end system with direct liquidity-provider pricing and advanced technical features that include detachable charts, back-testing, and algorithmic strategy support. Smart Trader Tools for MT4 extend the technical functionality, adding a suite of apps that help with trade execution, market research, and depth-of-market analysis.

Pepperstone's cTrader is a streamlined trading platform that is available as a download or as a web-based interface, which is simple and stable to access from any browser (Chrome, Firefox, Safari, or Internet Explorer). This platform offers an updated look, one-click trading, and full integration across desktop and mobile, which improves the trading experience for all kinds of traders.

Pepperstone's cTrader has a simple and easy-to-use interface where traders can set up watchlists, examine charts, place and monitor trades, access an instrument's "depth of market," and keep up with upcoming events via the market calendar. The technical analysis charts can be expanded to full screen and feature more than 70 technical indicators that you can apply over multiple time frames, from tick charts to monthly charts.

The Autochartist program generates trade ideas based on technical analysis patterns. The platform additionally offers traders the option of "copy" or "social" trading, which can be accessed through the desktop trading platform, as well as the ability to automate their own strategies.
https://physics.stackexchange.com/users/195021/jo%C3%A3o-v%C3%ADtor-g-lima | code | João Vítor G. Lima
I am a 17-year-old student from Brazil (and I speak English fluently, so don't worry), currently studying for the entrance exams of some of the best technology and engineering schools in Brazil. I mostly seek opportunities in the area of Physics.
I was introduced to scientific research by a professor with a PhD in the area of Optics, and have since worked on a few projects/experiments related to optical phenomena.
So far I have received a few prizes in physics and mathematics and hope to achieve more!
Perhaps I want to become a professor; so far I have taught classes on calculus, kinematics, mechanics, thermodynamics and special relativity.
My hope with Physics Stack Exchange is to receive knowledge from those who have a lot to share and to share myself some of what I've learned in the past few years.
Member for 8 months
48 profile views
Last seen Jan 19 at 3:30 | s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584445118.99/warc/CC-MAIN-20190124014810-20190124040810-00211.warc.gz | CC-MAIN-2019-04 | 888 | 9 |
http://robynleatherman.com/index-1092.html | code | I have had a version of the following conversation more than a few times with community members trying to sort out where to run their containerized apps in production:
User: So, where should I run my containers? Bare metal or VMs?
Me: It’s not a question of “either / or” – that’s the beauty of Docker. That choice is based solely on what’s right for your application and business goals – physical or virtual, cloud or on premise. Mix and match as your application and business needs dictate (and change).
User: But, surely you have a recommendation.
Me: I’m going to give you the two word answer that nobody likes: It depends.
User: You’re right, I don’t like that answer.
Me: I kind of figured you wouldn’t, but it really is the right answer.
There are tough questions in the world of tech, and the answer “It depends” can often be a cop out. But in the case of where to run your containerized applications it really is the best answer because no two applications are exactly the same, and no two companies have exactly the same business needs.
Any IT decision is based on a myriad of variables: Performance, scalability, reliability, security, existing systems, current skillsets, and cost (to name just a few). When someone sets out to decide how to deploy a Docker-based application in production all of these things need to be considered.
Docker delivers on the promise of allowing you to deploy your applications seamlessly regardless of the underlying infrastructure. Bare metal or VM. Datacenter or public cloud. Heck, deploy your app on bare metal in your data center and on VMs across multiple cloud providers if that’s what is needed by your application or business.
The key here is that you’re not locked into any one option. You can easily move your app from one infrastructure to another. There is essentially zero friction.
But that freedom also makes the process of deciding where to run those apps seem more difficult than it really is. The answer is going to be influenced by what you're doing today, and what you might need to do in the future.

So while I can't answer "Where should I run my app?" outright, I can provide a list of things to consider when it comes time to make that decision.

I'm sure this list is far from complete, but hopefully it's enough to start a conversation and get the gears turning.
Latency: Applications with a low tolerance for latency are going to do better on physical. This something we see quite a bit in financial services (trading applications are prime example).
Capacity: VMs made their bones by optimizing system load. If your containerized app doesn’t consume all the capacity on a physical box, virtualization still offers a benefit here.
Mixed Workloads: Physical servers will run a single instance of an operating system. So, you if you wish to mix Windows and Linux containers on the same host, you’ll need to use virtualization
Disaster Recovery: Again, like capacity optimizations, one of the great benefits of VMs are advanced capabilities around site recovery and high availability. While these capabilities may exist with physical hosts, the are a wider array of options with virtualization.
Existing Investments and Automation Frameworks: Many organizations have already built a comprehensive set of tools around things like infrastructure provisioning. Leveraging this existing investment and expertise makes a lot of sense when introducing new elements.
Multitenancy: Some customers have workloads that can’t share kernels. In this case VMs provide an extra layer of isolation compared to running containers on bare metal.
Resource Pools / Quotas: Many virtualization solutions have a broad feature set to control how virtual machines use resources. Docker provides the concept of resource constraints (see the sketch after this list), but for bare metal you’re kind of on your own.
Automation/APIs: Very few people in an organization typically have the ability to provision bare metal from an API. If the goal is automation you’ll want an API, and that will likely rule out bare metal.
Licensing Costs: Running directly on bare metal can reduce costs as you won’t need to purchase hypervisor licenses. And, of course, you may not even need to pay anything for the OS that hosts your containers.
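To make the resource constraints point concrete, here is a minimal sketch using standard docker run flags; the image name and the specific limits are illustrative assumptions, not recommendations.

```
# Cap the container at 512 MB of RAM and a reduced share of CPU time
# (myorg/myapp:1.0 is a hypothetical image used for illustration)
docker run -d \
  --memory=512m \
  --cpu-shares=512 \
  myorg/myapp:1.0
```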
There is something really powerful about being able to make a decision on where to run your app solely based on the technical merits of the platform AND being able to easily adjust that decision if new information comes to light.
In the end the question shouldn’t be “bare metal OR virtual” – the question is which infrastructure makes the most sense for my application needs and business goals. So mix and match to create the right answer today, and know that with Docker you can quickly and easily respond to any changes in the future.
Read more by Mike Coleman in this series:
Check out these resources to start learning more about Docker and containers:
- Watch an Intro to Docker webinar
- Sign up for a free 30 day trial
- Read the Containers as a Service white paper
Learn More about Docker
- New to Docker? Try our 10 min online tutorial
- Share images, automate builds, and more with a free Docker Hub account
- Read the Docker 1.11 Release Notes
- Subscribe to Docker Weekly
- Sign up for upcoming Docker Online Meetups
- Attend upcoming Docker Meetups
- Watch DockerCon EU 2015 videos
- Start contributing to Docker | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100525.55/warc/CC-MAIN-20231204052342-20231204082342-00736.warc.gz | CC-MAIN-2023-50 | 5,424 | 39 |
https://www.tenable.com/blog/log-correlation-engine-rules-update-0 | code | Log Correlation Engine Rules Update
Several new PRM libraries and one TASL script have been updated and are available for download and use with the Log Correlation Engine. The list below shows what has changed. Each PRM or TASL entry links to its download URL.
- detect_change.tasl: Support for Windows and FreeBSD system time change events.
- os_freebsd.prm: Support for FreeBSD system time changes, disk errors, and more types of user login events.
- os_linux.prm: Support for Linux named logs.
- os_win2k_app.prm: Uses the Windows server name as the 'sensor' name.
- os_win2k_sec.prm: Uses the Windows server name as the 'sensor' name.
- os_win2k_sys.prm: Uses the Windows server name as the 'sensor' name.
- web_squid.prm: Support for more Squid logging formats.
- virus_symantec.prm: Support for Symantec anti-virus 'virus removed' messages.
To install these files, simply download them and place them in the /usr/thunder/daemons/plugins directory and then restart the thunderd process.
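A sketch of those steps on the LCE host follows; the assumption of a SysV-style init script for restarting thunderd may not match your platform.

```
# Place the downloaded PRM/TASL files into the LCE plugin directory
cp detect_change.tasl *.prm /usr/thunder/daemons/plugins/

# Restart the thunderd process so the new rules are loaded
# (adjust the restart mechanism for your platform)
/etc/init.d/thunderd restart
```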
Customers are encouraged to periodically monitor their notmatched.txt file, which contains a list of all logs that were collected, but did not match a known pattern. Please contact Tenable if one of the supported applications or products is missing logs in your environment.
https://matchmaticians.com/questions/xwa0oh/find-the-number-of-children-and-seperatly-ratio-males-to | code | Find the number of children, and seperatly the ratio of males to females, which ensures existance of genetic line and of one male and one female respectively the following generations
Imagine you can have an unlimited number of children, and can choose the gender of each one. Your genetic line will exist as long as at least one of your descendants is alive. In each generation, your children and descendants will produce offspring according to the following distribution:
| Number of Offspring | Probability |
| --- | --- |
Assume all offspring are born at the same time in each generation, and that after producing offspring the previous generation will die. For each of the following generations, find the minimum number of initial children which ensures the genetic line will exist in that generation to 99%, 99.9%, and 99.99% certainty.
Separately, assume now that the probability a descendant will be male is 51.2%, and that there is an additional 6% chance a male will not produce offspring. For each of the following generations, find the minimum number of initial children and the optimal ratio of males to females which ensures not only that the genetic line will exist, but that there will be at least one male and one female in each generation up to and including that point, to 99%, 99.9%, and 99.99% certainty.
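For context, this is a Galton-Watson branching process question. As a sketch of the standard machinery (not a full solution, since the offspring distribution table above is incomplete), let $p_k$ be the probability of $k$ offspring and define the probability generating function
$$f(s) = \sum_{k \ge 0} p_k s^k.$$
The probability $q_n$ that a single child's line is extinct by generation $n$ satisfies $q_n = f(q_{n-1})$ with $q_0 = 0$. With $c$ independent initial children, the line survives to generation $n$ with probability $1 - q_n^c$, so the minimum $c$ achieving certainty level $\alpha$ is $c = \lceil \ln(1-\alpha) / \ln(q_n) \rceil$.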
https://www.sesame-h2020-5g-ppp.eu/News/tabid/95/mid/421/newsid421/21/Default.aspx | code | Virtual Open Systems was presented at IEEE CSCN Berlin 2016, 2016/10/31 - 2016/11/01, as well as the activity carried out in the FP7 SESAME project and the project itself, by showcasing an ARMv8 microserver based on the NXP’s LS2085A.
The main focus of the conference was the upcoming 5G network standard.
One of the main highlights of the conference, the Network Slicing approach, was directly related to the ideas in SESAME, where multiple operators share the same Small Cell network.
The demo booth organized by VOSyS presented the SESAME project and the VOSyS virtual switch VOSYSwitch with OpenFlow in a high resiliency scenario, with an SDN application monitoring the availability of a web service: once the service goes offline, the application, leveraging the OpenDaylight REST API, switches the traffic to the backup service instance.
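As an illustration of the kind of failover call such an SDN application might make (the controller address, credentials, switch id, and flow payload are hypothetical placeholders; the RESTCONF path follows the OpenDaylight inventory model of that era):

```
# Redirect traffic to the backup instance by rewriting a flow entry
# on switch openflow:1 via the OpenDaylight RESTCONF API
curl -u admin:admin -X PUT \
  -H "Content-Type: application/xml" \
  -d @flow-to-backup.xml \
  http://controller:8181/restconf/config/opendaylight-inventory:nodes/node/openflow:1/flow-node-inventory:table/0/flow/1
```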
The conference was attended by about 200 people, and about 40 presentations were given.
The demo attracted interest in the project from industrial and academic representatives, and received good feedback and interesting input.