code (stringlengths 3-1.05M) | repo_name (stringlengths 4-116) | path (stringlengths 3-942) | language (stringclasses, 30 values) | license (stringclasses, 15 values) | size (int32, 3-1.05M) |
---|---|---|---|---|---|
<!--
~ Copyright (c) 2017. MIT-license for Jari Van Melckebeke
~ Note that there was a lot of educational work in this project,
~ this project was (or is) used for an assignment from Realdolmen in Belgium.
~ Please just don't abuse my work
-->
<html>
<head>
<meta charset="utf-8">
<script src="esl.js"></script>
<script src="config.js"></script>
</head>
<body>
<style>
html, body, #main {
width: 100%;
height: 100%;
}
</style>
<div id="main"></div>
<script>
require([
'echarts',
'echarts/chart/scatter',
'echarts/component/legend',
'echarts/component/polar'
], function (echarts) {
var chart = echarts.init(document.getElementById('main'), null, {
renderer: 'canvas'
});
var data1 = [];
var data2 = [];
var data3 = [];
for (var i = 0; i < 100; i++) {
data1.push([Math.random() * 5, Math.random() * 360]);
data2.push([Math.random() * 5, Math.random() * 360]);
data3.push([Math.random() * 10, Math.random() * 360]);
}
chart.setOption({
legend: {
data: ['scatter', 'scatter2', 'scatter3']
},
polar: {
},
angleAxis: {
type: 'value'
},
radiusAxis: {
axisAngle: 0
},
series: [{
coordinateSystem: 'polar',
name: 'scatter',
type: 'scatter',
symbolSize: 10,
data: data1
}, {
coordinateSystem: 'polar',
name: 'scatter2',
type: 'scatter',
symbolSize: 10,
data: data2
}, {
coordinateSystem: 'polar',
name: 'scatter3',
type: 'scatter',
symbolSize: 10,
data: data3
}]
});
})
</script>
</body>
</html> | N00bface/Real-Dolmen-Stage-Opdrachten | stageopdracht/src/main/resources/static/vendors/gentelella/vendors/echarts/test/polarScatter.html | HTML | mit | 2,536 |
# Contributing
I explicitly welcome contributions from people who have never contributed to open-source before: we were all beginners once!
I can help build on a partially working pull request with the aim of getting it merged.
I am also actively seeking to diversify our contributors, and I especially welcome contributions from women of all backgrounds and people of color. <sup>[1](#attribution)</sup>
If you're interested in contributing, fork this repo and create a pull request.
Please include a short descriptive link to your code in the readme, and order the links alphabetically by file name.
Include a description of each data structure or algorithm at the top of the file, and if you feel that your code needs further explanation,
you can include a more detailed summary in the Data Structures or Algorithms subfolder's readme.
Please follow the [Ruby](https://github.com/bbatsov/ruby-style-guide) and [JavaScript](https://github.com/airbnb/javascript) Style Guides.
Tests are recommended, but optional.
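For example, a JavaScript contribution following these guidelines might be shaped like the sketch below (the file name and implementation are only illustrative, one possible take on the Priority Queue item on the wish list further down):

```javascript
// priority_queue.js
// Priority Queue: pop() always returns the queued item with the smallest
// priority value. Backed by a binary min-heap, so push() and pop() run in
// O(log n) time and peek() in O(1).
class PriorityQueue {
  constructor() {
    this.heap = []; // array-backed complete binary tree
  }

  push(item, priority) {
    this.heap.push({ item, priority });
    let i = this.heap.length - 1;
    while (i > 0) { // sift up while the new entry beats its parent
      const parent = (i - 1) >> 1;
      if (this.heap[parent].priority <= this.heap[i].priority) break;
      [this.heap[parent], this.heap[i]] = [this.heap[i], this.heap[parent]];
      i = parent;
    }
  }

  peek() {
    return this.heap.length ? this.heap[0].item : undefined;
  }

  pop() {
    if (!this.heap.length) return undefined;
    const top = this.heap[0];
    const last = this.heap.pop();
    if (this.heap.length) {
      this.heap[0] = last;
      let i = 0; // sift down to restore the heap property
      for (;;) {
        const l = 2 * i + 1;
        const r = 2 * i + 2;
        let min = i;
        if (l < this.heap.length && this.heap[l].priority < this.heap[min].priority) min = l;
        if (r < this.heap.length && this.heap[r].priority < this.heap[min].priority) min = r;
        if (min === i) break;
        [this.heap[min], this.heap[i]] = [this.heap[i], this.heap[min]];
        i = min;
      }
    }
    return top.item;
  }
}

module.exports = PriorityQueue;
```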
If you're looking for inspiration, I'd love to have a:
+ [Priority Queue](https://en.wikipedia.org/wiki/Priority_queue)
+ [Valid Sudoku Board](https://en.wikipedia.org/wiki/Sudoku_solving_algorithms)
+ [Sorting Algorithms](https://en.wikipedia.org/wiki/Sorting_algorithm#Popular_sorting_algorithms)
+ [A* Search Algorithm](https://en.wikipedia.org/wiki/A*_search_algorithm)
+ [Knuth-Morris-Pratt Algorithm](https://en.wikipedia.org/wiki/Knuth%E2%80%93Morris%E2%80%93Pratt_algorithm)
+ [Heap](https://en.wikipedia.org/wiki/Heap_\(data_structure\))
+ [Bloom Filter](https://en.wikipedia.org/wiki/Bloom_filter)
+ [Or refactor one of these files!](/REFACTOR.md)
## Attribution
1. Adapted from [Homebrew's](https://github.com/Homebrew/brew#contributing) welcoming contributing section.
| Dbz/Algorithms | CONTRIBUTING.md | Markdown | mit | 1,821 |
Alchemy sentiment analysis: fb12d2c55fff36e1e268584e261b6b010b37279f
Africa Is Talking: 676dbd926bbb04fa69ce90ee81d3f5ffee2692aaf80eb5793bd70fe93e77dc2e
| crakama/bc_7_twitment | keys.py | Python | mit | 156 |
module.exports = {
project: {
server: {
basePath: '',
ip: '0.0.0.0',
request: {
sesskey: 'sid',
limit: 5000,
parameters: 60
},
render: 'swig',
path: {
routes: 'app/routes',
views: 'app/views',
public: 'public/',
docs: false
},
views: {
extension: 'swig',
errors: 'errors/'
}
}
},
environment: {
server: {
debug: true,
host: 'localhost',
port: 3000,
request: {
secret: new Date().getTime() + '' + Math.random(),
cors: true,
geolocation: false
},
views: {
cache: false
}
}
}
}; | PearlVentures/Crux | boilerplate/server/config.js | JavaScript | mit | 699 |
<?php
namespace RedMedica\ConsultasBundle\Entity;
use Doctrine\Common\Collections\ArrayCollection;
use Doctrine\ORM\Mapping as ORM;
use RedMedica\ConsultasBundle\Entity\Article;
use FOS\ElasticaBundle\Configuration\Search;
/**
* Category
*
* @ORM\Table(name="category")
* @ORM\Entity()
* @Search(repositoryClass="RedMedica\ConsultasBundle\Entity\SearchRepository\CategoryRepository")
*/
class Category
{
/**
* @var integer
*
* @ORM\Column(name="id", type="integer", nullable=false)
* @ORM\Id
* @ORM\GeneratedValue(strategy="IDENTITY")
*/
protected $id;
/**
* @var string
*
* @ORM\Column(name="label", type="string", length=250, nullable=false)
*/
protected $label;
/**
* @var Doctrine\Common\Collections\ArrayCollection
*
* @ORM\OneToMany(targetEntity="RedMedica\ConsultasBundle\Entity\Article", mappedBy="category")
*/
protected $articles;
public function __construct()
{
$this->articles = new ArrayCollection();
}
public function __toString()
{
return $this->label;
}
public function getId()
{
return $this->id;
}
public function setLabel($label)
{
$this->label = $label;
return $this;
}
public function getLabel()
{
return $this->label;
}
public function addArticle(Article $article)
{
$this->articles->add($article);
return $this;
}
public function setArticles($articles)
{
$this->articles = $articles;
return $this;
}
public function getArticles()
{
return $this->articles;
}
} | dysan1376/hospi | src/RedMedica/ConsultasBundle/Entity/Category.php | PHP | mit | 1,682 |
import React from "react";
import styled from 'styled-components'
import Link from './link';
const nextArrow = "/icons/next-arrow.png";
const prevArrow = "/icons/prev-arrow.png";
const PatternLink = styled.span`
width: 100%;
display: flex;
flex-direction: column;
padding: 1em;
float: ${props => props.previous ? 'left' : 'right'};
@media(min-width: $width-tablet) {
width: auto;
}
`;
const ImageContainer = styled.span`
height: 50px;
`;
const Image = styled.img`
height: 100%;
background-color: white;
float: ${props => props.previous ? 'right' : 'left'}
`;
const ArrowContainer = styled.div`
display: flex;
flex-direction: ${props => props.previous ? 'row-reverse' : 'row'};
align-items: center;
`;
const Name = styled.p`
padding: 10px 0;
`;
const Arrow = styled.img`
height: 10px;
flex-direction: row-reverse;
padding: ${props => props.previous ? '0 10px 0 0' : '0 0 0 10px'};
`;
const NextPrevPattern = ({pattern, direction}) => {
const previous = direction === "previous"
return (
<Link href={pattern.url}>
<PatternLink previous={previous}>
<ImageContainer>
<Image previous={previous} src={pattern.painted || pattern.lineDrawing} />
</ImageContainer>
<ArrowContainer previous={previous}>
<Name>{pattern.name}</Name>
{
(direction === "next") &&
<Arrow src={nextArrow}/>
}
{
(direction === "previous") &&
<Arrow previous src={prevArrow} />
}
</ArrowContainer>
</PatternLink>
</Link>
)
};
export default NextPrevPattern;
| redfieldstefan/kibaktile.com | src/components/next-prev-pattern.js | JavaScript | mit | 1,640 |
// Copyright (c) 2009-2010 Satoshi Nakamoto
// Copyright (c) 2009-2016 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#ifndef DIGIBYTE_NET_PROCESSING_H
#define DIGIBYTE_NET_PROCESSING_H
#include "net.h"
#include "validationinterface.h"
/** Default for -maxorphantx, maximum number of orphan transactions kept in memory */
static const unsigned int DEFAULT_MAX_ORPHAN_TRANSACTIONS = 100;
/** Expiration time for orphan transactions in seconds */
static const int64_t ORPHAN_TX_EXPIRE_TIME = 20 * 60;
/** Minimum time between orphan transactions expire time checks in seconds */
static const int64_t ORPHAN_TX_EXPIRE_INTERVAL = 5 * 60;
/** Default number of orphan+recently-replaced txn to keep around for block reconstruction */
static const unsigned int DEFAULT_BLOCK_RECONSTRUCTION_EXTRA_TXN = 100;
/** Register with a network node to receive its signals */
void RegisterNodeSignals(CNodeSignals& nodeSignals);
/** Unregister a network node */
void UnregisterNodeSignals(CNodeSignals& nodeSignals);
class PeerLogicValidation : public CValidationInterface {
private:
CConnman* connman;
public:
PeerLogicValidation(CConnman* connmanIn);
virtual void SyncTransaction(const CTransaction& tx, const CBlockIndex* pindex, int nPosInBlock);
virtual void UpdatedBlockTip(const CBlockIndex *pindexNew, const CBlockIndex *pindexFork, bool fInitialDownload);
virtual void BlockChecked(const CBlock& block, const CValidationState& state);
virtual void NewPoWValidBlock(const CBlockIndex *pindex, const std::shared_ptr<const CBlock>& pblock);
};
struct CNodeStateStats {
int nMisbehavior;
int nSyncHeight;
int nCommonHeight;
std::vector<int> vHeightInFlight;
};
/** Get statistics from node state */
bool GetNodeStateStats(NodeId nodeid, CNodeStateStats &stats);
/** Increase a node's misbehavior score. */
void Misbehaving(NodeId nodeid, int howmuch);
/** Process protocol messages received from a given node */
bool ProcessMessages(CNode* pfrom, CConnman& connman, std::atomic<bool>& interrupt);
/**
* Send queued protocol messages to be sent to a given node.
*
* @param[in] pto The node which we are sending messages to.
* @param[in] connman The connection manager for that node.
* @param[in] interrupt Interrupt condition for processing threads
* @return True if there is more work to be done
*/
bool SendMessages(CNode* pto, CConnman& connman, std::atomic<bool>& interrupt);
#endif // DIGIBYTE_NET_PROCESSING_H
| DigiByte-Team/digibyte | src/net_processing.h | C | mit | 2,633 |
#
# $Header: svn://svn/SWM/trunk/web/Reports/ReportAdvanced_TXSold.pm 8251 2013-04-08 09:00:53Z rlee $
#
package Reports::ReportAdvanced_TXSold;
use strict;
use lib ".";
use ReportAdvanced_Common;
use Reports::ReportAdvanced;
our @ISA =qw(Reports::ReportAdvanced);
use strict;
sub _getConfiguration {
my $self = shift;
my $currentLevel = $self->{'EntityTypeID'} || 0;
my $Data = $self->{'Data'};
my $SystemConfig = $self->{'SystemConfig'};
my $clientValues = $Data->{'clientValues'};
my $CommonVals = getCommonValues(
$Data,
{
MYOB => 1,
},
);
my $txt_Clr = $Data->{'SystemConfig'}{'txtCLR'} || 'Clearance';
my %config = (
Name => 'Transactions Sold Report',
StatsReport => 0,
MemberTeam => 0,
ReportEntity => 3,
ReportLevel => 0,
Template => 'default_adv',
TemplateEmail => 'default_adv_CSV',
DistinctValues => 1,
SQLBuilder => \&SQLBuilder,
DefaultPermType => 'NONE',
Fields => {
intPaymentType=> [
'Payment Type',
{
active=>1,
displaytype=>'lookup',
fieldtype=>'dropdown',
dropdownoptions => \%Defs::paymentTypes,
allowsort=>1,
dbfield=>'TL.intPaymentType'
}
],
strTXN=> [
'PayPal Reference Number',
{
displaytype=>'text',
fieldtype=>'text',
dbfield=>'TL.strTXN',
active=>1
}
],
intLogID=> [
'Payment Log ID',
{
displaytype=>'text',
fieldtype=>'text',
dbfield=>'TL.intLogID',
allowgrouping=>1,
active=>1
}
],
dtSettlement=> [
'Settlement Date',
{
active=>1,
displaytype=>'date',
fieldtype=>'datetime',
allowsort=>1,
dbformat=>' DATE_FORMAT(dtSettlement,"%d/%m/%Y %H:%i")'
}
],
intAmount => [
'Total Amount Paid',
{
displaytype=>'currency',
fieldtype=>'text',
allowsort=>1,
dbfield=>'TL.intAmount',
active=>1
}
],
SplitAmount=> [
'Split Amount',
{
displaytype=>'currency',
fieldtype=>'text',
allowsort=>1,
total=>1,
active=>1
}
],
SplitLevel=> [
'Split Level',
{
displaytype=>'text',
fieldtype=>'text',
allowsort=>1,
active=>1
}
],
PaymentFor=> [
'Payment For',
{
active=>1,
displaytype=>'text',
fieldtype=>'text',
allowsort => 1
}
],
intExportBankFileID=> [
'PayPal Distribution ID',
{
displaytype=>'text',
fieldtype=>'text',
dbfield=>'intExportAssocBankFileID'
}
],
intMyobExportID=> [
'SP Invoice Run',
{
displaytype=>'lookup',
fieldtype=>'dropdown',
dropdownoptions => $CommonVals->{'MYOB'}{'Values'},
active=>1,
dbfield=>'intMyobExportID'
}
],
dtRun=> [
'Date Funds Received',
{
displaytype=>'date',
fieldtype=>'date',
allowsort=>1,
dbformat=>' DATE_FORMAT(dtRun,"%d/%m/%Y")',
allowgrouping=>1,
sortfield=>'TL.dtSettlement'
}
],
},
Order => [qw(
intLogID
intPaymentType
strTXN
intAmount
dtSettlement
PaymentFor
SplitLevel
SplitAmount
intMyobExportID
)],
OptionGroups => {
default => ['Details',{}],
},
Config => {
FormFieldPrefix => 'c',
FormName => 'txnform_',
EmailExport => 1,
limitView => 5000,
EmailSenderAddress => $Defs::admin_email,
SecondarySort => 1,
RunButtonLabel => 'Run Report',
},
);
$self->{'Config'} = \%config;
}
sub SQLBuilder {
my($self, $OptVals, $ActiveFields) =@_ ;
my $currentLevel = $self->{'EntityTypeID'} || 0;
my $intID = $self->{'EntityID'} || 0;
my $Data = $self->{'Data'};
my $clientValues = $Data->{'clientValues'};
my $SystemConfig = $Data->{'SystemConfig'};
my $from_levels = $OptVals->{'FROM_LEVELS'};
my $from_list = $OptVals->{'FROM_LIST'};
my $where_levels = $OptVals->{'WHERE_LEVELS'};
my $where_list = $OptVals->{'WHERE_LIST'};
my $current_from = $OptVals->{'CURRENT_FROM'};
my $current_where = $OptVals->{'CURRENT_WHERE'};
my $select_levels = $OptVals->{'SELECT_LEVELS'};
my $sql = '';
{ #Work out SQL
my $clubWHERE = $currentLevel == $Defs::LEVEL_CLUB
? qq[ AND ML.intClubID = $intID ]
: '';
$sql = qq[
SELECT DISTINCT
TL.intLogID,
TL.intAmount,
TL.strTXN,
TL.intPaymentType,
ML.intLogType,
ML.intEntityType,
ML.intMyobExportID,
dtSettlement,
IF(T.intTableType=$Defs::LEVEL_PERSON, CONCAT(M.strLocalSurname, ", ", M.strLocalFirstname), Entity.strLocalName) as PaymentFor,
SUM(ML.curMoney) as SplitAmount,
IF(ML.intEntityType = $Defs::LEVEL_NATIONAL, 'National Split',
IF(ML.intEntityType = $Defs::LEVEL_STATE, 'State Split',
IF(ML.intEntityType = $Defs::LEVEL_REGION, 'Region Split',
IF(ML.intEntityType = $Defs::LEVEL_ZONE, 'Zone Split',
IF(ML.intEntityType = $Defs::LEVEL_CLUB, 'Club Split',
IF((ML.intEntityType = 0 AND intLogType IN (2,3)), 'Fees', '')
)
)
)
)
) as SplitLevel
FROM
tblTransLog as TL
INNER JOIN tblMoneyLog as ML ON (
ML.intTransLogID = TL.intLogID
AND ML.intLogType IN ($Defs::ML_TYPE_SPMAX, $Defs::ML_TYPE_LPF, $Defs::ML_TYPE_SPLIT)
)
LEFT JOIN tblTransactions as T ON (
T.intTransactionID = ML.intTransactionID
)
LEFT JOIN tblPerson as M ON (
M.intPersonID = T.intID
AND T.intTableType = $Defs::LEVEL_PERSON
)
LEFT JOIN tblEntity as Entity ON (
Entity.intEntityID = T.intID
AND T.intTableType = $Defs::LEVEL_PERSON
)
LEFT JOIN tblRegoForm as RF ON (
RF.intRegoFormID= TL.intRegoFormID
)
WHERE TL.intRealmID = $Data->{'Realm'}
$clubWHERE
$where_list
GROUP BY TL.intLogID
];
return ($sql,'');
}
}
1;
| facascante/slimerp | fifs/web/Reports/ReportAdvanced_TXSold.pm | Perl | mit | 5,822 |
#!/usr/bin/node --harmony
'use strict'
const noble = require('noble'),
program = require('commander')
program
.version('0.0.1')
.option('-p, --prefix <integer>', 'Manufacturer identifier prefixed to all fan commands', parseInt)
.option('-t, --target [mac]', 'MAC address of devices to target', function(val){ return val.toLowerCase() })
.option('-s, --service <uuid>', 'UUID of fan controller BLE service')
.option('-w, --write <uuid>', 'UUID of fan controller BLE write characteristic')
.option('-n, --notify <uuid>', 'UUID of fan controller BLE notify characteristic')
class FanRequest {
writeInto(buffer) {
throw new TypeError('Must override method')
}
toBuffer() {
var buffer
if (program.prefix > 0) {
buffer = new Buffer(13)
buffer.writeUInt8(program.prefix)
this.writeInto(buffer.slice(1))
} else {
buffer = new Buffer(12)
this.writeInto(buffer)
}
const checksum = buffer.slice(0, buffer.length - 1).reduce(function(a, b){
return a + b
}, 0) & 255
buffer.writeUInt8(checksum, buffer.length - 1)
return buffer
}
}
class FanGetStateRequest extends FanRequest {
writeInto(buffer) {
buffer.fill(0)
buffer.writeUInt8(160)
}
}
Math.clamp = function(number, min, max) {
return Math.max(min, Math.min(number, max))
}
class FanUpdateLightRequest extends FanRequest {
constructor(isOn, level) {
super()
this.on = isOn ? 1 : 0
this.level = Math.clamp(level, 0, 100)
}
writeInto(buffer) {
buffer.fill(0)
buffer.writeUInt8(161)
buffer.writeUInt8(255, 4)
buffer.writeUInt8(100, 5)
buffer.writeUInt8((this.on << 7) | this.level, 6)
buffer.fill(255, 7, 10)
}
}
class FanUpdateLevelRequest extends FanRequest {
constructor(level) {
super()
this.level = Math.clamp(level, 0, 3)
}
writeInto(buffer) {
buffer.fill(0)
buffer.writeUInt8(161)
buffer.writeUInt8(this.level, 4)
buffer.fill(255, 5, 10)
}
}
class FanResponse {
static fromBuffer(buffer) {
if (program.prefix > 0) {
buffer = buffer.slice(1)
}
if (buffer.readUInt8(0) != 176) { return null }
const response = new FanResponse()
const windVelocity = buffer.readUInt8(2)
response.supportsFanReversal = (windVelocity & 0b00100000) != 0
response.maximumFanLevel = windVelocity & 0b00011111
const currentWindVelocity = buffer.readUInt8(4)
response.isFanReversed = (currentWindVelocity & 0b10000000) != 0
response.fanLevel = currentWindVelocity & 0b00011111
const currentBrightness = buffer.readUInt8(6)
response.lightIsOn = (currentBrightness & 0b10000000) != 0
response.lightBrightness = (currentBrightness & 0b01111111)
return response
}
}
// MARK: -
var command
program
.command('current')
.description('print current state')
.action(function(env, options) {
command = new FanGetStateRequest()
})
program
.command('fan')
.description('adjusts the fan')
.option('-l --level <size>', 'Fan speed', /^(off|low|medium|high)$/i, 'high')
.action(function(env, options) {
var level
switch (env.level) {
case 'low':
level = 1
break
case 'medium':
level = 2
break
case 'high':
level = 3
break
default:
level = 0
break
}
command = new FanUpdateLevelRequest(level)
})
program
.command('light <on|off>')
.description('adjusts the light')
.option('-l, --level <percent>', 'Light brightness', parseInt, 100)
.action(function(env, options) {
command = new FanUpdateLightRequest(env !== 'off', options.level)
})
program.parse(process.argv);
if (!command) {
program.help();
}
if (!program.target) {
throw new Error('MAC address required')
}
const serviceUUID = program.service || '539c681361a021374f79bf1a11984790'
const writeUUID = program.write || '539c681361a121374f79bf1a11984790'
const notifyUUID = program.notify || '539c681361a221374f79bf1a11984790'
noble.on('stateChange', function(state) {
if (state === 'poweredOn') {
console.log('scanning.')
noble.startScanning([ serviceUUID ], false)
} else {
noble.stopScanning()
}
})
noble.on('discover', function(peripheral) {
console.log('found ' + peripheral.address)
if (peripheral.address !== program.target) { return }
noble.stopScanning()
explore(peripheral)
});
function bail(error) {
console.log('failed: ' + error);
process.exit(1)
}
function explore(peripheral) {
console.log('connecting.')
peripheral.once('disconnect', function() {
peripheral.removeAllListeners()
explore(peripheral)
})
peripheral.connect(function(error) {
if (error) { bail(error); }
peripheral.discoverSomeServicesAndCharacteristics([ serviceUUID ], [ writeUUID, notifyUUID ], function(error, services, characteristics) {
if (error) { bail(error); }
var service = services[0]
var write = characteristics[0], notify = characteristics[1]
notify.on('data', function(data, isNotification) {
const response = FanResponse.fromBuffer(data)
if (response) {
console.log(response)
} else {
console.log('sent')
}
process.exit()
})
notify.subscribe(function(error) {
if (error) { bail(error); }
console.log('sending')
const buffer = command.toBuffer()
write.write(buffer, false, function(error){
if (error) { bail(error); }
})
})
})
})
}
| zwaldowski/homebridge-satellite-fan | test/poc.js | JavaScript | mit | 5,557 |
---
layout: post
category: "Tools"
tags: [zsh,fish,linux]
---
[TOC]
### A zsh bug on Yosemite? ###
After updating my Mac to Yosemite I ran into all sorts of problems, starting with php, MacPorts and the like.
Then it was zsh's turn: for some reason `zsh` kept eating close to 100% CPU, sending my MacBook's battery
plummeting, which was very frustrating. I first suspected a plugin, but even with every plugin disabled the problem remained.
I had used fish before and found it a rather good shell: very convenient by design, and quite capable too.
So I decided to just switch to fish.
![zsh-cpu.png](http://algking.qiniudn.com/zsh-cpu.png.png)
> After I sent the zsh developers an email,
> reply: Any chance that it's this issue with zsh-autosuggestions?
[Problem solved](https://github.com/tarruda/zsh-autosuggestions/issues/24)
It turned out the cause was zsh-autosuggestions.
### What fish does well ###
- Autosuggestions: automatic hints from history, command completion and so on are very convenient
- Command completions even include hints drawn from man pages
- Some of the zsh plugins I like also exist for fish, e.g. `autojump`, which is easy to install via [oh-my-fish](https://github.com/bpinto/oh-my-fish)
- The `fish-config` command lets you edit the fish configuration in a browser
Admittedly zsh can do most of the above as well, but personally I find its completion not as good as fish's; zsh has simply always had the stronger community,
while fish has fewer plugins. For everyday use you rarely need many plugins anyway, and zsh becomes very slow once a few plugins are loaded.
I had always used [oh-my-zsh](https://github.com/robbyrussell/oh-my-zsh), but after discovering [oh-my-fish](https://github.com/bpinto/oh-my-fish)
I wanted to switch, because I had previously assumed fish had no plugin support.
### Installing [oh-my-fish](https://github.com/bpinto/oh-my-fish) ###
brew install fish
sudo vi /etc/shells # add /usr/local/bin/fish here, otherwise the next command fails
chsh -s /usr/local/bin/fish
git clone git://github.com/bpinto/oh-my-fish.git ~/.oh-my-fish
Copy the configuration file:
cp ~/.oh-my-fish/templates/config.fish ~/.config/fish/config.fish
### Configuring fish ###
- Edit ~/.config/fish/config.fish
```
set fish_plugins autojump bundler brew
set -xu PATH /usr/local/bin:$PATH
The one annoyance is that `export` cannot be used here; use `set -x` instead:
`set -x PATH /usr/local/bin $PATH`
-x : equivalent to export
-u : makes the variable universal, i.e. shared by all fish sessions
```
### Writing fish plugins ###
fish plugins are quite easy to read: they are based on plain functions.
```fish
function rg
rails generate $argv
end
```
| algking/algking.github.com | _posts/2014-10-19-从oh-my-zsh到oh-my-fish.md | Markdown | mit | 2,439 |
const HEX_SHORT = /^#([a-fA-F0-9]{3})$/;
const HEX = /^#([a-fA-F0-9]{6})$/;
function roundColors(obj, round) {
if (!round) return obj;
const o = {};
for (let k in obj) {
o[k] = Math.round(obj[k]);
}
return o;
}
function hasProp(obj, key) {
return obj.hasOwnProperty(key);
}
function isRgb(obj) {
return hasProp(obj, "r") && hasProp(obj, "g") && hasProp(obj, "b");
}
export default class Color {
static normalizeHex(hex) {
if (HEX.test(hex)) {
return hex;
} else if (HEX_SHORT.test(hex)) {
const r = hex.slice(1, 2);
const g = hex.slice(2, 3);
const b = hex.slice(3, 4);
return `#${r + r}${g + g}${b + b}`;
}
return null;
}
static hexToRgb(hex) {
const normalizedHex = this.normalizeHex(hex);
if (normalizedHex == null) {
return null;
}
const m = normalizedHex.match(HEX);
const i = parseInt(m[1], 16);
const r = (i >> 16) & 0xFF;
const g = (i >> 8) & 0xFF;
const b = i & 0xFF;
return { r, g, b };
}
static rgbToHex(rgb) {
const { r, g, b} = rgb;
const i = ((Math.round(r) & 0xFF) << 16) + ((Math.round(g) & 0xFF) << 8) + (Math.round(b) & 0xFF);
const s = i.toString(16).toLowerCase();
return `#${"000000".substring(s.length) + s}`;
}
static rgbToHsv(rgb, round = true) {
const { r, g, b } = rgb;
const min = Math.min(r, g, b);
const max = Math.max(r, g, b);
const delta = max - min;
const hsv = {};
if (max === 0) {
hsv.s = 0;
} else {
hsv.s = (delta / max * 1000) / 10;
}
if (max === min) {
hsv.h = 0;
} else if (r === max) {
hsv.h = (g - b) / delta;
} else if (g === max) {
hsv.h = 2 + (b - r) / delta;
} else {
hsv.h = 4 + (r - g) / delta;
}
hsv.h = Math.min(hsv.h * 60, 360);
hsv.h = hsv.h < 0 ? hsv.h + 360 : hsv.h;
hsv.v = ((max / 255) * 1000) / 10;
return roundColors(hsv, round);
}
static rgbToXyz(rgb, round = true) {
const r = rgb.r / 255;
const g = rgb.g / 255;
const b = rgb.b / 255;
const rr = r > 0.04045 ? Math.pow(((r + 0.055) / 1.055), 2.4) : r / 12.92;
const gg = g > 0.04045 ? Math.pow(((g + 0.055) / 1.055), 2.4) : g / 12.92;
const bb = b > 0.04045 ? Math.pow(((b + 0.055) / 1.055), 2.4) : b / 12.92;
const x = (rr * 0.4124 + gg * 0.3576 + bb * 0.1805) * 100;
const y = (rr * 0.2126 + gg * 0.7152 + bb * 0.0722) * 100;
const z = (rr * 0.0193 + gg * 0.1192 + bb * 0.9505) * 100;
return roundColors({ x, y, z }, round);
}
static rgbToLab(rgb, round = true) {
const xyz = Color.rgbToXyz(rgb, false);
let { x, y, z } = xyz;
x /= 95.047;
y /= 100;
z /= 108.883;
x = x > 0.008856 ? Math.pow(x, 1 / 3) : 7.787 * x + 16 / 116;
y = y > 0.008856 ? Math.pow(y, 1 / 3) : 7.787 * y + 16 / 116;
z = z > 0.008856 ? Math.pow(z, 1 / 3) : 7.787 * z + 16 / 116;
const l = (116 * y) - 16;
const a = 500 * (x - y);
const b = 200 * (y - z);
return roundColors({ l, a, b }, round);
}
constructor(value) {
this.original = value;
if (isRgb(value)) {
this.rgb = value;
this.hex = Color.rgbToHex(value);
} else {
this.hex = Color.normalizeHex(value);
this.rgb = Color.hexToRgb(this.hex);
}
this.hsv = Color.rgbToHsv(this.rgb);
}
}
| tsuyoshiwada/color-classifier | src/utils/color.js | JavaScript | mit | 3,342 |
export { default } from 'ember-validation/components/ember-validation-error-list';
| ajile/ember-validation | app/components/ember-validation-error-list.js | JavaScript | mit | 83 |
.WeatherStations {
margin: 30px 30px 30px 30px;
}
.clear{
clear: both;
} | atSistemas/react-base | src/app/containers/WeatherStations/styles.css | CSS | mit | 77 |
/*global window */
/**
* @license countdown.js v2.5.2 http://countdownjs.org
* Copyright (c)2006-2014 Stephen M. McKamey.
* Licensed under The MIT License.
*/
/*jshint bitwise:false */
/**
* @public
* @type {Object|null}
*/
var module;
/**
* API entry
* @public
* @param {function(Object)|Date|number} start the starting date
* @param {function(Object)|Date|number} end the ending date
* @param {number} units the units to populate
* @return {Object|number}
*/
var countdown = (
/**
* @param {Object} module CommonJS Module
*/
function(module) {
/*jshint smarttabs:true */
'use strict';
/**
* @private
* @const
* @type {number}
*/
var MILLISECONDS = 0x001;
/**
* @private
* @const
* @type {number}
*/
var SECONDS = 0x002;
/**
* @private
* @const
* @type {number}
*/
var MINUTES = 0x004;
/**
* @private
* @const
* @type {number}
*/
var HOURS = 0x008;
/**
* @private
* @const
* @type {number}
*/
var DAYS = 0x010;
/**
* @private
* @const
* @type {number}
*/
var WEEKS = 0x020;
/**
* @private
* @const
* @type {number}
*/
var MONTHS = 0x040;
/**
* @private
* @const
* @type {number}
*/
var YEARS = 0x080;
/**
* @private
* @const
* @type {number}
*/
var DECADES = 0x100;
/**
* @private
* @const
* @type {number}
*/
var CENTURIES = 0x200;
/**
* @private
* @const
* @type {number}
*/
var MILLENNIA = 0x400;
/**
* @private
* @const
* @type {number}
*/
var DEFAULTS = YEARS|MONTHS|DAYS|HOURS|MINUTES|SECONDS;
/**
* @private
* @const
* @type {number}
*/
var MILLISECONDS_PER_SECOND = 1000;
/**
* @private
* @const
* @type {number}
*/
var SECONDS_PER_MINUTE = 60;
/**
* @private
* @const
* @type {number}
*/
var MINUTES_PER_HOUR = 60;
/**
* @private
* @const
* @type {number}
*/
var HOURS_PER_DAY = 24;
/**
* @private
* @const
* @type {number}
*/
var MILLISECONDS_PER_DAY = HOURS_PER_DAY * MINUTES_PER_HOUR * SECONDS_PER_MINUTE * MILLISECONDS_PER_SECOND;
/**
* @private
* @const
* @type {number}
*/
var DAYS_PER_WEEK = 7;
/**
* @private
* @const
* @type {number}
*/
var MONTHS_PER_YEAR = 12;
/**
* @private
* @const
* @type {number}
*/
var YEARS_PER_DECADE = 10;
/**
* @private
* @const
* @type {number}
*/
var DECADES_PER_CENTURY = 10;
/**
* @private
* @const
* @type {number}
*/
var CENTURIES_PER_MILLENNIUM = 10;
/**
* @private
* @param {number} x number
* @return {number}
*/
var ceil = Math.ceil;
/**
* @private
* @param {number} x number
* @return {number}
*/
var floor = Math.floor;
/**
* @private
* @param {Date} ref reference date
* @param {number} shift number of months to shift
* @return {number} number of days shifted
*/
function borrowMonths(ref, shift) {
var prevTime = ref.getTime();
// increment month by shift
ref.setMonth( ref.getMonth() + shift );
// this is the trickiest since months vary in length
return Math.round( (ref.getTime() - prevTime) / MILLISECONDS_PER_DAY );
}
/**
* @private
* @param {Date} ref reference date
* @return {number} number of days
*/
function daysPerMonth(ref) {
var a = ref.getTime();
// increment month by 1
var b = new Date(a);
b.setMonth( ref.getMonth() + 1 );
// this is the trickiest since months vary in length
return Math.round( (b.getTime() - a) / MILLISECONDS_PER_DAY );
}
/**
* @private
* @param {Date} ref reference date
* @return {number} number of days
*/
function daysPerYear(ref) {
var a = ref.getTime();
// increment year by 1
var b = new Date(a);
b.setFullYear( ref.getFullYear() + 1 );
// this is the trickiest since years (periodically) vary in length
return Math.round( (b.getTime() - a) / MILLISECONDS_PER_DAY );
}
/**
* Applies the Timespan to the given date.
*
* @private
* @param {Timespan} ts
* @param {Date=} date
* @return {Date}
*/
function addToDate(ts, date) {
date = (date instanceof Date) || ((date !== null) && isFinite(date)) ? new Date(+date) : new Date();
if (!ts) {
return date;
}
// if there is a value field, use it directly
var value = +ts.value || 0;
if (value) {
date.setTime(date.getTime() + value);
return date;
}
value = +ts.milliseconds || 0;
if (value) {
date.setMilliseconds(date.getMilliseconds() + value);
}
value = +ts.seconds || 0;
// if (value) {
date.setSeconds(date.getSeconds() + value);
// }
value = +ts.minutes || 0;
if (value) {
date.setMinutes(date.getMinutes() + value);
}
value = +ts.hours || 0;
if (value) {
date.setHours(date.getHours() + value);
}
value = +ts.weeks || 0;
if (value) {
value *= DAYS_PER_WEEK;
}
value += +ts.days || 0;
if (value) {
date.setDate(date.getDate() + value);
}
value = +ts.months || 0;
if (value) {
date.setMonth(date.getMonth() + value);
}
value = +ts.millennia || 0;
if (value) {
value *= CENTURIES_PER_MILLENNIUM;
}
value += +ts.centuries || 0;
if (value) {
value *= DECADES_PER_CENTURY;
}
value += +ts.decades || 0;
if (value) {
value *= YEARS_PER_DECADE;
}
value += +ts.years || 0;
if (value) {
date.setFullYear(date.getFullYear() + value);
}
return date;
}
/**
* @private
* @const
* @type {number}
*/
var LABEL_MILLISECONDS = 0;
/**
* @private
* @const
* @type {number}
*/
var LABEL_SECONDS = 1;
/**
* @private
* @const
* @type {number}
*/
var LABEL_MINUTES = 2;
/**
* @private
* @const
* @type {number}
*/
var LABEL_HOURS = 3;
/**
* @private
* @const
* @type {number}
*/
var LABEL_DAYS = 4;
/**
* @private
* @const
* @type {number}
*/
var LABEL_WEEKS = 5;
/**
* @private
* @const
* @type {number}
*/
var LABEL_MONTHS = 6;
/**
* @private
* @const
* @type {number}
*/
var LABEL_YEARS = 7;
/**
* @private
* @const
* @type {number}
*/
var LABEL_DECADES = 8;
/**
* @private
* @const
* @type {number}
*/
var LABEL_CENTURIES = 9;
/**
* @private
* @const
* @type {number}
*/
var LABEL_MILLENNIA = 10;
/**
* @private
* @type {Array}
*/
var LABELS_SINGLUAR;
/**
* @private
* @type {Array}
*/
var LABELS_PLURAL;
/**
* @private
* @type {string}
*/
var LABEL_LAST;
/**
* @private
* @type {string}
*/
var LABEL_DELIM;
/**
* @private
* @type {string}
*/
var LABEL_NOW;
/**
* Formats a number as a string
*
* @private
* @param {number} value
* @return {string}
*/
var formatNumber;
/**
* @private
* @param {number} value
* @param {number} unit unit index into label list
* @return {string}
*/
function plurality(value, unit) {
return formatNumber(value)+((value === 1) ? LABELS_SINGLUAR[unit] : LABELS_PLURAL[unit]);
}
/**
* Formats the entries with singular or plural labels
*
* @private
* @param {Timespan} ts
* @return {Array}
*/
var formatList;
/**
* Timespan representation of a duration of time
*
* @private
* @this {Timespan}
* @constructor
*/
function Timespan() {}
/**
* Formats the Timespan as a sentence
*
* @param {string=} emptyLabel the string to use when no values returned
* @return {string}
*/
Timespan.prototype.toString = function(emptyLabel) {
var label = formatList(this);
var count = label.length;
if (!count) {
return emptyLabel ? ''+emptyLabel : LABEL_NOW;
}
if (count === 1) {
return label[0];
}
var last = LABEL_LAST+label.pop();
return label.join(LABEL_DELIM)+last;
};
/**
* Formats the Timespan as a sentence in HTML
*
* @param {string=} tag HTML tag name to wrap each value
* @param {string=} emptyLabel the string to use when no values returned
* @return {string}
*/
Timespan.prototype.toHTML = function(tag, emptyLabel) {
tag = tag || 'span';
var label = formatList(this);
var count = label.length;
if (!count) {
emptyLabel = emptyLabel || LABEL_NOW;
return emptyLabel ? '<'+tag+'>'+emptyLabel+'</'+tag+'>' : emptyLabel;
}
for (var i=0; i<count; i++) {
// wrap each unit in tag
label[i] = '<'+tag+'>'+label[i]+'</'+tag+'>';
}
if (count === 1) {
return label[0];
}
var last = LABEL_LAST+label.pop();
return label.join(LABEL_DELIM)+last;
};
/**
* Applies the Timespan to the given date
*
* @param {Date=} date the date to which the timespan is added.
* @return {Date}
*/
Timespan.prototype.addTo = function(date) {
return addToDate(this, date);
};
/**
* Formats the entries as English labels
*
* @private
* @param {Timespan} ts
* @return {Array}
*/
formatList = function(ts) {
var list = [];
var value = ts.millennia;
if (value) {
list.push(plurality(value, LABEL_MILLENNIA));
}
value = ts.centuries;
if (value) {
list.push(plurality(value, LABEL_CENTURIES));
}
value = ts.decades;
if (value) {
list.push(plurality(value, LABEL_DECADES));
}
value = ts.years;
if (value) {
list.push(plurality(value, LABEL_YEARS));
}
value = ts.months;
if (value) {
list.push(plurality(value, LABEL_MONTHS));
}
value = ts.weeks;
if (value) {
list.push(plurality(value, LABEL_WEEKS));
}
value = ts.days;
if (value) {
list.push(plurality(value, LABEL_DAYS));
}
value = ts.hours;
if (value) {
list.push(plurality(value, LABEL_HOURS));
}
value = ts.minutes;
if (value) {
list.push(plurality(value, LABEL_MINUTES));
}
value = ts.seconds;
// if (value) {
list.push(plurality(value, LABEL_SECONDS));
// }
value = ts.milliseconds;
if (value) {
list.push(plurality(value, LABEL_MILLISECONDS));
}
return list;
};
/**
* Borrow any underflow units, carry any overflow units
*
* @private
* @param {Timespan} ts
* @param {string} toUnit
*/
function rippleRounded(ts, toUnit) {
switch (toUnit) {
case 'seconds':
if (ts.seconds !== SECONDS_PER_MINUTE || isNaN(ts.minutes)) {
return;
}
// ripple seconds up to minutes
ts.minutes++;
ts.seconds = 0;
/* falls through */
case 'minutes':
if (ts.minutes !== MINUTES_PER_HOUR || isNaN(ts.hours)) {
return;
}
// ripple minutes up to hours
ts.hours++;
ts.minutes = 0;
/* falls through */
case 'hours':
if (ts.hours !== HOURS_PER_DAY || isNaN(ts.days)) {
return;
}
// ripple hours up to days
ts.days++;
ts.hours = 0;
/* falls through */
case 'days':
if (ts.days !== DAYS_PER_WEEK || isNaN(ts.weeks)) {
return;
}
// ripple days up to weeks
ts.weeks++;
ts.days = 0;
/* falls through */
case 'weeks':
if (ts.weeks !== daysPerMonth(ts.refMonth)/DAYS_PER_WEEK || isNaN(ts.months)) {
return;
}
// ripple weeks up to months
ts.months++;
ts.weeks = 0;
/* falls through */
case 'months':
if (ts.months !== MONTHS_PER_YEAR || isNaN(ts.years)) {
return;
}
// ripple months up to years
ts.years++;
ts.months = 0;
/* falls through */
case 'years':
if (ts.years !== YEARS_PER_DECADE || isNaN(ts.decades)) {
return;
}
// ripple years up to decades
ts.decades++;
ts.years = 0;
/* falls through */
case 'decades':
if (ts.decades !== DECADES_PER_CENTURY || isNaN(ts.centuries)) {
return;
}
// ripple decades up to centuries
ts.centuries++;
ts.decades = 0;
/* falls through */
case 'centuries':
if (ts.centuries !== CENTURIES_PER_MILLENNIUM || isNaN(ts.millennia)) {
return;
}
// ripple centuries up to millennia
ts.millennia++;
ts.centuries = 0;
/* falls through */
}
}
/**
* Ripple up partial units one place
*
* @private
* @param {Timespan} ts timespan
* @param {number} frac accumulated fractional value
* @param {string} fromUnit source unit name
* @param {string} toUnit target unit name
* @param {number} conversion multiplier between units
* @param {number} digits max number of decimal digits to output
* @return {number} new fractional value
*/
function fraction(ts, frac, fromUnit, toUnit, conversion, digits) {
if (ts[fromUnit] >= 0) {
frac += ts[fromUnit];
delete ts[fromUnit];
}
frac /= conversion;
if (frac + 1 <= 1) {
// drop if below machine epsilon
return 0;
}
if (ts[toUnit] >= 0) {
// ensure does not have more than specified number of digits
ts[toUnit] = +(ts[toUnit] + frac).toFixed(digits);
rippleRounded(ts, toUnit);
return 0;
}
return frac;
}
/**
* Ripple up partial units to next existing
*
* @private
* @param {Timespan} ts
* @param {number} digits max number of decimal digits to output
*/
function fractional(ts, digits) {
var frac = fraction(ts, 0, 'milliseconds', 'seconds', MILLISECONDS_PER_SECOND, digits);
if (!frac) { return; }
frac = fraction(ts, frac, 'seconds', 'minutes', SECONDS_PER_MINUTE, digits);
if (!frac) { return; }
frac = fraction(ts, frac, 'minutes', 'hours', MINUTES_PER_HOUR, digits);
if (!frac) { return; }
frac = fraction(ts, frac, 'hours', 'days', HOURS_PER_DAY, digits);
if (!frac) { return; }
frac = fraction(ts, frac, 'days', 'weeks', DAYS_PER_WEEK, digits);
if (!frac) { return; }
frac = fraction(ts, frac, 'weeks', 'months', daysPerMonth(ts.refMonth)/DAYS_PER_WEEK, digits);
if (!frac) { return; }
frac = fraction(ts, frac, 'months', 'years', daysPerYear(ts.refMonth)/daysPerMonth(ts.refMonth), digits);
if (!frac) { return; }
frac = fraction(ts, frac, 'years', 'decades', YEARS_PER_DECADE, digits);
if (!frac) { return; }
frac = fraction(ts, frac, 'decades', 'centuries', DECADES_PER_CENTURY, digits);
if (!frac) { return; }
frac = fraction(ts, frac, 'centuries', 'millennia', CENTURIES_PER_MILLENNIUM, digits);
// should never reach this with remaining fractional value
if (frac) { throw new Error('Fractional unit overflow'); }
}
/**
* Borrow any underflow units, carry any overflow units
*
* @private
* @param {Timespan} ts
*/
function ripple(ts) {
var x;
if (ts.milliseconds < 0) {
// ripple seconds down to milliseconds
x = ceil(-ts.milliseconds / MILLISECONDS_PER_SECOND);
ts.seconds -= x;
ts.milliseconds += x * MILLISECONDS_PER_SECOND;
} else if (ts.milliseconds >= MILLISECONDS_PER_SECOND) {
// ripple milliseconds up to seconds
ts.seconds += floor(ts.milliseconds / MILLISECONDS_PER_SECOND);
ts.milliseconds %= MILLISECONDS_PER_SECOND;
}
if (ts.seconds < 0) {
// ripple minutes down to seconds
x = ceil(-ts.seconds / SECONDS_PER_MINUTE);
ts.minutes -= x;
ts.seconds += x * SECONDS_PER_MINUTE;
} else if (ts.seconds >= SECONDS_PER_MINUTE) {
// ripple seconds up to minutes
ts.minutes += floor(ts.seconds / SECONDS_PER_MINUTE);
ts.seconds %= SECONDS_PER_MINUTE;
}
if (ts.minutes < 0) {
// ripple hours down to minutes
x = ceil(-ts.minutes / MINUTES_PER_HOUR);
ts.hours -= x;
ts.minutes += x * MINUTES_PER_HOUR;
} else if (ts.minutes >= MINUTES_PER_HOUR) {
// ripple minutes up to hours
ts.hours += floor(ts.minutes / MINUTES_PER_HOUR);
ts.minutes %= MINUTES_PER_HOUR;
}
if (ts.hours < 0) {
// ripple days down to hours
x = ceil(-ts.hours / HOURS_PER_DAY);
ts.days -= x;
ts.hours += x * HOURS_PER_DAY;
} else if (ts.hours >= HOURS_PER_DAY) {
// ripple hours up to days
ts.days += floor(ts.hours / HOURS_PER_DAY);
ts.hours %= HOURS_PER_DAY;
}
while (ts.days < 0) {
// NOTE: never actually seen this loop more than once
// ripple months down to days
ts.months--;
ts.days += borrowMonths(ts.refMonth, 1);
}
// weeks is always zero here
if (ts.days >= DAYS_PER_WEEK) {
// ripple days up to weeks
ts.weeks += floor(ts.days / DAYS_PER_WEEK);
ts.days %= DAYS_PER_WEEK;
}
if (ts.months < 0) {
// ripple years down to months
x = ceil(-ts.months / MONTHS_PER_YEAR);
ts.years -= x;
ts.months += x * MONTHS_PER_YEAR;
} else if (ts.months >= MONTHS_PER_YEAR) {
// ripple months up to years
ts.years += floor(ts.months / MONTHS_PER_YEAR);
ts.months %= MONTHS_PER_YEAR;
}
// years is always non-negative here
// decades, centuries and millennia are always zero here
if (ts.years >= YEARS_PER_DECADE) {
// ripple years up to decades
ts.decades += floor(ts.years / YEARS_PER_DECADE);
ts.years %= YEARS_PER_DECADE;
if (ts.decades >= DECADES_PER_CENTURY) {
// ripple decades up to centuries
ts.centuries += floor(ts.decades / DECADES_PER_CENTURY);
ts.decades %= DECADES_PER_CENTURY;
if (ts.centuries >= CENTURIES_PER_MILLENNIUM) {
// ripple centuries up to millennia
ts.millennia += floor(ts.centuries / CENTURIES_PER_MILLENNIUM);
ts.centuries %= CENTURIES_PER_MILLENNIUM;
}
}
}
}
/**
* Remove any units not requested
*
* @private
* @param {Timespan} ts
* @param {number} units the units to populate
* @param {number} max number of labels to output
* @param {number} digits max number of decimal digits to output
*/
function pruneUnits(ts, units, max, digits) {
var count = 0;
// Calc from largest unit to smallest to prevent underflow
if (!(units & MILLENNIA) || (count >= max)) {
// ripple millennia down to centuries
ts.centuries += ts.millennia * CENTURIES_PER_MILLENNIUM;
delete ts.millennia;
} else if (ts.millennia) {
count++;
}
if (!(units & CENTURIES) || (count >= max)) {
// ripple centuries down to decades
ts.decades += ts.centuries * DECADES_PER_CENTURY;
delete ts.centuries;
} else if (ts.centuries) {
count++;
}
if (!(units & DECADES) || (count >= max)) {
// ripple decades down to years
ts.years += ts.decades * YEARS_PER_DECADE;
delete ts.decades;
} else if (ts.decades) {
count++;
}
if (!(units & YEARS) || (count >= max)) {
// ripple years down to months
ts.months += ts.years * MONTHS_PER_YEAR;
delete ts.years;
} else if (ts.years) {
count++;
}
if (!(units & MONTHS) || (count >= max)) {
// ripple months down to days
if (ts.months) {
ts.days += borrowMonths(ts.refMonth, ts.months);
}
delete ts.months;
if (ts.days >= DAYS_PER_WEEK) {
// ripple day overflow back up to weeks
ts.weeks += floor(ts.days / DAYS_PER_WEEK);
ts.days %= DAYS_PER_WEEK;
}
} else if (ts.months) {
count++;
}
if (!(units & WEEKS) || (count >= max)) {
// ripple weeks down to days
ts.days += ts.weeks * DAYS_PER_WEEK;
delete ts.weeks;
} else if (ts.weeks) {
count++;
}
if (!(units & DAYS) || (count >= max)) {
//ripple days down to hours
ts.hours += ts.days * HOURS_PER_DAY;
delete ts.days;
} else if (ts.days) {
count++;
}
if (!(units & HOURS) || (count >= max)) {
// ripple hours down to minutes
ts.minutes += ts.hours * MINUTES_PER_HOUR;
delete ts.hours;
} else if (ts.hours) {
count++;
}
if (!(units & MINUTES) || (count >= max)) {
// ripple minutes down to seconds
ts.seconds += ts.minutes * SECONDS_PER_MINUTE;
delete ts.minutes;
} else if (ts.minutes) {
count++;
}
if (!(units & SECONDS) || (count >= max)) {
// ripple seconds down to milliseconds
ts.milliseconds += ts.seconds * MILLISECONDS_PER_SECOND;
delete ts.seconds;
} else if (ts.seconds) {
count++;
}
// nothing to ripple milliseconds down to
// so ripple back up to smallest existing unit as a fractional value
if (!(units & MILLISECONDS) || (count >= max)) {
fractional(ts, digits);
}
}
/**
* Populates the Timespan object
*
* @private
* @param {Timespan} ts
* @param {?Date} start the starting date
* @param {?Date} end the ending date
* @param {number} units the units to populate
* @param {number} max number of labels to output
* @param {number} digits max number of decimal digits to output
*/
function populate(ts, start, end, units, max, digits) {
var now = new Date();
ts.start = start = start || now;
ts.end = end = end || now;
ts.units = units;
ts.value = end.getTime() - start.getTime();
if (ts.value < 0) {
// swap if reversed
var tmp = end;
end = start;
start = tmp;
}
// reference month for determining days in month
ts.refMonth = new Date(start.getFullYear(), start.getMonth(), 15, 12, 0, 0);
try {
// reset to initial deltas
ts.millennia = 0;
ts.centuries = 0;
ts.decades = 0;
ts.years = end.getFullYear() - start.getFullYear();
ts.months = end.getMonth() - start.getMonth();
ts.weeks = 0;
ts.days = end.getDate() - start.getDate();
ts.hours = end.getHours() - start.getHours();
ts.minutes = end.getMinutes() - start.getMinutes();
ts.seconds = end.getSeconds() - start.getSeconds();
ts.milliseconds = end.getMilliseconds() - start.getMilliseconds();
ripple(ts);
pruneUnits(ts, units, max, digits);
} finally {
delete ts.refMonth;
}
return ts;
}
/**
* Determine an appropriate refresh rate based upon units
*
* @private
* @param {number} units the units to populate
* @return {number} milliseconds to delay
*/
function getDelay(units) {
if (units & MILLISECONDS) {
// refresh very quickly
return MILLISECONDS_PER_SECOND / 30; //30Hz
}
if (units & SECONDS) {
// refresh every second
return MILLISECONDS_PER_SECOND; //1Hz
}
if (units & MINUTES) {
// refresh every minute
return MILLISECONDS_PER_SECOND * SECONDS_PER_MINUTE;
}
if (units & HOURS) {
// refresh hourly
return MILLISECONDS_PER_SECOND * SECONDS_PER_MINUTE * MINUTES_PER_HOUR;
}
if (units & DAYS) {
// refresh daily
return MILLISECONDS_PER_SECOND * SECONDS_PER_MINUTE * MINUTES_PER_HOUR * HOURS_PER_DAY;
}
// refresh the rest weekly
return MILLISECONDS_PER_SECOND * SECONDS_PER_MINUTE * MINUTES_PER_HOUR * HOURS_PER_DAY * DAYS_PER_WEEK;
}
/**
* API entry point
*
* @public
* @param {Date|number|Timespan|null|function(Timespan,number)} start the starting date
* @param {Date|number|Timespan|null|function(Timespan,number)} end the ending date
* @param {number=} units the units to populate
* @param {number=} max number of labels to output
* @param {number=} digits max number of decimal digits to output
* @return {Timespan|number}
*/
function countdown(start, end, units, max, digits) {
var callback;
// ensure some units or use defaults
units = +units || DEFAULTS;
// max must be positive
max = (max > 0) ? max : NaN;
// clamp digits to an integer between [0, 20]
digits = (digits > 0) ? (digits < 20) ? Math.round(digits) : 20 : 0;
// ensure start date
var startTS = null;
if ('function' === typeof start) {
callback = start;
start = null;
} else if (!(start instanceof Date)) {
if ((start !== null) && isFinite(start)) {
start = new Date(+start);
} else {
if ('object' === typeof start) {
startTS = /** @type{Timespan} */(start);
}
start = null;
}
}
// ensure end date
var endTS = null;
if ('function' === typeof end) {
callback = end;
end = null;
} else if (!(end instanceof Date)) {
if ((end !== null) && isFinite(end)) {
end = new Date(+end);
} else {
if ('object' === typeof end) {
endTS = /** @type{Timespan} */(end);
}
end = null;
}
}
// must wait to interpret timespans until after resolving dates
if (startTS) {
start = addToDate(startTS, end);
}
if (endTS) {
end = addToDate(endTS, start);
}
if (!start && !end) {
// used for unit testing
return new Timespan();
}
if (!callback) {
return populate(new Timespan(), /** @type{Date} */(start), /** @type{Date} */(end), /** @type{number} */(units), /** @type{number} */(max), /** @type{number} */(digits));
}
// base delay off units
var delay = getDelay(units),
timerId,
fn = function() {
callback(
populate(new Timespan(), /** @type{Date} */(start), /** @type{Date} */(end), /** @type{number} */(units), /** @type{number} */(max), /** @type{number} */(digits)),
timerId
);
};
fn();
return (timerId = setInterval(fn, delay));
}
/**
* @public
* @const
* @type {number}
*/
countdown.MILLISECONDS = MILLISECONDS;
/**
* @public
* @const
* @type {number}
*/
countdown.SECONDS = SECONDS;
/**
* @public
* @const
* @type {number}
*/
countdown.MINUTES = MINUTES;
/**
* @public
* @const
* @type {number}
*/
countdown.HOURS = HOURS;
/**
* @public
* @const
* @type {number}
*/
countdown.DAYS = DAYS;
/**
* @public
* @const
* @type {number}
*/
countdown.WEEKS = WEEKS;
/**
* @public
* @const
* @type {number}
*/
countdown.MONTHS = MONTHS;
/**
* @public
* @const
* @type {number}
*/
countdown.YEARS = YEARS;
/**
* @public
* @const
* @type {number}
*/
countdown.DECADES = DECADES;
/**
* @public
* @const
* @type {number}
*/
countdown.CENTURIES = CENTURIES;
/**
* @public
* @const
* @type {number}
*/
countdown.MILLENNIA = MILLENNIA;
/**
* @public
* @const
* @type {number}
*/
countdown.DEFAULTS = DEFAULTS;
/**
* @public
* @const
* @type {number}
*/
countdown.ALL = MILLENNIA|CENTURIES|DECADES|YEARS|MONTHS|WEEKS|DAYS|HOURS|MINUTES|SECONDS|MILLISECONDS;
/**
* Override the unit labels
* @public
* @param {string|Array=} singular a pipe ('|') delimited list of singular unit name overrides
* @param {string|Array=} plural a pipe ('|') delimited list of plural unit name overrides
* @param {string=} last a delimiter before the last unit (default: ' and ')
* @param {string=} delim a delimiter to use between all other units (default: ', ')
* @param {string=} empty a label to use when all units are zero (default: '')
* @param {function(number):string=} formatter a function which formats numbers as a string
*/
countdown.setLabels = function(singular, plural, last, delim, empty, formatter) {
singular = singular || [];
if (singular.split) {
singular = singular.split('|');
}
plural = plural || [];
if (plural.split) {
plural = plural.split('|');
}
for (var i=LABEL_MILLISECONDS; i<=LABEL_MILLENNIA; i++) {
// override any specified units
LABELS_SINGLUAR[i] = singular[i] || LABELS_SINGLUAR[i];
LABELS_PLURAL[i] = plural[i] || LABELS_PLURAL[i];
}
LABEL_LAST = ('string' === typeof last) ? last : LABEL_LAST;
LABEL_DELIM = ('string' === typeof delim) ? delim : LABEL_DELIM;
LABEL_NOW = ('string' === typeof empty) ? empty : LABEL_NOW;
formatNumber = ('function' === typeof formatter) ? formatter : formatNumber;
};
/**
* Revert to the default unit labels
* @public
*/
var resetLabels = countdown.resetLabels = function() {
LABELS_SINGLUAR = ' millisecond| second| minute| hour| day| week| month| year| decade| century| millennium'.split('|');
LABELS_PLURAL = ' milliseconds| seconds| minutes| hours| days| weeks| months| years| decades| centuries| millennia'.split('|');
LABEL_LAST = ' and ';
LABEL_DELIM = ', ';
LABEL_NOW = '';
formatNumber = function(value) { return '<span class="contest_timedelta">' + value + "</span>"; };
};
resetLabels();
if (module && module.exports) {
module.exports = countdown;
} else if (typeof window.define === 'function' && typeof window.define.amd !== 'undefined') {
window.define('countdown', [], function() {
return countdown;
});
}
return countdown;
})(module);
| entpy/beauty-and-pics | beauty_and_pics/website/static/website/js/vendor/countdown.js | JavaScript | mit | 27,520 |
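A quick usage sketch for the countdown.js API above (illustrative only: the dates and the `#timer` element are assumptions, and note that this copy's `formatNumber` wraps every number in a `<span class="contest_timedelta">`, so `toString()` and `toHTML()` return HTML fragments):

```javascript
// One-shot mode: measure the span between two dates with chosen units.
var ts = countdown(
  new Date(2014, 0, 1),                 // start
  new Date(2014, 5, 15),                // end
  countdown.DAYS | countdown.HOURS | countdown.MINUTES | countdown.SECONDS
);
console.log(ts.days, ts.hours, ts.minutes, ts.seconds); // 165 0 0 0

// Timer mode: passing a callback starts a setInterval whose rate is
// derived from the smallest requested unit (1 Hz here, via getDelay).
var el = document.getElementById('timer'); // assumed to exist in the page
var timerId = countdown(
  new Date(),                            // count up from "now"
  function (ts) { el.innerHTML = ts.toHTML('strong'); },
  countdown.HOURS | countdown.MINUTES | countdown.SECONDS
);
// Stop it later with clearInterval(timerId).
```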
---
layout: page
title: Avila - Wu Wedding
date: 2016-05-24
author: Abigail Heath
tags: weekly links, java
status: published
summary: Vestibulum enim odio, dapibus non turpis.
banner: images/banner/people.jpg
booking:
startDate: 05/10/2016
endDate: 05/13/2016
ctyhocn: HTSBVHX
groupCode: AWW
published: true
---
Nam luctus finibus nisi vel accumsan. Nunc luctus diam orci, sed sodales mi luctus quis. Donec eget aliquet augue. Nunc eleifend, nisi id vulputate vehicula, eros dui iaculis velit, ac feugiat lectus diam quis tortor. Nam vitae elementum nisi. Suspendisse sed blandit diam. Cras id sodales magna. Integer quam neque, feugiat in venenatis eget, convallis id velit. Donec posuere lectus tincidunt, malesuada sapien ac, lacinia ante. Pellentesque ex risus, volutpat id augue ac, scelerisque ullamcorper mauris. Nam ac metus mauris. Etiam leo mauris, auctor eget pellentesque eu, aliquam sit amet neque. Quisque eget eleifend dolor. Aenean venenatis odio a est egestas commodo quis quis nulla. Duis luctus velit vitae pulvinar elementum. Curabitur quis tincidunt ex.
* Nam imperdiet purus at ante efficitur, ut elementum lectus facilisis
* Cras non elit at mauris lacinia eleifend id a orci
* Nulla pretium odio non varius cursus.
Aliquam erat volutpat. Mauris aliquet nisi et metus porta pulvinar. Curabitur ornare eros eu posuere lacinia. Mauris et tortor gravida, ultrices massa ut, auctor ex. Donec non pharetra nisl. Maecenas augue nibh, hendrerit sed lobortis nec, malesuada eu metus. Proin sollicitudin fermentum tortor et tincidunt. Cras quis tristique odio. Aenean molestie iaculis ornare. Quisque ac nunc arcu. Suspendisse quis mollis est. Maecenas feugiat sit amet nulla vitae condimentum.
Vivamus dictum mi sit amet ultrices tristique. Quisque sit amet venenatis est. Donec vulputate malesuada purus sed finibus. Nunc id justo quis odio vulputate pellentesque a nec arcu. Etiam felis eros, placerat eget odio in, lobortis congue massa. Suspendisse elementum fermentum consectetur. Aliquam diam sapien, mattis sit amet volutpat id, gravida ac lorem. Vestibulum dignissim nibh eu porta sagittis. Aliquam facilisis rhoncus egestas. Sed semper vel eros at lobortis. Quisque non mi massa. Vestibulum feugiat diam ex, eu aliquam mi pharetra id. Nam faucibus sollicitudin nibh, et ultricies ligula porttitor ullamcorper.
| KlishGroup/prose-pogs | pogs/H/HTSBVHX/AWW/index.md | Markdown | mit | 2,344 |
file(REMOVE_RECURSE
"CMakeFiles/coverage_polymorphic.dir/polymorphic.cpp.o"
"../../../coverage/coverage_polymorphic.pdb"
"../../../coverage/coverage_polymorphic"
)
# Per-language clean rules from dependency scanning.
foreach(lang CXX)
include(CMakeFiles/coverage_polymorphic.dir/cmake_clean_${lang}.cmake OPTIONAL)
endforeach()
| noahhsmith/starid | libstarid/cereal-1.2.2/unittests/CMakeFiles/coverage_polymorphic.dir/cmake_clean.cmake | CMake | mit | 337 |
var formMode="detail"; /*formMode: page mode; the page has three modes: detail, add, modify*/
var panelType="form"; /*panelType: panel type; form = the form, search = the query panel, child = the child-table object*/
var editIndex = undefined; /*row index of the datagrid row being edited*/
var dg1EditIndex = undefined;
var objName=label.objName; /*name of the object managed by this page*/
var lblDetailStr=label.detailStr; /*should differ per language*/
var lblAddStr=label.addStr; /*should differ per language*/
var lblEditStr=label.editStr; /*should differ per language*/
var pageName=null; /*the button definitions can be looked up via pageName*/
var pageHeight=0; /*pageHeight: page height*/
var topHeight=366; /*datagrid height*/
var dgHeadHeight=28; /*datagrid height when collapsed*/
var downHeight=30; /*bottom bar height*/
var paddingHeight=11; /*inner padding height: paddingTop+paddingBottom*/
var gridToolbar = null; /*button definitions*/
var dgConf=null; /*dgConf configuration*/
var dg1Conf=null;
function initConf(){} /*initialize all configuration for this page here*/
function initButton(){
for(var i=0;i<gridToolbar.length;i++){
var b=gridToolbar[i];/*all buttons start out disabled on first run*/
$("#"+b.id).linkbutton({iconCls: b.iconCls,text:b.text,disabled:true,handler:b.handler,plain:1});
}
}
function initBtnDisabled() {
var btnDisabled=[{"id":"btn_refresh"},{"id":"btn_search"}];
for(var i=0;i<btnDisabled.length;i++) {
$('#'+btnDisabled[i].id).linkbutton('enable');
}
}
function component() {
initConf();
if(window.innerHeight) pageHeight=window.innerHeight;
else pageHeight=document.documentElement.clientHeight;
$('#middle').css("height",pageHeight-topHeight-downHeight-paddingHeight);
$('#tab').tabs({
onSelect:tab_select,
fit:true
});
/*这时候可能还没有key 所以不能直接绑定dom对象,只能使用dom id*/
installKey("btn_collapse",Keys.f1,null,null,null);
installKey("btn_edit",Keys.f2,null,null,null);
installKey("btn_search",Keys.f3,null,null,null);
installKey("btn_add",Keys.f4,null,null,null);
installKey("btn_delete",Keys.del,null,null,null);
installKey("btn2_save",Keys.s,true,null,null);
installKey("btn2_search",Keys.q,true,null,null);
installKey("btn2_edit",Keys.e,true,null,null);
document.onhelp=function(){return false}; /*suppress IE's F1 help key*/
window.onhelp=function(){return false}; /*suppress IE's F1 help key*/
$('#btn2_save').linkbutton({iconCls: 'icon-save'}).click(btn2_save);
$('#btn2_edit').linkbutton({iconCls: 'icon-save'}).click(btn2_update);
$('#btn2_search').linkbutton({iconCls: 'icon-search'}).click(btn2_search);
$('#btn2_addItem').linkbutton({iconCls: 'icon-add'}).click(btn2_addItem);
$('#btn2_editItem').linkbutton({iconCls: 'icon-edit'}).click(btn2_editItem);
$('#btn2_rmItem').linkbutton({iconCls: 'icon-remove'}).click(btn2_rmItem);
$('#btn2_ok').linkbutton({iconCls: 'icon-ok'}).click(btn2_ok);
dgConf.toolbar='#tb';
dgConf.onCollapse=dg_collapse;
dgConf.onSelect=dg_select;
dgConf.singleSelect=true;
dgConf.onLoadSuccess=dg_load;
dgConf.onClickRow=dg_click;
dgConf.onDblClickRow=dg_dbl;
dgConf.onExpand=dg_expand;
dgConf.collapsible=true;
dgConf.collapseID="btn_collapse";
dgConf.pagination=true;
dgConf.fit=true;
dgConf.rownumbers=true;
dgConf.singleSelect=true;
dg1Conf.onClickRow=dg1_click;
dg1Conf.onDblClickRow=dg1_dbl;
$("#dg").datagrid(dgConf);
initButton();
initBtnDisabled();
$('#top').css("height","auto");
lov_init();
$(".formChild").height(pageHeight-topHeight-downHeight-paddingHeight-dgHeadHeight-1);
//$("#ff1 input").attr("readonly",1); /*详细表单的输入框只读*/
}
function showChildGrid(param){/*triggered by the dg select event*/
$("#dg1").datagrid(dg1Conf);
}
function showForm(row){/*triggered by the dg select event*/
//$("#ff1").form("load",row);
//$("#ff2").form("load",row);;
}
function dg_collapse(){/*after collapsing, the tabs always have to be adjusted, which triggers the tab_select event, so the panel's selected property must be set to true beforehand*/
var panel=$("#tab").tabs("getSelected"); /*first get the selected object*/
if(panel!=null) panel.panel({selected:1});
$('#middle').css("height",pageHeight-dgHeadHeight-downHeight-paddingHeight);
$(".formChild").height(pageHeight-dgHeadHeight-downHeight-paddingHeight-dgHeadHeight-1);
$("#tab").tabs({fit:true,stopSelect:true});/*the tab has changed, which triggers the tab_select event*/
if(panel!=null) panel.panel({selected:0});
}
function dg_expand(){
var panel=$("#tab").tabs("getSelected");
if(panel!=null) panel.panel({selected:1});
$('#middle').css("height",pageHeight-topHeight-downHeight-paddingHeight);
$(".formChild").height(pageHeight-topHeight-downHeight-paddingHeight-dgHeadHeight-1);
$("#tab").tabs({fit:true,stopSelect:true});
if(panel!=null) panel.panel({selected:0});
}
function dg_load(){/*select the first row*/
$('#mask').css('display', "none");
$('#dg').datagrid('selectRow', 0);
}
function dg_select(rowIndex, rowData){/*selection event: populate ff1, ff2 and dg1*/
	showChildGrid(rowData);/*in child-grid mode, redraw the child grid*/
showForm(rowData,"add");
useDetailMode();
}
function dg_add(){/*grid "add" button event*/
useAddMode();
}
function dg_edit(){/*grid "edit" button event*/
var row=$('#dg').datagrid('getSelected');
if(row){
useEditMode();
}
	else $.messager.alert('Selection required', 'Please select the row you want to edit!',"info");
}
function dg_delete(){/*grid "delete" button event*/
var confirmBack=function(r){
if(!r) return;
var p=$('#dg').datagrid('getRowIndex',$('#dg').datagrid('getSelected'));
		/*send the server request to delete the data on the server side, then remove it on the client side*/
if (p == undefined){return}
$('#dg').datagrid('cancelEdit', p)
.datagrid('deleteRow', p);
		/*after a successful delete, refresh the page and select the next row*/
var currRows=$('#dg').datagrid('getRows').length;
if(p>=currRows) p--;
		if(p>=0) $('#dg').datagrid('selectRow', p);/*if we were at the end, p-1 gets selected*/
}
var row=$('#dg').datagrid('getSelected');
	if(row) $.messager.confirm('Confirm', 'Are you sure you want to delete this row?', confirmBack);
	else $.messager.alert('Selection required', 'Please select the row you want to delete!',"info");
}
function dg_refresh(){/*grid "refresh" button event*/
}
function dg_search(){/*grid search event; search mode no longer disables the other panels*/
panelType="search";
$('#tab').tabs("select",1);
}
function dg_click(index){
	/*switch back to detail mode; first check which tab is currently selected*/
if(panelType=="search"){
$('#tab').tabs("select",0);
}
}
function dg_dbl(){/*grid double-click event: double-click enters edit mode*/
	document.getElementById("btn_edit").click();/*double-click is equivalent to clicking the edit button*/
}
function tab_select(title,index){/*on tab switch, update which buttons are shown*/
$('#down a').css("display","none");
	if(index==0){/*generate buttons (add, edit) based on the grid's state*/
		$('#btn2_addItem').css("display","inline-block");/*add-row button*/
		$('#btn2_editItem').css("display","inline-block");/*edit-row button*/
		$('#btn2_rmItem').css("display","inline-block");/*remove-row button*/
		$('#btn2_ok').css("display","inline-block");/*commit button*/
}
	else if(index==1){/*search tab: switching to it is equivalent to clicking the search button*/
		panelType="search";
		$('#btn2_search').css("display","inline-block");/*search button*/
}
}
function useDetailMode(row){
//formMode="detail";
//$('#ff2').css("display","none");
//$('#ff1').css("display","block");
//if(panelType=="search") $('#tab').tabs("select",0);
//else tab_select();
}
function btn2_addItem(){
	if(dg1_endEditing()){/*ended the editing state successfully*/
var p=$('#dg1').datagrid('getRowIndex',$('#dg1').datagrid('getSelected'));
		/*send the server request to delete the data on the server side, then remove it on the client side*/
if (p == undefined){return}
$('#dg1').datagrid('unselectAll');
$('#dg1').datagrid('insertRow',{index:p+1,row:{}})
.datagrid('beginEdit', p+1)
.datagrid('selectRow', p+1);
dg1EditIndex=p+1;
}
else{
$('#dg1').datagrid('selectRow', dg1EditIndex);
}
}
function btn2_editItem(){
var index=$('#dg1').datagrid('getRowIndex', $('#dg1').datagrid('getSelected'));
if (dg1EditIndex != index){
if (dg1_endEditing()){
$('#dg1').datagrid('selectRow', index)
.datagrid('beginEdit', index);
dg1EditIndex = index;
} else {
$('#dg1').datagrid('selectRow', dg1EditIndex);
}
}
}
function btn2_rmItem(){
var confirmBack=function(r){
if(!r) return;
var p=$('#dg1').datagrid('getRowIndex',$('#dg1').datagrid('getSelected'));
if (p == undefined){return}
$('#dg1').datagrid('cancelEdit', p)
.datagrid('deleteRow', p);
var currRows=$('#dg1').datagrid('getRows').length;
if(p>=currRows) p--;
		if(p>=0) $('#dg1').datagrid('selectRow', p);/*if we were at the end, p-1 gets selected*/
}
var row=$('#dg1').datagrid('getSelected');
	if(row) $.messager.confirm('Confirm', 'Are you sure you want to delete this row?', confirmBack);
	else $.messager.alert('Selection required', 'Please select the row you want to delete!',"info");
}
function dg1_endEditing(){
if (dg1EditIndex == undefined){return true}
var flag=$('#dg1').datagrid('validateRow',dg1EditIndex);
	if(flag){/*if validation passes, allow ending the edit state*/
$('#dg1').datagrid('endEdit', dg1EditIndex);
dg1EditIndex = undefined;
return true;
}
return false;
}
function dg1_click(index){/*child grid click event: in edit mode, opens editing*/
if (dg1EditIndex != index){
dg1_endEditing();
}
}
function dg1_dbl(index){/*child grid double-click event: double-click enters edit mode*/
	document.getElementById("btn2_editItem").click();/*double-click is equivalent to clicking the edit button*/
}
function useAddMode(){};
function useEditMode(){};
function form_change(type){}/*type= add|edit*/
function removeValidate(){}/*type= enable|remove*/
function btn2_save(){}
function btn2_update(){}
function btn2_search(){}
function btn2_ok(){}
function lov_init(){}/*bind the lists of values*/
| ldjking/wbscreen | web/wb/2tp/template/js/common/copy/a3.js | JavaScript | mit | 9,914 |
FROM ruby:2.3.3
RUN apt-get update && apt-get install -y \
#Packages
net-tools \
nodejs
#Install phantomjs
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ca-certificates \
bzip2 \
libfontconfig \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
curl \
&& mkdir /tmp/phantomjs \
&& curl -L https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2 \
| tar -xj --strip-components=1 -C /tmp/phantomjs \
&& cd /tmp/phantomjs \
&& mv bin/phantomjs /usr/local/bin \
&& cd \
&& apt-get purge --auto-remove -y \
curl \
&& apt-get clean \
&& rm -rf /tmp/* /var/lib/apt/lists/*
#Install gems
RUN mkdir /app
WORKDIR /app
COPY Gemfile* /app/
RUN bundle install
RUN apt-get clean
#Upload source
COPY . /app
RUN useradd ruby
RUN chown -R ruby /app
USER ruby
# Database defaults
ENV DATABASE_NAME bookIT
ENV DATABASE_HOST db
ENV DATABASE_USER bookIT
ENV DATABASE_PASSWORD password
ENV DATABASE_ADAPTER mysql2
ENV ACCOUNT_ADDRESS https://gamma.chalmers.it
#In production, Host is set to naboo.chalmers.it
# Start server
ENV RAILS_ENV production
ENV RACK_ENV production
ENV SECRET_KEY_BASE secret
ENV PORT 3000
EXPOSE 3000
RUN rake assets:precompile
CMD ["sh", "start.sh"]
| cthit/bookIT | Dockerfile | Dockerfile | mit | 1,372 |
## Capistrano
[](http://travis-ci.org/capistrano/capistrano)[](https://codeclimate.com/github/capistrano/capistrano)
Capistrano is a utility and framework for executing commands in parallel on
multiple remote machines, via SSH. It uses a simple DSL (borrowed in part from
[Rake](http://rake.rubyforge.org/)) that allows you to define _tasks_, which may
be applied to machines in certain roles. It also supports tunneling connections
via some gateway machine to allow operations to be performed behind VPNs and
firewalls.
Capistrano was originally designed to simplify and automate deployment of web
applications to distributed environments, and originally came bundled with a set
of tasks designed for deploying Rails applications.
## Documentation
* [https://github.com/capistrano/capistrano/wiki](https://github.com/capistrano/capistrano/wiki)
## DEPENDENCIES
* [Net::SSH](http://net-ssh.rubyforge.org)
* [Net::SFTP](http://net-ssh.rubyforge.org)
* [Net::SCP](http://net-ssh.rubyforge.org)
* [Net::SSH::Gateway](http://net-ssh.rubyforge.org)
* [HighLine](http://highline.rubyforge.org)
* [Ruby](http://www.ruby-lang.org/en/) ≥ 1.8.7
If you want to run the tests, you'll also need to install the dependencies with
Bundler; see the `Gemfile` within this repository.
## ASSUMPTIONS
Capistrano is "opinionated software", which means it has very firm ideas about
how things ought to be done, and tries to force those ideas on you. Some of the
assumptions behind these opinions are:
* You are using SSH to access the remote servers.
* You either have the same password to all target machines, or you have public
keys in place to allow passwordless access to them.
Do not expect these assumptions to change.
## USAGE
In general, you'll use Capistrano as follows:
* Create a recipe file ("capfile" or "Capfile").
* Use the `cap` script to execute your recipe.
Use the `cap` script as follows:
cap sometask
By default, the script will look for a file called one of `capfile` or
`Capfile`. The `sometask` text indicates which task to execute. You can do
"cap -h" to see all the available options and "cap -T" to see all the available
tasks.
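
For illustration, a minimal recipe file might look like the following sketch
(the role, host name, and task are placeholders, not part of any real setup):

    role :web, "www.example.com"

    desc "Show how long each web server has been up"
    task :uptime, :roles => :web do
      run "uptime"
    end

Running `cap uptime` would then execute the `uptime` command on every server
in the `:web` role.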
## CONTRIBUTING:
* Fork Capistrano
* Create a topic branch - `git checkout -b my_branch`
* Rebase your branch so that all your changes are reflected in one
commit
* Push to your branch - `git push origin my_branch`
* Create a Pull Request from your branch, include as much documentation
as you can in the commit message/pull request, following these
[guidelines on writing a good commit message](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html)
* That's it!
## LICENSE:
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
'Software'), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
| piousbox/microsites2-cities | vendor/ruby/1.9.1/gems/capistrano-2.15.4/README.md | Markdown | mit | 3,795 |
/*
* Jermit
*
* The MIT License (MIT)
*
* Copyright (C) 2018 Kevin Lamonte
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* @author Kevin Lamonte [[email protected]]
* @version 1
*/
package jermit.protocol.zmodem;
/**
* ZEofHeader represents the end of a file.
*/
class ZEofHeader extends Header {
// ------------------------------------------------------------------------
// Constructors -----------------------------------------------------------
// ------------------------------------------------------------------------
/**
* Public constructor.
*/
public ZEofHeader() {
this(0);
}
/**
* Public constructor.
*
* @param data the data field for this header
*/
public ZEofHeader(final int data) {
super(Type.ZEOF, (byte) 0x0B, "ZEOF", data);
}
// ------------------------------------------------------------------------
// Header -----------------------------------------------------------------
// ------------------------------------------------------------------------
// ------------------------------------------------------------------------
// ZEofHeader -------------------------------------------------------------
// ------------------------------------------------------------------------
/**
* Get the file size value.
*
* @return the value
*/
public int getFileSize() {
return data;
}
}
| klamonte/jermit | src/jermit/protocol/zmodem/ZEofHeader.java | Java | mit | 2,504 |
# fullstack-course4-submissions | aaronblair/fullstack-course4-submissions | README.md | Markdown | mit | 31 |
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
public class FormLoader {
public static String connectionString = "jdbc:hsqldb:file:db-data/teamsandplayers";
static Connection con;
public static void main(String[] args) throws Exception {
try {
Class.forName("org.hsqldb.jdbc.JDBCDriver");
} catch (ClassNotFoundException e) {
throw e;
}
MainTeamForm form = new MainTeamForm();
form.setVisible(true);
try {
// will create DB if does not exist
// "SA" is default user with hypersql
con = DriverManager.getConnection(connectionString, "SA", "");
} catch (SQLException e) {
throw e;
} finally {
con.close();
System.out.println("Program complete");
}
}
}
| a-r-d/java-1-class-demos | jframe-actionlistener-access-db-cxn/homework-start/Week13Assignment10/src/FormLoader.java | Java | mit | 788 |
<?php
namespace Memento\Test;
use Memento;
class SingleTest extends Harness
{
/** @dataProvider provideClients */
public function testStoreMethod(Memento\Client $client)
{
$success = $client->store($this->getKey(), array('foo' => 'bar'), $this->getExpires());
$this->assertTrue($success);
$this->assertEquals($this->getExpires(), $client->getExpires($this->getKey()));
$this->assertEquals($this->getExpires(), $client->getTtl($this->getKey())); // default should be the same as expires
// store with ttl
$success = $client->store($this->getKey(), array('foo' => 'bar'), $this->getExpires(), $this->getTtl());
$this->assertTrue($success);
$this->assertLessThanOrEqual($this->getExpires(), $client->getExpires($this->getKey()));
$this->assertLessThanOrEqual($this->getTtl(), $client->getTtl($this->getKey()));
}
/** @dataProvider provideClients */
public function testExists(Memento\Client $client)
{
$client->store($this->getKey(), true);
$exists = $client->exists($this->getKey());
$this->assertTrue($exists);
}
/** @dataProvider provideClients */
public function testRetrieve(Memento\Client $client)
{
$client->store($this->getKey(), array('foo' => 'bar'));
$data = $client->retrieve($this->getKey());
$this->assertEquals($data, array('foo' => 'bar'));
}
/** @dataProvider provideClients */
public function testInvalidRetrieve(Memento\Client $client)
{
$data = $client->retrieve(new Memento\Key(md5(time() . rand(0, 1000))));
$this->assertEquals($data, null);
}
/** @dataProvider provideClients */
public function testInvalidate(Memento\Client $client)
{
$client->store($this->getKey(), true);
$invalid = $client->invalidate($this->getKey());
$this->assertTrue($invalid);
$exists = $client->exists($this->getKey());
$this->assertFalse($exists);
}
/** @dataProvider provideClients */
public function testTerminate(Memento\Client $client)
{
$client->store($this->getKey(), true);
$terminated = $client->terminate($this->getKey());
$this->assertTrue($terminated);
$exists = $client->exists($this->getKey());
$this->assertFalse($exists);
}
/** @dataProvider provideClients */
public function testExpires(Memento\Client $client)
{
$client->store($this->getKey(), array('foo' => 'bar'), 1, $ttl = 5);
sleep(3);
$exists = $client->exists($this->getKey());
$this->assertFalse($exists);
// check if cache exists but include expired caches
$exists = $client->exists($this->getKey(), true);
$this->assertTrue($exists);
$client->store($this->getKey(), array('foo' => 'bar'), $this->getExpires(), $this->getTtl());
$this->assertTrue($client->exists($this->getKey()));
$client->expire($this->getKey());
sleep(1);
$this->assertFalse($client->exists($this->getKey()));
// check if cache exists but include expired caches
$exists = $client->exists($this->getKey(), true);
$this->assertTrue($exists);
}
}
| garyr/memento | test/Memento/Test/SingleTest.php | PHP | mit | 3,252 |
/*
Copyright (c) 2015 Shaps Mohsenin. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY Shaps Mohsenin `AS IS' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
EVENT SHALL Shaps Mohsenin OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
@import UIKit;
#import "SPXDataView.h"
/**
 *  Provides tableView-specific definitions of a dataView
*/
@interface UITableView (SPXDataViewAdditions) <SPXDataView>
/**
 *  Gets/sets the block to execute when the tableView requests a cell
*/
@property (nonatomic, copy) UITableViewCell *(^viewForItemAtIndexPathBlock)(UITableView *tableView, id object, NSIndexPath *indexPath);
/**
 *  Gets/sets the block to execute when the tableView requests the cell to be configured
*/
@property (nonatomic, copy) void (^configureViewForItemAtIndexPathBlock)(UITableView *tableView, UITableViewCell *cell, id object, NSIndexPath *indexPath);
/**
 *  Gets/sets the block to execute when the tableView requests a section header
*/
@property (nonatomic, copy) NSString *(^titleForHeaderInSectionBlock)(UITableView *tableView, NSUInteger section);
/**
 *  Gets/sets the block to execute when the tableView requests a section footer
*/
@property (nonatomic, copy) NSString *(^titleForFooterInSectionBlock)(UITableView *tableView, NSUInteger section);
/**
 *  Gets/sets the block to execute when the tableView requests whether or not a cell can be moved
*/
@property (nonatomic, copy) BOOL (^canMoveItemAtIndexPathBlock)(UITableView *tableView, UITableViewCell *cell, id object, NSIndexPath *indexPath);
/**
 *  Gets/sets the block to execute when the tableView requests whether or not a cell can be edited
*/
@property (nonatomic, copy) BOOL (^canEditItemAtIndexPathBlock)(UITableView *tableView, UITableViewCell *cell, id object, NSIndexPath *indexPath);
/**
 *  Gets/sets the block to execute when the tableView commits an editing action for a cell
*/
@property (nonatomic, copy) void (^commitEditingStyleForItemAtIndexPathBlock)(UITableView *tableView, UITableViewCell *cell, id object, NSIndexPath *indexPath);
/**
 *  Gets/sets the block to execute when the tableView moves a cell
*/
@property (nonatomic, copy) void (^moveItemAtSourceIndexPathToDestinationIndexPathBlock)(UITableView *tableView, NSIndexPath *sourceIndexPath, NSIndexPath *destinationIndexPath);
@end
| shaps80/SPXCore | Example/Pods/SPXDataSources/Pod/Classes/DataViews/UITableView+SPXDataViewAdditions.h | C | mit | 3,424 |
//
// Generated by class-dump 3.5 (64 bit).
//
// class-dump is Copyright (C) 1997-1998, 2000-2001, 2004-2013 by Steve Nygard.
//
#import "CDStructures.h"
@interface _IDEKitPrivateClassForFindingBundle : NSObject
{
}
@end
| kolinkrewinkel/Multiplex | Multiplex/IDEHeaders/IDEHeaders/IDEKit/_IDEKitPrivateClassForFindingBundle.h | C | mit | 234 |
<html lang="en">
<head>
<title>C - Debugging with GDB</title>
<meta http-equiv="Content-Type" content="text/html">
<meta name="description" content="Debugging with GDB">
<meta name="generator" content="makeinfo 4.8">
<link title="Top" rel="start" href="index.html#Top">
<link rel="up" href="Supported-Languages.html#Supported-Languages" title="Supported Languages">
<link rel="next" href="D.html#D" title="D">
<link href="http://www.gnu.org/software/texinfo/" rel="generator-home" title="Texinfo Homepage">
<!--
Copyright (C) 1988-2017 Free Software Foundation, Inc.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with the
Invariant Sections being ``Free Software'' and ``Free Software Needs
Free Documentation'', with the Front-Cover Texts being ``A GNU Manual,''
and with the Back-Cover Texts as in (a) below.
(a) The FSF's Back-Cover Text is: ``You are free to copy and modify
this GNU Manual. Buying copies from GNU Press supports the FSF in
developing GNU and promoting software freedom.''
-->
<meta http-equiv="Content-Style-Type" content="text/css">
<style type="text/css"><!--
pre.display { font-family:inherit }
pre.format { font-family:inherit }
pre.smalldisplay { font-family:inherit; font-size:smaller }
pre.smallformat { font-family:inherit; font-size:smaller }
pre.smallexample { font-size:smaller }
pre.smalllisp { font-size:smaller }
span.sc { font-variant:small-caps }
span.roman { font-family:serif; font-weight:normal; }
span.sansserif { font-family:sans-serif; font-weight:normal; }
--></style>
</head>
<body>
<div class="node">
<p>
<a name="C"></a>
Next: <a rel="next" accesskey="n" href="D.html#D">D</a>,
Up: <a rel="up" accesskey="u" href="Supported-Languages.html#Supported-Languages">Supported Languages</a>
<hr>
</div>
<h4 class="subsection">15.4.1 C and C<tt>++</tt></h4>
<p><a name="index-C-and-C_0040t_007b_002b_002b_007d-947"></a><a name="index-expressions-in-C-or-C_0040t_007b_002b_002b_007d-948"></a>
Since C and C<tt>++</tt> are so closely related, many features of <span class="sc">gdb</span> apply
to both languages. Whenever this is the case, we discuss those languages
together.
<p><a name="index-C_0040t_007b_002b_002b_007d-949"></a><a name="index-g_t_0040code_007bg_002b_002b_007d_002c-_0040sc_007bgnu_007d-C_0040t_007b_002b_002b_007d-compiler-950"></a><a name="index-g_t_0040sc_007bgnu_007d-C_0040t_007b_002b_002b_007d-951"></a>The C<tt>++</tt> debugging facilities are jointly implemented by the C<tt>++</tt>
compiler and <span class="sc">gdb</span>. Therefore, to debug your C<tt>++</tt> code
effectively, you must compile your C<tt>++</tt> programs with a supported
C<tt>++</tt> compiler, such as <span class="sc">gnu</span> <code>g++</code>, or the HP ANSI C<tt>++</tt>
compiler (<code>aCC</code>).
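<p>For example, a typical session begins by compiling the program with
debugging information (the <samp><span class="option">-g</span></samp> option)
and then loading it into <span class="sc">gdb</span>:</p>
<pre class="smallexample">     g++ -g myprog.cc -o myprog
     gdb myprog
</pre>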
<ul class="menu">
<li><a accesskey="1" href="C-Operators.html#C-Operators">C Operators</a>: C and C<tt>++</tt> operators
<li><a accesskey="2" href="C-Constants.html#C-Constants">C Constants</a>: C and C<tt>++</tt> constants
<li><a accesskey="3" href="C-Plus-Plus-Expressions.html#C-Plus-Plus-Expressions">C Plus Plus Expressions</a>: C<tt>++</tt> expressions
<li><a accesskey="4" href="C-Defaults.html#C-Defaults">C Defaults</a>: Default settings for C and C<tt>++</tt>
<li><a accesskey="5" href="C-Checks.html#C-Checks">C Checks</a>: C and C<tt>++</tt> type and range checks
<li><a accesskey="6" href="Debugging-C.html#Debugging-C">Debugging C</a>: <span class="sc">gdb</span> and C
<li><a accesskey="7" href="Debugging-C-Plus-Plus.html#Debugging-C-Plus-Plus">Debugging C Plus Plus</a>: <span class="sc">gdb</span> features for C<tt>++</tt>
<li><a accesskey="8" href="Decimal-Floating-Point.html#Decimal-Floating-Point">Decimal Floating Point</a>: Numbers in Decimal Floating Point format
</ul>
</body></html>
| ChangsoonKim/STM32F7DiscTutor | toolchain/osx/gcc-arm-none-eabi-6-2017-q1-update/share/doc/gcc-arm-none-eabi/html/gdb/C.html | HTML | mit | 4,070 |
Answer these questions in your reflection:
What git concepts were you struggling with prior to the GPS session?
- Prior to the GPS session I was having trouble navigating between branches. I was also completely confused about remote and fetch. I thought that you could just use the command git pull, which would fetch and merge in one step.
What concepts were clarified during the GPS?
- Using git checkout moves between branches.
What questions did you ask your pair and the guide?
- I asked them questions about what was troubling me and that cleared things up. I am still a little fuzzy on fetch / remote, but I know that will come with more practice. Git pull is also a compact way to fetch and merge in one.
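
To cement this for myself, `git pull` is essentially shorthand for the two commands below (assuming a remote named `origin` and a branch called `master` — the names can differ per project):

```
git fetch origin          # download new commits from the remote
git merge origin/master   # merge them into the current branch
```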
What still confuses you about git?
- When using the remote I am still not completely sure what it does. I will need to do more research and practice while I work on the HTML this week.
How was your first experience of pairing in a GPS?
- My first experience was great! I really enjoyed working with my partner and the guide had some great pointers. Once again my feelings toward DBC are getting better and better as the days go on. I am having a great time learning things that interest me. | mikelondon/phase-0-gps-1 | london-reflection.md | Markdown | mit | 1,202 |
#!/usr/bin/env python3
"""
Categorize and analyze user sessions.
Read in ecfs_obfuscated_filtered.gz file, output some fancy results.
"""
from collections import defaultdict
from collections import Counter
import sys
import time
import os
import resource
import json
import fnmatch
from pipes import Pipes
import operator
from operation import Operation
KB = 1024
MB = KB * 1024
GB = MB * 1024
TB = GB * 1024
PB = TB * 1024
MONITOR_LINES = 100000
class UserSession():
def __init__(self, user_id):
self.user_id = user_id
self.from_ts = 0
self.till_ts = 0
self.get_requests = 0
self.reget_requests = 0
self.put_requests = 0
self.get_bytes = 0
self.put_bytes = 0
self.rename_requests = 0
self.del_requests = 0
self.get_dirs = 0
self.put_dirs = 0
self.put_files_per_dir = 0.0
self.get_files_per_dir = 0.0
self.window_seconds = 0
self.file_cnt_gets = Counter()
self.file_cnt_puts = Counter()
self.dir_cnt_gets = Counter()
self.dir_cnt_puts = Counter()
self.num_ops = 0
self.last_ts = 0
def add_op(self, op):
self.num_ops += 1
if op.ts < self.last_ts:
raise Exception("Timestamp too old")
else:
self.last_ts = op.ts
if op.optype == 'g':
self.get_requests += 1
self.get_bytes += op.size
self.file_cnt_gets[op.obj_id] += 1
self.dir_cnt_gets[op.parent_dir_id] += 1
elif op.optype == 'p':
self.put_requests += 1
self.put_bytes += op.size
self.file_cnt_puts[op.obj_id] += 1
self.dir_cnt_puts[op.parent_dir_id] += 1
elif op.optype == 'd':
self.del_requests += 1
elif op.optype == 'r':
self.rename_requests += 1
#update last time stamp in the session
self.till_ts = op.ts + op.execution_time
def finish(self):
self.get_dirs = len(self.dir_cnt_gets)
if self.get_dirs > 0:
self.get_files_per_dir = float(self.get_requests) / self.get_dirs
self.put_dirs = len(self.dir_cnt_puts)
if self.put_dirs > 0:
self.put_files_per_dir = float(self.put_requests) / self.put_dirs
"""
set reget_counter
:param counter: contains [ 1, 1, 5] counts of objects. value > 1 is a re-retrieval.
:return:
"""
for c in self.file_cnt_gets.values():
if c > 1:
self.reget_requests += (c - 1)
# self.announce()
return ";".join([str(x) for x in [
self.user_id,
self.from_ts,
self.till_ts,
self.till_ts - self.from_ts,
self.get_requests,
self.reget_requests,
self.put_requests,
self.get_bytes,
self.put_bytes,
self.rename_requests,
self.del_requests,
self.get_dirs,
self.put_dirs,
self.put_files_per_dir,
self.get_files_per_dir,
self.window_seconds
]]
)
def announce(self):
print("closed session. gets: %r, regets: %r, puts: %r, dels: %r, renames: %r get_dirs: %r, put_dirs: %r, get_bytes: %r put_bytes: %r window_seconds: %d" % \
(self.get_requests, self.reget_requests, self.put_requests, self.del_requests, self.rename_requests, self.get_dirs, self.put_dirs, self.get_bytes, self.put_bytes, self.window_seconds))
def find_clusters(atimes):
foo = Counter()
bar = dict()
    for i in range(120, 3660, 10):  # range, not xrange: this script targets Python 3
clusters = get_clusters(atimes, i)
cs = len(clusters)
foo[cs] += 1
# note first occurance of this cluster size.
if cs not in bar:
bar[cs] = i
# print(len(atimes), i, cs)
return bar[foo.most_common()[0][0]]
def get_clusters(data, maxgap):
'''Arrange data into groups where successive elements
differ by no more than *maxgap*
        >>> get_clusters([1, 6, 9, 100, 102, 105, 109, 134, 139], maxgap=10)
        [[1, 6, 9], [100, 102, 105, 109], [134, 139]]
        >>> get_clusters([1, 6, 9, 99, 100, 102, 105, 134, 139, 141], maxgap=10)
[[1, 6, 9], [99, 100, 102, 105], [134, 139, 141]]
'''
data.sort()
groups = [[data[0]]]
for x in data[1:]:
if abs(x - groups[-1][-1]) <= maxgap:
groups[-1].append(x)
else:
groups.append([x])
return groups
def analyze_user_session(user_session_file, out_pipeline, target_file_name):
with open(user_session_file, 'r') as sf:
ops = list()
atimes = list()
for line in sf:
op = Operation()
op.init(line.strip())
ops.append(op)
atimes.append(op.ts)
ops.sort(key=operator.attrgetter('ts'))
atimes.sort()
window_seconds = find_clusters(atimes)
session_counter = 1
uf = os.path.basename(user_session_file)
user_id = uf[:uf.find(".user_session.csv")]
session = UserSession(user_id)
session.window_seconds = window_seconds
for op in ops:
if session.from_ts == 0:
session.from_ts = op.ts
session.till_ts = op.ts + op.execution_time
if (session.till_ts + window_seconds) < op.ts:
# this session is over, so archive it.
out_pipeline.write_to(target_file_name, session.finish())
del session
session = UserSession(user_id)
session.window_seconds = window_seconds
session_counter += 1
session.add_op(op)
if session.num_ops > 0:
out_pipeline.write_to(target_file_name, session.finish())
print("sessions: %d with window_seconds: %d" %(session_counter, window_seconds))
if __name__ == "__main__":
source_dir = os.path.abspath(sys.argv[1])
result = os.path.abspath(sys.argv[2])
results_dir = os.path.dirname(result)
target_file_name = os.path.basename(result)
users_session_files = [os.path.join(dirpath, f)
for dirpath, dirnames, files in os.walk(source_dir)
for f in fnmatch.filter(files, '*.user_session.csv')]
#remove the old log file, as outpipe is append only.
if os.path.exists(os.path.join(results_dir, target_file_name)):
os.remove(os.path.join(results_dir, target_file_name))
out_pipe = Pipes(results_dir)
csv_header = ";".join(["user_id",
"from_ts",
"till_ts",
"session_lifetime",
"get_requests",
"reget_requests",
"put_requests",
"get_bytes",
"put_bytes",
"rename_requests",
"del_requests",
"get_dirs",
"put_dirs",
"put_files_per_dir",
"get_files_per_dir",
"window_seconds"
])
out_pipe.write_to(target_file_name, csv_header)
cnt = 0
for sf in users_session_files:
cnt += 1
print ("working on %d/%d" % (cnt, len(users_session_files)))
analyze_user_session(sf, out_pipe, target_file_name)
# if cnt >=20:
# break
out_pipe.close()
print("wrote results to %s: " % (os.path.join(results_dir, target_file_name)))
| zdvresearch/fast15-paper-extras | ecfs_user_sessions/src/analyze_user_sessions.py | Python | mit | 7,526 |
package esl
import (
"io"
"errors"
"unicode/utf8"
)
// Buffer ...
type buffer []byte
// MemoryReader ...
type memReader [ ]byte
// MemoryWriter ...
type memWriter [ ]byte
// ErrBufferSize indicates that memory cannot be allocated to store data in a buffer.
var ErrBufferSize = errors.New(`could not allocate memory`)
func newBuffer( size int ) *buffer {
buf := make([ ]byte, 0, size )
return (*buffer)(&buf)
}
func ( buf *buffer ) reader( ) *memReader {
n := len( *buf )
rbuf := ( *buf )[:n:n]
return ( *memReader )( &rbuf )
}
func ( buf *buffer ) writer( ) *memWriter {
return ( *memWriter )( buf )
}
func ( buf *buffer ) grow( n int ) error {
if ( len( *buf )+ n ) > cap( *buf ) {
// Not enough space to store [:+(n)]byte(s)
mbuf, err := makebuf( cap( *buf )+ n )
if ( err != nil ) {
return ( err )
}
copy( mbuf, *buf )
*( buf ) = mbuf
}
return nil
}
// allocates a byte slice of size.
// If the allocation fails, returns error
// indicating that memory cannot be allocated to store data in a buffer.
func makebuf( size int ) ( buf [ ]byte, memerr error ) {
defer func( ) {
// If the make fails, give a known error.
if ( recover( ) != nil ) {
( memerr ) = ErrBufferSize
}
}( )
return make( [ ]byte, 0, size ), nil
}
func ( buf *memReader ) Read( b [ ]byte ) ( n int, err error ) {
if len( *buf ) == 0 {
return ( 0 ), io.EOF
}
	n = copy( b, *buf ) /* assign n first: slicing with the old n (always 0) would never advance the buffer */
	*buf = ( *buf )[ n: ]
	return	// n, nil
}
func ( buf *memReader ) ReadByte( ) ( c byte, err error ) {
if len(*buf) == 0 {
return ( 0 ), io.EOF
}
c, *buf = (*buf)[0], (*buf)[1:]
return // c, nil
}
func ( buf *memReader ) ReadRune( ) ( r rune, size int, err error ) {
if len(*buf) == 0 {
return 0, 0, io.EOF
}
r, size = utf8.DecodeRune(*buf)
*buf = (*buf)[size:]
return // r, size, nil
}
func ( buf *memReader ) WriteTo( w io.Writer ) ( n int64, err error ) {
for len( *buf ) > 0 {
rw, err := w.Write( *buf )
if ( rw > 0 ) {
n, *buf = n + int64( rw ), (*buf)[rw:]
}
if ( err != nil ) {
return n, err
}
}
	return n, nil /* buffer fully drained; an io.WriterTo must not report io.EOF */
}
func ( buf *memWriter ) Write( b []byte ) ( n int, err error ) {
*buf = append( *buf, b...)
return len( b ), nil
}
func ( buf *memWriter ) WriteByte( c byte ) error {
*buf = append( *buf, c )
return ( nil )
}
func ( buf *memWriter ) WriteRune( r rune ) error {
if ( r < utf8.RuneSelf ) {
return buf.WriteByte( byte( r ))
}
b := *buf
n := len( b )
if ( n + utf8.UTFMax ) > cap( b ) {
b = make( []byte, ( n + utf8.UTFMax ))
copy( b, *buf )
}
w := utf8.EncodeRune( b[ n:( n + utf8.UTFMax )], r )
*buf = b[ :( n + w )]
return nil
}
func ( buf *memWriter ) WriteString( s string ) ( n int, err error ) {
*buf = append( *buf, s...)
return len( s ), nil
}
// func (buf *memWriter) ReadFrom(r io.Reader) (n int64, err error) {
// // NOTE: indefinite allocation! Try to use io.WriterTo interface!
// } | navrotskyj/acr | src/pkg/esl/io.go | GO | mit | 2,905 |
#ifdef __OBJC__
#import <UIKit/UIKit.h>
#else
#ifndef FOUNDATION_EXPORT
#if defined(__cplusplus)
#define FOUNDATION_EXPORT extern "C"
#else
#define FOUNDATION_EXPORT extern
#endif
#endif
#endif
FOUNDATION_EXPORT double Pods_WZYUnlimitedScrollViewDemoVersionNumber;
FOUNDATION_EXPORT const unsigned char Pods_WZYUnlimitedScrollViewDemoVersionString[];
| CoderZYWang/WZYUnlimitedScrollView | WZYUnlimitedScrollViewDemo/Pods/Target Support Files/Pods-WZYUnlimitedScrollViewDemo/Pods-WZYUnlimitedScrollViewDemo-umbrella.h | C | mit | 354 |
package com.zimbra.cs.versioncheck;
import java.io.IOException;
import java.util.Iterator;
import java.util.List;
import java.util.Date;
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;
import com.zimbra.common.util.ZimbraLog;
import com.zimbra.common.account.Key;
import com.zimbra.common.account.Key.ServerBy;
import com.zimbra.common.service.ServiceException;
import com.zimbra.common.soap.AdminConstants;
import com.zimbra.common.soap.SoapFaultException;
import com.zimbra.common.soap.SoapTransport;
import com.zimbra.common.util.CliUtil;
import com.zimbra.cs.account.Config;
import com.zimbra.cs.account.Provisioning;
import com.zimbra.cs.account.Server;
import com.zimbra.cs.client.LmcSession;
import com.zimbra.cs.client.soap.LmcSoapClientException;
import com.zimbra.cs.client.soap.LmcVersionCheckRequest;
import com.zimbra.cs.client.soap.LmcVersionCheckResponse;
import com.zimbra.cs.util.BuildInfo;
import com.zimbra.cs.util.SoapCLI;
import com.zimbra.common.util.DateUtil;
/**
* @author Greg Solovyev
*/
public class VersionCheckUtil extends SoapCLI {
private static final String OPT_CHECK_VERSION = "c";
private static final String OPT_MANUAL_CHECK_VERSION = "m";
private static final String SHOW_LAST_STATUS = "r";
protected VersionCheckUtil() throws ServiceException {
super();
}
public static void main(String[] args) {
CliUtil.toolSetup();
SoapTransport.setDefaultUserAgent("zmcheckversion", BuildInfo.VERSION);
VersionCheckUtil util = null;
try {
util = new VersionCheckUtil();
} catch (ServiceException e) {
System.err.println(e.getMessage());
System.exit(1);
}
try {
util.setupCommandLineOptions();
CommandLine cl = null;
try {
cl = util.getCommandLine(args);
} catch (ParseException e) {
System.out.println(e.getMessage());
util.usage();
System.exit(1);
}
if (cl == null) {
System.exit(1);
}
if (cl.hasOption(OPT_CHECK_VERSION)) {
//check schedule
Provisioning prov = Provisioning.getInstance();
Config config;
config = prov.getConfig();
String updaterServerId = config.getAttr(Provisioning.A_zimbraVersionCheckServer);
if (updaterServerId != null) {
Server server = prov.get(Key.ServerBy.id, updaterServerId);
if (server != null) {
Server localServer = prov.getLocalServer();
if (localServer!=null) {
if(!localServer.getId().equalsIgnoreCase(server.getId())) {
System.out.println("Wrong server");
System.exit(0);
}
}
}
}
String versionInterval = config.getAttr(Provisioning.A_zimbraVersionCheckInterval);
if(versionInterval == null || versionInterval.length()==0 || versionInterval.equalsIgnoreCase("0")) {
System.out.println("Automatic updates are disabled");
System.exit(0);
} else {
long checkInterval = DateUtil.getTimeIntervalSecs(versionInterval,0);
String lastAttempt = config.getAttr(Provisioning.A_zimbraVersionCheckLastAttempt);
if(lastAttempt != null) {
Date lastChecked = DateUtil.parseGeneralizedTime(config.getAttr(Provisioning.A_zimbraVersionCheckLastAttempt));
Date now = new Date();
if (now.getTime()/1000- lastChecked.getTime()/1000 >= checkInterval) {
util.doVersionCheck();
} else {
System.out.println("Too early");
System.exit(0);
}
} else {
util.doVersionCheck();
}
}
} else if (cl.hasOption(OPT_MANUAL_CHECK_VERSION)) {
util.doVersionCheck();
} else if (cl.hasOption(SHOW_LAST_STATUS)) {
util.doResult();
System.exit(0);
} else {
util.usage();
System.exit(1);
}
} catch (Exception e) {
System.err.println(e.getMessage());
ZimbraLog.extensions.error("Error in versioncheck util", e);
util.usage(null);
System.exit(1);
}
}
private void doVersionCheck() throws SoapFaultException, IOException, ServiceException, LmcSoapClientException {
LmcSession session = auth();
LmcVersionCheckRequest req = new LmcVersionCheckRequest();
req.setAction(AdminConstants.VERSION_CHECK_CHECK);
req.setSession(session);
req.invoke(getServerUrl());
}
private void doResult() throws SoapFaultException, IOException, ServiceException, LmcSoapClientException {
try {
LmcSession session = auth();
LmcVersionCheckRequest req = new LmcVersionCheckRequest();
req.setAction(AdminConstants.VERSION_CHECK_STATUS);
req.setSession(session);
LmcVersionCheckResponse res = (LmcVersionCheckResponse) req.invoke(getServerUrl());
List <VersionUpdate> updates = res.getUpdates();
for(Iterator <VersionUpdate> iter = updates.iterator();iter.hasNext();){
VersionUpdate update = iter.next();
String critical;
if(update.isCritical()) {
critical = "critical";
} else {
critical = "not critical";
}
System.out.println(
String.format("Found a %s update. Update is %s . Update version: %s. For more info visit: %s",
update.getType(),critical,update.getVersion(),update.getUpdateURL())
);
}
} catch (SoapFaultException soape) {
System.out.println("Cought SoapFaultException");
System.out.println(soape.getStackTrace().toString());
throw (soape);
} catch (LmcSoapClientException lmce) {
System.out.println("Cought LmcSoapClientException");
System.out.println(lmce.getStackTrace().toString());
throw (lmce);
} catch (ServiceException se) {
System.out.println("Cought ServiceException");
System.out.println(se.getStackTrace().toString());
throw (se);
} catch (IOException ioe) {
System.out.println("Cought IOException");
System.out.println(ioe.getStackTrace().toString());
throw (ioe);
}
}
protected void setupCommandLineOptions() {
// super.setupCommandLineOptions();
Options options = getOptions();
Options hiddenOptions = getHiddenOptions();
hiddenOptions.addOption(OPT_CHECK_VERSION, "autocheck", false, "Initiate version check request (exits if zimbraVersionCheckInterval==0)");
options.addOption(SHOW_LAST_STATUS, "result", false, "Show results of last version check.");
options.addOption(OPT_MANUAL_CHECK_VERSION, "manual", false, "Initiate version check request.");
}
protected String getCommandUsage() {
return "zmcheckversion <options>";
}
}
| nico01f/z-pec | ZimbraAdminVersionCheck/src/java/com/zimbra/cs/versioncheck/VersionCheckUtil.java | Java | mit | 7,334 |
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace EmployeeFinder.Models
{
public enum Position
{
Bartender,
Waiter,
Bellboy,
Receptionist,
Manager,
Housekeeper,
Chef,
        Maintenance
}
}
| GeorgiNik/EmployeeFinder | EmployeeFinder.Models/Position.cs | C# | mit | 340 |
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Domain
{
public class Meeting
{
public int ConsultantId { get; set; }
public Consultant Consultant { get; set; }
public int UserId { get; set; }
public User User { get; set; }
public DateTime BeginTime { get; set; }
public DateTime EndTime { get; set; }
public override string ToString()
{
return $"{BeginTime} -> {EndTime}";
}
}
}
| rohansen/Code-Examples | Database/TransactionScopeWithGUI/Domain/Meeting.cs | C# | mit | 559 |
module PiwikAnalytics
module Helpers
def piwik_tracking_tag
config = PiwikAnalytics.configuration
return if config.disabled?
if config.use_async?
file = "piwik_analytics/piwik_tracking_tag_async"
else
file = "piwik_analytics/piwik_tracking_tag"
end
render({
:file => file,
:locals => {:url => config.url, :id_site => config.id_site}
})
end
end
end
| piwik/piwik-ruby-tracking | lib/piwik_analytics/helpers.rb | Ruby | mit | 435 |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@author Stephan Reith
@date 31.08.2016
This is a simple example to demonstrate how the ROS Spinnaker Interface can be used.
You will also need a ROS Listener and a ROS Talker to send and receive data.
Make sure they communicate over the same ROS topics and std_msgs.Int64 ROS Messages used in here.
"""
import spynnaker.pyNN as pynn
from ros_spinnaker_interface import ROS_Spinnaker_Interface
# import transfer_functions as tf
from ros_spinnaker_interface import SpikeSourcePoisson
from ros_spinnaker_interface import SpikeSinkSmoothing
ts = 0.1
n_neurons = 1
simulation_time = 10000 # ms
pynn.setup(timestep=ts, min_delay=ts, max_delay=2.0*ts)
pop = pynn.Population(size=n_neurons, cellclass=pynn.IF_curr_exp, cellparams={}, label='pop')
# The ROS_Spinnaker_Interface just needs to be initialised. The following parameters are possible:
ros_interface = ROS_Spinnaker_Interface(
n_neurons_source=n_neurons, # number of neurons of the injector population
Spike_Source_Class=SpikeSourcePoisson, # the transfer function ROS Input -> Spikes you want to use.
Spike_Sink_Class=SpikeSinkSmoothing, # the transfer function Spikes -> ROS Output you want to use.
# You can choose from the transfer_functions module
# or write one yourself.
output_population=pop, # the pynn population you wish to receive the
# live spikes from.
ros_topic_send='to_spinnaker', # the ROS topic used for the incoming ROS values.
ros_topic_recv='from_spinnaker', # the ROS topic used for the outgoing ROS values.
clk_rate=1000, # mainloop clock (update) rate in Hz.
ros_output_rate=10) # number of ROS messages send out per second.
# Build your network, run the simulation and optionally record the spikes and voltages.
pynn.Projection(ros_interface, pop, pynn.OneToOneConnector(weights=5, delays=1))
pop.record()
pop.record_v()
pynn.run(simulation_time)
spikes = pop.getSpikes()
pynn.end()
# Plot
import pylab
spike_times = [spike[1] for spike in spikes]
spike_ids = [spike[0] for spike in spikes]
pylab.plot(spike_times, spike_ids, ".")
pylab.xlabel('Time (ms)')
pylab.ylabel('Neuron ID')
pylab.title('Spike Plot')
pylab.xlim(xmin=0)
pylab.show()
| reiths/ros_spinnaker_interface | examples/example_ros_spinnaker_interface.py | Python | mit | 2,533 |
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>area-method: Not compatible 👼</title>
<link rel="shortcut icon" type="image/png" href="../../../../../favicon.png" />
<link href="../../../../../bootstrap.min.css" rel="stylesheet">
<link href="../../../../../bootstrap-custom.css" rel="stylesheet">
<link href="//maxcdn.bootstrapcdn.com/font-awesome/4.2.0/css/font-awesome.min.css" rel="stylesheet">
<script src="../../../../../moment.min.js"></script>
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/html5shiv/3.7.2/html5shiv.min.js"></script>
<script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
<![endif]-->
</head>
<body>
<div class="container">
<div class="navbar navbar-default" role="navigation">
<div class="container-fluid">
<div class="navbar-header">
<a class="navbar-brand" href="../../../../.."><i class="fa fa-lg fa-flag-checkered"></i> Coq bench</a>
</div>
<div id="navbar" class="collapse navbar-collapse">
<ul class="nav navbar-nav">
<li><a href="../..">clean / released</a></li>
<li class="active"><a href="">8.13.1 / area-method - 8.5.0</a></li>
</ul>
</div>
</div>
</div>
<div class="article">
<div class="row">
<div class="col-md-12">
<a href="../..">« Up</a>
<h1>
area-method
<small>
8.5.0
<span class="label label-info">Not compatible 👼</span>
</small>
</h1>
<p>📅 <em><script>document.write(moment("2022-02-04 18:52:19 +0000", "YYYY-MM-DD HH:mm:ss Z").fromNow());</script> (2022-02-04 18:52:19 UTC)</em><p>
<h2>Context</h2>
<pre># Packages matching: installed
# Name # Installed # Synopsis
base-bigarray base
base-threads base
base-unix base
conf-findutils 1 Virtual package relying on findutils
conf-gmp 4 Virtual package relying on a GMP lib system installation
coq 8.13.1 Formal proof management system
num 1.4 The legacy Num library for arbitrary-precision integer and rational arithmetic
ocaml 4.07.1 The OCaml compiler (virtual package)
ocaml-base-compiler 4.07.1 Official release 4.07.1
ocaml-config 1 OCaml Switch Configuration
ocamlfind 1.9.3 A library manager for OCaml
zarith 1.12 Implements arithmetic and logical operations over arbitrary-precision integers
# opam file:
opam-version: "2.0"
maintainer: "[email protected]"
homepage: "https://github.com/coq-contribs/area-method"
license: "Proprietary"
build: [make "-j%{jobs}%"]
install: [make "install"]
remove: ["rm" "-R" "%{lib}%/coq/user-contrib/AreaMethod"]
depends: [
"ocaml"
"coq" {>= "8.5" & < "8.6~"}
]
tags: [ "keyword:geometry" "keyword:chou gao zhang area method" "keyword:decision procedure" "category:Mathematics/Geometry/AutomatedDeduction" "date:2004-2010" ]
authors: [ "Julien Narboux <>" ]
bug-reports: "https://github.com/coq-contribs/area-method/issues"
dev-repo: "git+https://github.com/coq-contribs/area-method.git"
synopsis: "The Chou, Gao and Zhang area method"
description: """
This contribution is the implementation of the Chou, Gao and Zhang's area method decision procedure for euclidean plane geometry.
This development contains a partial formalization of the book "Machine Proofs in Geometry, Automated Production of Readable Proofs for Geometry Theorems" by Chou, Gao and Zhang.
The examples shown automatically (there are more than 100 examples) includes the Ceva, Desargues, Menelaus, Pascal, Centroïd, Pappus, Gauss line, Euler line, Napoleon theorems.
Changelog
2.1 : remove some not needed assumptions in some elimination lemmas (2010)
2.0 : extension implementation to Euclidean geometry (2009-2010)
1.0 : first implementation for affine geometry (2004)"""
flags: light-uninstall
url {
src: "https://github.com/coq-contribs/area-method/archive/v8.5.0.tar.gz"
checksum: "md5=ba9772aa2056aa4bc9ccc051a9a76a7f"
}
</pre>
<h2>Lint</h2>
<dl class="dl-horizontal">
<dt>Command</dt>
<dd><code>true</code></dd>
<dt>Return code</dt>
<dd>0</dd>
</dl>
<h2>Dry install 🏜️</h2>
<p>Dry install with the current Coq version:</p>
<dl class="dl-horizontal">
<dt>Command</dt>
<dd><code>opam install -y --show-action coq-area-method.8.5.0 coq.8.13.1</code></dd>
<dt>Return code</dt>
<dd>5120</dd>
<dt>Output</dt>
<dd><pre>[NOTE] Package coq is already installed (current version is 8.13.1).
The following dependencies couldn't be met:
- coq-area-method -> coq < 8.6~ -> ocaml < 4.06.0
base of this switch (use `--unlock-base' to force)
No solution found, exiting
</pre></dd>
</dl>
<p>Dry install without Coq/switch base, to test if the problem was incompatibility with the current Coq/OCaml version:</p>
<dl class="dl-horizontal">
<dt>Command</dt>
<dd><code>opam remove -y coq; opam install -y --show-action --unlock-base coq-area-method.8.5.0</code></dd>
<dt>Return code</dt>
<dd>0</dd>
</dl>
<h2>Install dependencies</h2>
<dl class="dl-horizontal">
<dt>Command</dt>
<dd><code>true</code></dd>
<dt>Return code</dt>
<dd>0</dd>
<dt>Duration</dt>
<dd>0 s</dd>
</dl>
<h2>Install 🚀</h2>
<dl class="dl-horizontal">
<dt>Command</dt>
<dd><code>true</code></dd>
<dt>Return code</dt>
<dd>0</dd>
<dt>Duration</dt>
<dd>0 s</dd>
</dl>
<h2>Installation size</h2>
<p>No files were installed.</p>
<h2>Uninstall 🧹</h2>
<dl class="dl-horizontal">
<dt>Command</dt>
<dd><code>true</code></dd>
<dt>Return code</dt>
<dd>0</dd>
<dt>Missing removes</dt>
<dd>
none
</dd>
<dt>Wrong removes</dt>
<dd>
none
</dd>
</dl>
</div>
</div>
</div>
<hr/>
<div class="footer">
<p class="text-center">
Sources are on <a href="https://github.com/coq-bench">GitHub</a> © Guillaume Claret 🐣
</p>
</div>
</div>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<script src="../../../../../bootstrap.min.js"></script>
</body>
</html>
| coq-bench/coq-bench.github.io | clean/Linux-x86_64-4.07.1-2.0.6/released/8.13.1/area-method/8.5.0.html | HTML | mit | 7,612 |
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>ieee754: Not compatible</title>
<link rel="shortcut icon" type="image/png" href="../../../../../favicon.png" />
<link href="../../../../../bootstrap.min.css" rel="stylesheet">
<link href="../../../../../bootstrap-custom.css" rel="stylesheet">
<link href="//maxcdn.bootstrapcdn.com/font-awesome/4.2.0/css/font-awesome.min.css" rel="stylesheet">
<script src="../../../../../moment.min.js"></script>
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/html5shiv/3.7.2/html5shiv.min.js"></script>
<script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
<![endif]-->
</head>
<body>
<div class="container">
<div class="navbar navbar-default" role="navigation">
<div class="container-fluid">
<div class="navbar-header">
<a class="navbar-brand" href="../../../../.."><i class="fa fa-lg fa-flag-checkered"></i> Coq bench</a>
</div>
<div id="navbar" class="collapse navbar-collapse">
<ul class="nav navbar-nav">
<li><a href="../..">clean / released</a></li>
<li class="active"><a href="">8.9.1 / ieee754 - 8.7.0</a></li>
</ul>
</div>
</div>
</div>
<div class="article">
<div class="row">
<div class="col-md-12">
<a href="../..">« Up</a>
<h1>
ieee754
<small>
8.7.0
<span class="label label-info">Not compatible</span>
</small>
</h1>
<p><em><script>document.write(moment("2020-08-24 17:47:06 +0000", "YYYY-MM-DD HH:mm:ss Z").fromNow());</script> (2020-08-24 17:47:06 UTC)</em><p>
<h2>Context</h2>
<pre># Packages matching: installed
# Name # Installed # Synopsis
base-bigarray base
base-num base Num library distributed with the OCaml compiler
base-threads base
base-unix base
camlp5 7.12 Preprocessor-pretty-printer of OCaml
conf-findutils 1 Virtual package relying on findutils
conf-m4 1 Virtual package relying on m4
coq 8.9.1 Formal proof management system
num 0 The Num library for arbitrary-precision integer and rational arithmetic
ocaml 4.05.0 The OCaml compiler (virtual package)
ocaml-base-compiler 4.05.0 Official 4.05.0 release
ocaml-config 1 OCaml Switch Configuration
ocamlfind 1.8.1 A library manager for OCaml
# opam file:
opam-version: "2.0"
maintainer: "[email protected]"
homepage: "https://github.com/coq-contribs/ieee754"
license: "LGPL 2.1"
build: [make "-j%{jobs}%"]
install: [make "install"]
remove: ["rm" "-R" "%{lib}%/coq/user-contrib/IEEE754"]
depends: [
"ocaml"
"coq" {>= "8.7" & < "8.8~"}
]
tags: [ "keyword: floating-point arithmetic" "keyword: floats" "keyword: IEEE" "category: Computer Science/Data Types and Data Structures" "category: Computer Science/Semantics and Compilation/Semantics" "date: 1997" ]
authors: [ "Patrick Loiseleur" ]
bug-reports: "https://github.com/coq-contribs/ieee754/issues"
dev-repo: "git+https://github.com/coq-contribs/ieee754.git"
synopsis: "A formalisation of the IEEE754 norm on floating-point arithmetic"
description: """
This library contains a non-verified implementation of
binary floating-point addition and multiplication operators inspired
by the IEEE-754 standard. It is today outdated.
See the attached 1997 report rapport-stage-dea.ps.gz for a discussion
(in French) of this work.
For the state of the art at the time of updating this notice, see
e.g. "Flocq: A Unified Library for Proving Floating-point Algorithms
in Coq" by S. Boldo and G. Melquiond, 2011."""
flags: light-uninstall
url {
src: "https://github.com/coq-contribs/ieee754/archive/v8.7.0.tar.gz"
checksum: "md5=c79fabb9831e0231bc5ce75f3be6aad7"
}
</pre>
<h2>Lint</h2>
<dl class="dl-horizontal">
<dt>Command</dt>
<dd><code>true</code></dd>
<dt>Return code</dt>
<dd>0</dd>
</dl>
<h2>Dry install</h2>
<p>Dry install with the current Coq version:</p>
<dl class="dl-horizontal">
<dt>Command</dt>
<dd><code>opam install -y --show-action coq-ieee754.8.7.0 coq.8.9.1</code></dd>
<dt>Return code</dt>
<dd>5120</dd>
<dt>Output</dt>
<dd><pre>[NOTE] Package coq is already installed (current version is 8.9.1).
The following dependencies couldn't be met:
- coq-ieee754 -> coq < 8.8~ -> ocaml < 4.03.0
base of this switch (use `--unlock-base' to force)
Your request can't be satisfied:
- No available version of coq satisfies the constraints
No solution found, exiting
</pre></dd>
</dl>
<p>Dry install without Coq/switch base, to test if the problem was incompatibility with the current Coq/OCaml version:</p>
<dl class="dl-horizontal">
<dt>Command</dt>
<dd><code>opam remove -y coq; opam install -y --show-action --unlock-base coq-ieee754.8.7.0</code></dd>
<dt>Return code</dt>
<dd>0</dd>
</dl>
<h2>Install dependencies</h2>
<dl class="dl-horizontal">
<dt>Command</dt>
<dd><code>true</code></dd>
<dt>Return code</dt>
<dd>0</dd>
<dt>Duration</dt>
<dd>0 s</dd>
</dl>
<h2>Install</h2>
<dl class="dl-horizontal">
<dt>Command</dt>
<dd><code>true</code></dd>
<dt>Return code</dt>
<dd>0</dd>
<dt>Duration</dt>
<dd>0 s</dd>
</dl>
<h2>Installation size</h2>
<p>No files were installed.</p>
<h2>Uninstall</h2>
<dl class="dl-horizontal">
<dt>Command</dt>
<dd><code>true</code></dd>
<dt>Return code</dt>
<dd>0</dd>
<dt>Missing removes</dt>
<dd>
none
</dd>
<dt>Wrong removes</dt>
<dd>
none
</dd>
</dl>
</div>
</div>
</div>
<hr/>
<div class="footer">
<p class="text-center">
<small>Sources are on <a href="https://github.com/coq-bench">GitHub</a>. © Guillaume Claret.</small>
</p>
</div>
</div>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<script src="../../../../../bootstrap.min.js"></script>
</body>
</html>
| coq-bench/coq-bench.github.io | clean/Linux-x86_64-4.05.0-2.0.6/released/8.9.1/ieee754/8.7.0.html | HTML | mit | 7,485 |
/**
* @license
* Copyright Google Inc. All Rights Reserved.
*
* Use of this source code is governed by an MIT-style license that can be
* found in the LICENSE file at https://angular.io/license
*/
import {CompileDirectiveMetadata, CompileStylesheetMetadata, CompileTemplateMetadata, templateSourceUrl} from './compile_metadata';
import {CompilerConfig, preserveWhitespacesDefault} from './config';
import {ViewEncapsulation} from './core';
import * as html from './ml_parser/ast';
import {HtmlParser} from './ml_parser/html_parser';
import {InterpolationConfig} from './ml_parser/interpolation_config';
import {ParseTreeResult as HtmlParseTreeResult} from './ml_parser/parser';
import {ResourceLoader} from './resource_loader';
import {extractStyleUrls, isStyleUrlResolvable} from './style_url_resolver';
import {PreparsedElementType, preparseElement} from './template_parser/template_preparser';
import {UrlResolver} from './url_resolver';
import {isDefined, stringify, SyncAsync, syntaxError} from './util';
export interface PrenormalizedTemplateMetadata {
ngModuleType: any;
componentType: any;
moduleUrl: string;
template: string|null;
templateUrl: string|null;
styles: string[];
styleUrls: string[];
interpolation: [string, string]|null;
encapsulation: ViewEncapsulation|null;
animations: any[];
preserveWhitespaces: boolean|null;
}
export class DirectiveNormalizer {
private _resourceLoaderCache = new Map<string, SyncAsync<string>>();
constructor(
private _resourceLoader: ResourceLoader, private _urlResolver: UrlResolver,
private _htmlParser: HtmlParser, private _config: CompilerConfig) {}
clearCache(): void {
this._resourceLoaderCache.clear();
}
clearCacheFor(normalizedDirective: CompileDirectiveMetadata): void {
if (!normalizedDirective.isComponent) {
return;
}
    const template = normalizedDirective.template!;
this._resourceLoaderCache.delete(template.templateUrl!);
template.externalStylesheets.forEach((stylesheet) => {
this._resourceLoaderCache.delete(stylesheet.moduleUrl!);
});
}
private _fetch(url: string): SyncAsync<string> {
let result = this._resourceLoaderCache.get(url);
if (!result) {
result = this._resourceLoader.get(url);
this._resourceLoaderCache.set(url, result);
}
return result;
}
normalizeTemplate(prenormData: PrenormalizedTemplateMetadata):
SyncAsync<CompileTemplateMetadata> {
if (isDefined(prenormData.template)) {
if (isDefined(prenormData.templateUrl)) {
throw syntaxError(`'${
stringify(prenormData
.componentType)}' component cannot define both template and templateUrl`);
}
if (typeof prenormData.template !== 'string') {
throw syntaxError(`The template specified for component ${
stringify(prenormData.componentType)} is not a string`);
}
} else if (isDefined(prenormData.templateUrl)) {
if (typeof prenormData.templateUrl !== 'string') {
throw syntaxError(`The templateUrl specified for component ${
stringify(prenormData.componentType)} is not a string`);
}
} else {
throw syntaxError(
`No template specified for component ${stringify(prenormData.componentType)}`);
}
if (isDefined(prenormData.preserveWhitespaces) &&
typeof prenormData.preserveWhitespaces !== 'boolean') {
throw syntaxError(`The preserveWhitespaces option for component ${
stringify(prenormData.componentType)} must be a boolean`);
}
return SyncAsync.then(
this._preParseTemplate(prenormData),
(preparsedTemplate) => this._normalizeTemplateMetadata(prenormData, preparsedTemplate));
}
private _preParseTemplate(prenomData: PrenormalizedTemplateMetadata):
SyncAsync<PreparsedTemplate> {
let template: SyncAsync<string>;
let templateUrl: string;
if (prenomData.template != null) {
template = prenomData.template;
templateUrl = prenomData.moduleUrl;
} else {
templateUrl = this._urlResolver.resolve(prenomData.moduleUrl, prenomData.templateUrl!);
template = this._fetch(templateUrl);
}
return SyncAsync.then(
template, (template) => this._preparseLoadedTemplate(prenomData, template, templateUrl));
}
private _preparseLoadedTemplate(
prenormData: PrenormalizedTemplateMetadata, template: string,
templateAbsUrl: string): PreparsedTemplate {
const isInline = !!prenormData.template;
const interpolationConfig = InterpolationConfig.fromArray(prenormData.interpolation!);
const templateUrl = templateSourceUrl(
{reference: prenormData.ngModuleType}, {type: {reference: prenormData.componentType}},
{isInline, templateUrl: templateAbsUrl});
const rootNodesAndErrors = this._htmlParser.parse(
template, templateUrl, {tokenizeExpansionForms: true, interpolationConfig});
if (rootNodesAndErrors.errors.length > 0) {
const errorString = rootNodesAndErrors.errors.join('\n');
throw syntaxError(`Template parse errors:\n${errorString}`);
}
const templateMetadataStyles = this._normalizeStylesheet(new CompileStylesheetMetadata(
{styles: prenormData.styles, moduleUrl: prenormData.moduleUrl}));
const visitor = new TemplatePreparseVisitor();
html.visitAll(visitor, rootNodesAndErrors.rootNodes);
const templateStyles = this._normalizeStylesheet(new CompileStylesheetMetadata(
{styles: visitor.styles, styleUrls: visitor.styleUrls, moduleUrl: templateAbsUrl}));
const styles = templateMetadataStyles.styles.concat(templateStyles.styles);
const inlineStyleUrls = templateMetadataStyles.styleUrls.concat(templateStyles.styleUrls);
const styleUrls = this
._normalizeStylesheet(new CompileStylesheetMetadata(
{styleUrls: prenormData.styleUrls, moduleUrl: prenormData.moduleUrl}))
.styleUrls;
return {
template,
templateUrl: templateAbsUrl,
isInline,
htmlAst: rootNodesAndErrors,
styles,
inlineStyleUrls,
styleUrls,
ngContentSelectors: visitor.ngContentSelectors,
};
}
private _normalizeTemplateMetadata(
prenormData: PrenormalizedTemplateMetadata,
preparsedTemplate: PreparsedTemplate): SyncAsync<CompileTemplateMetadata> {
return SyncAsync.then(
this._loadMissingExternalStylesheets(
preparsedTemplate.styleUrls.concat(preparsedTemplate.inlineStyleUrls)),
(externalStylesheets) => this._normalizeLoadedTemplateMetadata(
prenormData, preparsedTemplate, externalStylesheets));
}
private _normalizeLoadedTemplateMetadata(
prenormData: PrenormalizedTemplateMetadata, preparsedTemplate: PreparsedTemplate,
stylesheets: Map<string, CompileStylesheetMetadata>): CompileTemplateMetadata {
    // Algorithm:
    // - produce exactly 1 entry per original styleUrl in
    //   CompileTemplateMetadata.externalStylesheets, with all styles inlined
    // - inline all styles that are referenced by the template into
    //   CompileTemplateMetadata.styles.
    // Reason: this makes it possible to determine how many stylesheets there
    // are without loading either the template or the stylesheets, so a stub
    // for TypeScript can always be created synchronously (as resource loading
    // may be async).
const styles = [...preparsedTemplate.styles];
this._inlineStyles(preparsedTemplate.inlineStyleUrls, stylesheets, styles);
const styleUrls = preparsedTemplate.styleUrls;
const externalStylesheets = styleUrls.map(styleUrl => {
const stylesheet = stylesheets.get(styleUrl)!;
const styles = [...stylesheet.styles];
this._inlineStyles(stylesheet.styleUrls, stylesheets, styles);
return new CompileStylesheetMetadata({moduleUrl: styleUrl, styles: styles});
});
let encapsulation = prenormData.encapsulation;
if (encapsulation == null) {
encapsulation = this._config.defaultEncapsulation;
}
if (encapsulation === ViewEncapsulation.Emulated && styles.length === 0 &&
styleUrls.length === 0) {
encapsulation = ViewEncapsulation.None;
}
return new CompileTemplateMetadata({
encapsulation,
template: preparsedTemplate.template,
templateUrl: preparsedTemplate.templateUrl,
htmlAst: preparsedTemplate.htmlAst,
styles,
styleUrls,
ngContentSelectors: preparsedTemplate.ngContentSelectors,
animations: prenormData.animations,
interpolation: prenormData.interpolation,
isInline: preparsedTemplate.isInline,
externalStylesheets,
preserveWhitespaces: preserveWhitespacesDefault(
prenormData.preserveWhitespaces, this._config.preserveWhitespaces),
});
}
private _inlineStyles(
styleUrls: string[], stylesheets: Map<string, CompileStylesheetMetadata>,
targetStyles: string[]) {
styleUrls.forEach(styleUrl => {
const stylesheet = stylesheets.get(styleUrl)!;
stylesheet.styles.forEach(style => targetStyles.push(style));
this._inlineStyles(stylesheet.styleUrls, stylesheets, targetStyles);
});
}
private _loadMissingExternalStylesheets(
styleUrls: string[],
loadedStylesheets:
Map<string, CompileStylesheetMetadata> = new Map<string, CompileStylesheetMetadata>()):
SyncAsync<Map<string, CompileStylesheetMetadata>> {
return SyncAsync.then(
SyncAsync.all(styleUrls.filter((styleUrl) => !loadedStylesheets.has(styleUrl))
.map(
styleUrl => SyncAsync.then(
this._fetch(styleUrl),
(loadedStyle) => {
const stylesheet =
this._normalizeStylesheet(new CompileStylesheetMetadata(
{styles: [loadedStyle], moduleUrl: styleUrl}));
loadedStylesheets.set(styleUrl, stylesheet);
return this._loadMissingExternalStylesheets(
stylesheet.styleUrls, loadedStylesheets);
}))),
(_) => loadedStylesheets);
}
private _normalizeStylesheet(stylesheet: CompileStylesheetMetadata): CompileStylesheetMetadata {
const moduleUrl = stylesheet.moduleUrl!;
const allStyleUrls = stylesheet.styleUrls.filter(isStyleUrlResolvable)
.map(url => this._urlResolver.resolve(moduleUrl, url));
const allStyles = stylesheet.styles.map(style => {
const styleWithImports = extractStyleUrls(this._urlResolver, moduleUrl, style);
allStyleUrls.push(...styleWithImports.styleUrls);
return styleWithImports.style;
});
return new CompileStylesheetMetadata(
{styles: allStyles, styleUrls: allStyleUrls, moduleUrl: moduleUrl});
}
}
interface PreparsedTemplate {
template: string;
templateUrl: string;
isInline: boolean;
htmlAst: HtmlParseTreeResult;
styles: string[];
inlineStyleUrls: string[];
styleUrls: string[];
ngContentSelectors: string[];
}
class TemplatePreparseVisitor implements html.Visitor {
ngContentSelectors: string[] = [];
styles: string[] = [];
styleUrls: string[] = [];
ngNonBindableStackCount: number = 0;
visitElement(ast: html.Element, context: any): any {
const preparsedElement = preparseElement(ast);
switch (preparsedElement.type) {
case PreparsedElementType.NG_CONTENT:
if (this.ngNonBindableStackCount === 0) {
this.ngContentSelectors.push(preparsedElement.selectAttr);
}
break;
case PreparsedElementType.STYLE:
let textContent = '';
ast.children.forEach(child => {
if (child instanceof html.Text) {
textContent += child.value;
}
});
this.styles.push(textContent);
break;
case PreparsedElementType.STYLESHEET:
this.styleUrls.push(preparsedElement.hrefAttr);
break;
default:
break;
}
if (preparsedElement.nonBindable) {
this.ngNonBindableStackCount++;
}
html.visitAll(this, ast.children);
if (preparsedElement.nonBindable) {
this.ngNonBindableStackCount--;
}
return null;
}
visitExpansion(ast: html.Expansion, context: any): any {
html.visitAll(this, ast.cases);
}
visitExpansionCase(ast: html.ExpansionCase, context: any): any {
html.visitAll(this, ast.expression);
}
visitComment(ast: html.Comment, context: any): any {
return null;
}
visitAttribute(ast: html.Attribute, context: any): any {
return null;
}
visitText(ast: html.Text, context: any): any {
return null;
}
}
| matsko/angular | packages/compiler/src/directive_normalizer.ts | TypeScript | mit | 12,863 |
<?php
/* TwigBundle:Exception:error.atom.twig */
class __TwigTemplate_405349459f7f2e8922747537b1c12aa2323bb61b0265aaf549db7e51eafd66f4 extends Twig_Template
{
public function __construct(Twig_Environment $env)
{
parent::__construct($env);
$this->parent = false;
$this->blocks = array(
);
}
protected function doDisplay(array $context, array $blocks = array())
{
// line 1
$this->env->loadTemplate("TwigBundle:Exception:error.xml.twig")->display(array_merge($context, array("exception" => (isset($context["exception"]) ? $context["exception"] : $this->getContext($context, "exception")))));
}
public function getTemplateName()
{
return "TwigBundle:Exception:error.atom.twig";
}
public function isTraitable()
{
return false;
}
public function getDebugInfo()
{
return array ( 19 => 1, 79 => 21, 72 => 13, 69 => 12, 47 => 18, 40 => 11, 37 => 10, 22 => 1, 246 => 32, 157 => 56, 145 => 46, 139 => 45, 131 => 42, 123 => 41, 120 => 40, 115 => 39, 111 => 38, 108 => 37, 101 => 33, 98 => 32, 96 => 31, 83 => 25, 74 => 14, 66 => 11, 55 => 16, 52 => 21, 50 => 14, 43 => 9, 41 => 8, 35 => 9, 32 => 4, 29 => 6, 209 => 82, 203 => 78, 199 => 76, 193 => 73, 189 => 71, 187 => 70, 182 => 68, 176 => 64, 173 => 63, 168 => 62, 164 => 58, 162 => 57, 154 => 54, 149 => 51, 147 => 50, 144 => 49, 141 => 48, 133 => 42, 130 => 41, 125 => 38, 122 => 37, 116 => 36, 112 => 35, 109 => 34, 106 => 36, 103 => 32, 99 => 30, 95 => 28, 92 => 29, 86 => 24, 82 => 22, 80 => 24, 73 => 19, 64 => 19, 60 => 6, 57 => 12, 54 => 22, 51 => 10, 48 => 9, 45 => 17, 42 => 16, 39 => 6, 36 => 5, 33 => 4, 30 => 3,);
}
}
| Mchichou/UEOptionnelles | app/cache/dev/twig/40/53/49459f7f2e8922747537b1c12aa2323bb61b0265aaf549db7e51eafd66f4.php | PHP | mit | 1,789 |
<HTML>
<!-- Created by HTTrack Website Copier/3.49-2 [XR&CO'2014] -->
<!-- Mirrored from thevillagedanang.com/?p=62 by HTTrack Website Copier/3.x [XR&CO'2014], Thu, 02 Nov 2017 14:46:03 GMT -->
<!-- Added by HTTrack --><meta http-equiv="content-type" content="text/html;charset=UTF-8" /><!-- /Added by HTTrack -->
<HEAD>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html;charset=UTF-8"><META HTTP-EQUIV="Refresh" CONTENT="0; URL=about-us/index.html"><TITLE>Page has moved</TITLE>
</HEAD>
<BODY>
<A HREF="about-us/index.html"><h3>Click here...</h3></A>
</BODY>
<!-- Created by HTTrack Website Copier/3.49-2 [XR&CO'2014] -->
<!-- Mirrored from thevillagedanang.com/?p=62 by HTTrack Website Copier/3.x [XR&CO'2014], Thu, 02 Nov 2017 14:46:03 GMT -->
</HTML>
| hoangphuc1494/sourd_codeigniter | assest/thevillage/index5c36.html | HTML | mit | 758 |
/**
* HTTP.test
*/
"use strict";
/* Node modules */
/* Third-party modules */
var steeplejack = require("steeplejack");
/* Files */
describe("HTTPError test", function () {
var HTTPError;
beforeEach(function () {
injector(function (_HTTPError_) {
HTTPError = _HTTPError_;
});
});
describe("Instantation tests", function () {
it("should extend the steeplejack Fatal exception", function () {
var obj = new HTTPError("text");
expect(obj).to.be.instanceof(HTTPError)
.to.be.instanceof(steeplejack.Exceptions.Fatal);
expect(obj.type).to.be.equal("HTTPError");
expect(obj.message).to.be.equal("text");
expect(obj.httpCode).to.be.equal(500);
expect(obj.getHttpCode()).to.be.equal(500);
});
it("should set the HTTP code in the first input", function () {
var obj = new HTTPError(401);
expect(obj.httpCode).to.be.equal(401);
expect(obj.getHttpCode()).to.be.equal(401);
});
});
});
| riggerthegeek/steeplejack-errors | test/unit/errors/HTTP.test.js | JavaScript | mit | 1,103 |
require File.join(File.dirname(__FILE__), './scribd-carrierwave/version')
require File.join(File.dirname(__FILE__), './scribd-carrierwave/config')
require 'carrierwave'
require 'rscribd'
require 'configatron'
module ScribdCarrierWave
class << self
def included(base)
base.extend ClassMethods
end
def upload uploader
file_path = full_path(uploader)
args = { file: file_path, access: ( uploader.class.public? ? 'public' : 'private' )}
type = File.extname(file_path)
if type
type = type.gsub(/^\./, '').gsub(/\?.*$/, '')
args.merge!(type: type) if type != ''
end
scribd_user.upload(args)
end
def destroy uploader
document = scribd_user.find_document(uploader.ipaper_id) rescue nil
document.destroy if !document.nil?
end
def load_ipaper_document(id)
scribd_user.find_document(id) rescue nil
end
def full_path uploader
if uploader.url =~ /^http(s?):\/\//
uploader.url
else
uploader.root + uploader.url
end
end
module ClassMethods
def public?
@public
end
def has_ipaper(public = false)
include InstanceMethods
after :store, :upload_to_scribd
before :remove, :delete_from_scribd
@public = !!public
end
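      # Illustrative usage (uploader class name assumed):
      #
      #   class DocumentUploader < CarrierWave::Uploader::Base
      #     has_ipaper true  # upload to Scribd with public access
      #   end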
end
module InstanceMethods
def self.included(base)
base.extend ClassMethods
end
def upload_to_scribd files
res = ScribdCarrierWave::upload(self)
set_params res
end
def delete_from_scribd
ScribdCarrierWave::destroy(self)
end
def display_ipaper(options = {})
id = options.delete(:id)
<<-END
<script type="text/javascript" src="//www.scribd.com/javascripts/scribd_api.js"></script>
<div id="embedded_doc#{id}">#{options.delete(:alt)}</div>
<script type="text/javascript">
var scribd_doc = scribd.Document.getDoc(#{ipaper_id}, '#{ipaper_access_key}');
scribd_doc.addParam( 'jsapi_version', 2 );
#{options.map do |k,v|
" scribd_doc.addParam('#{k.to_s}', #{v.is_a?(String) ? "'#{v.to_s}'" : v.to_s});"
end.join("\n")}
scribd_doc.write("embedded_doc#{id}");
</script>
END
end
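      # Illustrative call from a view (model and mount names assumed):
      #   @document.file.display_ipaper(id: @document.id, alt: "Loading...", height: 500)
      # Options other than :id and :alt are passed through as scribd_doc.addParam entries.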
def fullscreen_url
"http://www.scribd.com/fullscreen/#{ipaper_id}?access_key=#{ipaper_access_key}"
end
def ipaper_id
self.model.send("#{self.mounted_as.to_s}_ipaper_id")
end
def ipaper_access_key
self.model.send("#{self.mounted_as.to_s}_ipaper_access_key")
end
      # Returns the Scribd::Document associated with this model, or nil if it does not exist.
def ipaper_document
@document ||= ScribdCarrierWave::load_ipaper_document(ipaper_id)
end
private
def set_params res
self.model.update_attributes({"#{self.mounted_as}_ipaper_id" => res.doc_id,
"#{self.mounted_as}_ipaper_access_key" => res.access_key})
end
end
private
def scribd_user
Scribd::API.instance.key = ScribdCarrierWave.config.key
Scribd::API.instance.secret = ScribdCarrierWave.config.secret
@scribd_user = Scribd::User.login(ScribdCarrierWave.config.username, ScribdCarrierWave.config.password)
end
end
end
CarrierWave::Uploader::Base.send(:include, ScribdCarrierWave) if Object.const_defined?("CarrierWave")
| milkfarm/scribd-carrierwave | lib/scribd-carrierwave.rb | Ruby | mit | 3,507 |
var gulp = require('gulp');
var babel = require('gulp-babel');
var concat = require('gulp-concat');
var merge = require('merge-stream');
var stylus = require('gulp-stylus');
var rename = require("gulp-rename");
var uglify = require("gulp-uglify");
var cssmin = require("gulp-cssmin");
var ngAnnotate = require('gulp-ng-annotate');
var nib = require("nib");
var watch = require('gulp-watch');
function compileJs(devOnly) {
var othersUmd = gulp.src(['src/**/*.js', '!src/main.js'])
.pipe(babel({
modules: 'umdStrict',
moduleRoot: 'angular-chatbar',
moduleIds: true
})),
mainUmd = gulp.src('src/main.js')
.pipe(babel({
modules: 'umdStrict',
moduleIds: true,
moduleId: 'angular-chatbar'
})),
stream = merge(othersUmd, mainUmd)
.pipe(concat('angular-chatbar.umd.js'))
.pipe(gulp.dest('dist'))
;
if (!devOnly) {
stream = stream
.pipe(ngAnnotate())
.pipe(uglify())
.pipe(rename('angular-chatbar.umd.min.js'))
.pipe(gulp.dest('dist'));
}
return stream;
}
function compileCss(name, devOnly) {
var stream = gulp.src('styles/' + name + '.styl')
.pipe(stylus({use: nib()}))
.pipe(rename('angular-' + name + '.css'))
.pipe(gulp.dest('dist'))
;
if (!devOnly) {
stream = stream.pipe(cssmin())
.pipe(rename('angular-' + name + '.min.css'))
.pipe(gulp.dest('dist'));
}
return stream;
}
function compileAllCss(devOnly) {
var streams = [];
['chatbar', 'chatbar.default-theme', 'chatbar.default-animations'].forEach(function (name) {
streams.push(compileCss(name, devOnly));
});
return merge.apply(null, streams);
}
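// Task overview (sketch): `gulp` (the default task) builds the UMD bundle and
// all CSS themes into dist/, including minified variants; `gulp watch` runs a
// full build first, then rebuilds unminified dev versions as sources change.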
gulp.task('default', function() {
	return merge(compileJs(), compileAllCss());
});
gulp.task('_watch', function() {
watch('styles/**/*.styl', function () {
compileAllCss(true);
});
watch('src/**/*.js', function () {
compileJs(true);
});
});
gulp.task('watch', ['default', '_watch']);
| jlowcs/angular-chatbar | gulpfile.js | JavaScript | mit | 1,878 |
# -*- coding: utf-8 -*-
""" Resource Import Tools
@copyright: 2011-12 (c) Sahana Software Foundation
@license: MIT
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
"""
# @todo: remove all interactive error reporting out of the _private methods, and raise exceptions instead.
__all__ = ["S3Importer", "S3ImportJob", "S3ImportItem"]
import os
import sys
import cPickle
import tempfile
from datetime import datetime
from copy import deepcopy
try:
    from cStringIO import StringIO # Faster, where available
except ImportError:
    from StringIO import StringIO
try:
from lxml import etree
except ImportError:
print >> sys.stderr, "ERROR: lxml module needed for XML handling"
raise
try:
import json # try stdlib (Python 2.6)
except ImportError:
try:
import simplejson as json # try external module
    except ImportError:
        import gluon.contrib.simplejson as json # fallback to pure-Python module
from gluon import *
from gluon.serializers import json as jsons
from gluon.storage import Storage, Messages
from gluon.tools import callback
from s3utils import SQLTABLES3
from s3crud import S3CRUD
from s3xml import S3XML
from s3utils import s3_mark_required, s3_has_foreign_key, s3_get_foreign_key
DEBUG = False
if DEBUG:
print >> sys.stderr, "S3IMPORTER: DEBUG MODE"
def _debug(m):
print >> sys.stderr, m
else:
_debug = lambda m: None
# =============================================================================
class S3Importer(S3CRUD):
"""
Transformable formats (XML, JSON, CSV) import handler
"""
UPLOAD_TABLE_NAME = "s3_import_upload"
# -------------------------------------------------------------------------
def apply_method(self, r, **attr):
"""
Apply CRUD methods
@param r: the S3Request
@param attr: dictionary of parameters for the method handler
@returns: output object to send to the view
Known means of communicating with this module:
It expects a URL of the form: /prefix/name/import
It will interpret the http requests as follows:
GET will trigger the upload
POST will trigger either commits or display the import details
DELETE will trigger deletes
It will accept one of the following control vars:
item: to specify a single item in the import job
job: to specify a job
            It should not receive both; if it does, job takes precedence over item
For CSV imports, the calling controller can add extra fields
to the upload form to add columns to each row in the CSV. To add
the extra fields, pass a named parameter "csv_extra_fields" to the
s3_rest_controller call (or the S3Request call, respectively):
s3_rest_controller(module, resourcename,
csv_extra_fields=[
dict(label="ColumnLabelInTheCSV",
field=field_instance)
])
The Field instance "field" will be added to the upload form, and
the user input will be added to each row of the CSV under the
label as specified. If the "field" validator has options, the
input value will be translated into the option representation,
otherwise the value will be used as-is.
Note that the "label" in the dict is the column label in the CSV,
whereas the field label for the form is to be set in the Field
instance passed as "field".
You can add any arbitrary number of csv_extra_fields to the list.
Additionally, you may want to allow the user to choose whether
the import shall first remove all existing data in the target
table. To do so, pass a label for the "replace_option" to the
request:
s3_rest_controller(module, resourcename,
replace_option=T("Remove existing data before import"))
This will add the respective checkbox to the upload form.
You may also want to provide a link to download a CSV template from
the upload form. To do that, add the resource name to the request
attributes:
s3_rest_controller(module, resourcename,
csv_template="<resourcename>")
This will provide a link to:
- static/formats/s3csv/<controller>/<resourcename>.csv
at the top of the upload form.
"""
_debug("S3Importer.apply_method(%s)" % r)
# Messages
T = current.T
messages = self.messages = Messages(T)
messages.download_template = "Download Template"
messages.invalid_file_format = "Invalid File Format"
messages.unsupported_file_type = "Unsupported file type of %s"
messages.stylesheet_not_found = "No Stylesheet %s could be found to manage the import file."
messages.no_file = "No file submitted"
messages.file_open_error = "Unable to open the file %s"
messages.file_not_found = "The file to upload is missing"
messages.no_records_to_import = "No records to import"
messages.no_job_to_delete = "No job to delete, maybe it has already been deleted."
messages.title_job_read = "Details of the selected import job"
messages.title_job_list = "List of import items"
messages.file_uploaded = "Import file uploaded"
messages.upload_submit_btn = "Upload Data File"
messages.open_btn = "Open"
messages.view_btn = "View"
messages.delete_btn = "Delete"
messages.item_show_details = "Display Details"
messages.job_total_records = "Total records in the Import Job"
messages.job_records_selected = "Records selected"
messages.job_deleted = "Import job deleted"
messages.job_completed = "Job run on %s. With result of (%s)"
messages.import_file = "Import File"
messages.import_file_comment = "Upload a file formatted according to the Template."
messages.user_name = "User Name"
messages.commit_total_records_imported = "%s records imported"
messages.commit_total_records_ignored = "%s records ignored"
messages.commit_total_errors = "%s records in error"
        try:
            self.uploadTitle = current.response.s3.crud_strings[self.tablename].title_upload
        except:
            self.uploadTitle = T("Upload a %s import file") % r.function
# @todo: correct to switch this off for the whole session?
current.session.s3.ocr_enabled = False
# Reset all errors/warnings
self.error = None
self.warning = None
# CSV upload configuration
if "csv_stylesheet" in attr:
self.csv_stylesheet = attr["csv_stylesheet"]
else:
self.csv_stylesheet = None
self.csv_extra_fields = None
self.csv_extra_data = None
# Environment
self.controller = r.controller
self.function = r.function
# Target table for the data import
self.controller_resource = self.resource
self.controller_table = self.table
self.controller_tablename = self.tablename
# Table for uploads
self.__define_table()
self.upload_resource = None
self.item_resource = None
# XSLT Path
self.xslt_path = os.path.join(r.folder, r.XSLT_PATH)
self.xslt_extension = r.XSLT_EXTENSION
# Check authorization
authorised = self.permit("create", self.upload_tablename) and \
self.permit("create", self.controller_tablename)
if not authorised:
if r.method is not None:
r.unauthorised()
else:
return dict(form=None)
# @todo: clean this up
source = None
transform = None
upload_id = None
items = None
# @todo get the data from either get_vars or post_vars appropriately
# for post -> commit_items would need to add the uploadID
if "transform" in r.get_vars:
transform = r.get_vars["transform"]
if "filename" in r.get_vars:
source = r.get_vars["filename"]
if "job" in r.post_vars:
upload_id = r.post_vars["job"]
elif "job" in r.get_vars:
upload_id = r.get_vars["job"]
items = self._process_item_list(upload_id, r.vars)
if "delete" in r.get_vars:
r.http = "DELETE"
# If we have an upload ID, then get upload and import job
self.upload_id = upload_id
query = (self.upload_table.id == upload_id)
self.upload_job = current.db(query).select(limitby=(0, 1)).first()
if self.upload_job:
self.job_id = self.upload_job.job_id
else:
self.job_id = None
# Now branch off to the appropriate controller function
if r.http == "GET":
if source != None:
self.commit(source, transform)
output = self.upload(r, **attr)
if upload_id != None:
output = self.display_job(upload_id)
else:
output = self.upload(r, **attr)
elif r.http == "POST":
if items != None:
output = self.commit_items(upload_id, items)
else:
output = self.generate_job(r, **attr)
elif r.http == "DELETE":
if upload_id != None:
output = self.delete_job(upload_id)
else:
r.error(405, current.manager.ERROR.BAD_METHOD)
return output
# -------------------------------------------------------------------------
def upload(self, r, **attr):
"""
            Display the upload form
            It will ask for a file to be uploaded, or for a job to be
            selected. If a file is uploaded, the file type is guessed
            and the user is asked for the transform file to be used.
            The transform files are shown in a dataTable, with the
            module-specific files first, followed by all other known
            transform files. Once the transform file is selected, the
            import process can be started, which generates an import
            job (a "POST" request follows).
            If a job is selected, there are two actions: open and
            delete. Open triggers a "GET" request with the job details
            passed in, whilst the delete action triggers a "DELETE"
            request.
"""
_debug("S3Importer.upload()")
request = self.request
form = self._upload_form(r, **attr)
output = self._create_upload_dataTable()
if request.representation == "aadata":
return output
output.update(form=form, title=self.uploadTitle)
return output
# -------------------------------------------------------------------------
def generate_job(self, r, **attr):
"""
Generate an ImportJob from the submitted upload form
"""
_debug("S3Importer.display()")
response = current.response
s3 = response.s3
db = current.db
table = self.upload_table
        title = self.uploadTitle
form = self._upload_form(r, **attr)
r = self.request
r.read_body()
sfilename = form.vars.file
try:
ofilename = r.post_vars["file"].filename
except:
form.errors.file = self.messages.no_file
if form.errors:
response.flash = ""
output = self._create_upload_dataTable()
output.update(form=form, title=title)
elif not sfilename or \
ofilename not in r.files or r.files[ofilename] is None:
response.flash = ""
response.error = self.messages.file_not_found
output = self._create_upload_dataTable()
output.update(form=form, title=title)
else:
output = dict()
query = (table.file == sfilename)
db(query).update(controller=self.controller,
function=self.function,
filename=ofilename,
user_id=current.session.auth.user.id)
# must commit here to separate this transaction from
# the trial import phase which will be rolled back.
db.commit()
extension = ofilename.rsplit(".", 1).pop()
if extension not in ("csv", "xls"):
response.flash = None
response.error = self.messages.invalid_file_format
return self.upload(r, **attr)
upload_file = r.files[ofilename]
if extension == "xls":
if "xls_parser" in s3:
upload_file.seek(0)
upload_file = s3.xls_parser(upload_file.read())
extension = "csv"
if upload_file is None:
response.flash = None
response.error = self.messages.file_not_found
return self.upload(r, **attr)
else:
upload_file.seek(0)
row = db(query).select(table.id, limitby=(0, 1)).first()
upload_id = row.id
if "single_pass" in r.vars:
single_pass = r.vars["single_pass"]
else:
single_pass = None
self._generate_import_job(upload_id,
upload_file,
extension,
commit_job = single_pass)
if upload_id is None:
row = db(query).update(status = 2) # in error
if self.error != None:
response.error = self.error
if self.warning != None:
response.warning = self.warning
response.flash = ""
return self.upload(r, **attr)
else:
if single_pass:
current.session.flash = self.messages.file_uploaded
# For a single pass retain the vars from the original URL
next_URL = URL(r=self.request,
f=self.function,
args=["import"],
vars=current.request.get_vars
)
redirect(next_URL)
s3.dataTable_vars = {"job" : upload_id}
return self.display_job(upload_id)
return output
# -------------------------------------------------------------------------
def display_job(self, upload_id):
"""
            Display the items of an import job for review and
            selection before committing
            @param upload_id: the upload ID
"""
_debug("S3Importer.display_job()")
request = self.request
response = current.response
db = current.db
table = self.upload_table
job_id = self.job_id
output = dict()
if job_id == None:
# redirect to the start page (removes all vars)
query = (table.id == upload_id)
row = db(query).update(status = 2) # in error
current.session.warning = self.messages.no_records_to_import
redirect(URL(r=request, f=self.function, args=["import"]))
# Get the status of the upload job
query = (table.id == upload_id)
row = db(query).select(table.status,
table.modified_on,
table.summary_added,
table.summary_error,
table.summary_ignored,
limitby=(0, 1)).first()
status = row.status
# completed display details
if status == 3: # Completed
# @todo currently this is an unnecessary server call,
# change for completed records to be a display details
# and thus avoid the round trip.
# but keep this code to protect against hand-crafted URLs
# (and the 'go back' syndrome on the browser)
result = (row.summary_added,
row.summary_error,
row.summary_ignored,
)
self._display_completed_job(result, row.modified_on)
redirect(URL(r=request, f=self.function, args=["import"]))
# otherwise display import items
response.view = self._view(request, "list.html")
output = self._create_import_item_dataTable(upload_id, job_id)
if request.representation == "aadata":
return output
if response.s3.error_report:
error_report = "Errors|" + "|".join(response.s3.error_report)
error_tip = A("All Errors",
_class="errortip",
_title=error_report)
else:
# @todo: restore the error tree from all items?
error_tip = ""
rowcount = len(self._get_all_items(upload_id))
rheader = DIV(TABLE(
TR(
TH("%s: " % self.messages.job_total_records),
TD(rowcount, _id="totalAvaliable"),
TH("%s: " % self.messages.job_records_selected),
TD(0, _id="totalSelected"),
TH(error_tip)
),
))
output["title"] = self.messages.title_job_read
output["rheader"] = rheader
output["subtitle"] = self.messages.title_job_list
return output
# -------------------------------------------------------------------------
def commit(self, source, transform):
"""
            Import a source file directly, without user interaction:
            create the upload record, generate the import job from
            the file, and commit all of its items
            @param source: the path to the source file
            @param transform: the path to the transformation stylesheet
"""
_debug("S3Importer.commit(%s, %s)" % (source, transform))
db = current.db
session = current.session
request = self.request
try:
openFile = open(source, "r")
except:
session.error = self.messages.file_open_error % source
redirect(URL(r=request, f=self.function))
# @todo: manage different file formats
# @todo: find file format from request.extension
fileFormat = "csv"
# insert data in the table and get the ID
try:
user = session.auth.user.id
except:
user = None
upload_id = self.upload_table.insert(controller=self.controller,
function=self.function,
filename = source,
user_id = user,
status = 1)
db.commit()
# create the import job
result = self._generate_import_job(upload_id,
openFile,
fileFormat,
stylesheet=transform
)
if result == None:
if self.error != None:
if session.error == None:
session.error = self.error
else:
session.error += self.error
if self.warning != None:
if session.warning == None:
session.warning = self.warning
else:
session.warning += self.warning
else:
items = self._get_all_items(upload_id, True)
# commit the import job
self._commit_import_job(upload_id, items)
result = self._update_upload_job(upload_id)
# get the results and display
msg = "%s : %s %s %s" % (source,
self.messages.commit_total_records_imported,
self.messages.commit_total_errors,
self.messages.commit_total_records_ignored)
msg = msg % result
if session.flash == None:
session.flash = msg
else:
session.flash += msg
# @todo: return the upload_id?
# -------------------------------------------------------------------------
def commit_items(self, upload_id, items):
"""
            Commit the selected items of an import job, then display
            the job results and return to the upload form
            @param upload_id: the upload ID
            @param items: list of the item record IDs to commit
"""
_debug("S3Importer.commit_items(%s, %s)" % (upload_id, items))
# Save the import items
self._commit_import_job(upload_id, items)
# Update the upload table
# change the status to completed
# record the summary details
# delete the upload file
result = self._update_upload_job(upload_id)
# redirect to the start page (removes all vars)
self._display_completed_job(result)
redirect(URL(r=self.request, f=self.function, args=["import"]))
# -------------------------------------------------------------------------
def delete_job(self, upload_id):
"""
Delete an uploaded file and the corresponding import job
@param upload_id: the upload ID
"""
_debug("S3Importer.delete_job(%s)" % (upload_id))
db = current.db
request = self.request
resource = request.resource # use self.resource?
response = current.response
# Get the import job ID
job_id = self.job_id
# Delete the import job (if any)
if job_id:
result = resource.import_xml(None,
id = None,
tree = None,
job_id = job_id,
delete_job = True)
# @todo: check result
# now delete the upload entry
query = (self.upload_table.id == upload_id)
count = db(query).delete()
# @todo: check that the record has been deleted
# Now commit the changes
db.commit()
result = count
# return to the main import screen
# @todo: check result properly
if result == False:
response.warning = self.messages.no_job_to_delete
else:
response.flash = self.messages.job_deleted
# redirect to the start page (remove all vars)
self.next = self.request.url(vars=dict())
return
# ========================================================================
# Utility methods
# ========================================================================
def _upload_form(self, r, **attr):
"""
Create and process the upload form, including csv_extra_fields
"""
EXTRA_FIELDS = "csv_extra_fields"
TEMPLATE = "csv_template"
REPLACE_OPTION = "replace_option"
session = current.session
response = current.response
s3 = response.s3
request = self.request
table = self.upload_table
formstyle = s3.crud.formstyle
response.view = self._view(request, "list_create.html")
if REPLACE_OPTION in attr:
replace_option = attr[REPLACE_OPTION]
if replace_option is not None:
table.replace_option.readable = True
table.replace_option.writable = True
table.replace_option.label = replace_option
        fields = [f for f in table if f.readable or (f.writable and not f.compute)]
if EXTRA_FIELDS in attr:
extra_fields = attr[EXTRA_FIELDS]
if extra_fields is not None:
fields.extend([f["field"] for f in extra_fields if "field" in f])
self.csv_extra_fields = extra_fields
labels, required = s3_mark_required(fields)
if required:
s3.has_required = True
form = SQLFORM.factory(table_name=self.UPLOAD_TABLE_NAME,
labels=labels,
formstyle=formstyle,
upload = os.path.join(request.folder, "uploads", "imports"),
separator = "",
message=self.messages.file_uploaded,
*fields)
args = ["s3csv"]
template = attr.get(TEMPLATE, True)
if template is True:
args.extend([self.controller, "%s.csv" % self.function])
elif isinstance(template, basestring):
args.extend([self.controller, "%s.csv" % template])
elif isinstance(template, (tuple, list)):
args.extend(template[:-1])
args.append("%s.csv" % template[-1])
else:
template = None
if template is not None:
url = URL(r=request, c="static", f="formats", args=args)
try:
# only add the download link if the template can be opened
open("%s/../%s" % (r.folder, url))
form[0][0].insert(0, TR(TD(A(self.messages.download_template,
_href=url)),
_id="template__row"))
except:
pass
if form.accepts(r.post_vars, session,
formname="upload_form"):
upload_id = table.insert(**table._filter_fields(form.vars))
if self.csv_extra_fields:
self.csv_extra_data = Storage()
for f in self.csv_extra_fields:
label = f.get("label", None)
if not label:
continue
field = f.get("field", None)
value = f.get("value", None)
if field:
if field.name in form.vars:
data = form.vars[field.name]
else:
data = field.default
value = data
requires = field.requires
if not isinstance(requires, (list, tuple)):
requires = [requires]
if requires:
requires = requires[0]
if isinstance(requires, IS_EMPTY_OR):
requires = requires.other
try:
options = requires.options()
except:
pass
else:
for k, v in options:
if k == str(data):
value = v
elif value is None:
continue
self.csv_extra_data[label] = value
s3.no_formats = True
return form
# -------------------------------------------------------------------------
def _create_upload_dataTable(self):
"""
List of previous Import jobs
"""
db = current.db
request = self.request
controller = self.controller
function = self.function
s3 = current.response.s3
table = self.upload_table
s3.filter = (table.controller == controller) & \
(table.function == function)
fields = ["id",
"filename",
"created_on",
"user_id",
"replace_option",
"status"]
self._use_upload_table()
# Hide the list of prior uploads for now
#output = self._dataTable(fields, sort_by = [[2,"desc"]])
output = dict()
self._use_controller_table()
if request.representation == "aadata":
return output
query = (table.status != 3) # Status of Pending or in-Error
rows = db(query).select(table.id)
restrictOpen = [str(row.id) for row in rows]
query = (table.status == 3) # Status of Completed
rows = db(query).select(table.id)
restrictView = [str(row.id) for row in rows]
s3.actions = [
dict(label=str(self.messages.open_btn),
_class="action-btn",
url=URL(r=request,
c=controller,
f=function,
args=["import"],
vars={"job":"[id]"}),
restrict = restrictOpen
),
dict(label=str(self.messages.view_btn),
_class="action-btn",
url=URL(r=request,
c=controller,
f=function,
args=["import"],
vars={"job":"[id]"}),
restrict = restrictView
),
dict(label=str(self.messages.delete_btn),
_class="delete-btn",
url=URL(r=request,
c=controller,
f=function,
args=["import"],
vars={"job":"[id]",
"delete":"True"
}
)
),
]
# Display an Error if no job is attached with this record
query = (table.status == 1) # Pending
rows = db(query).select(table.id)
s3.dataTableStyleAlert = [str(row.id) for row in rows]
query = (table.status == 2) # in error
rows = db(query).select(table.id)
s3.dataTableStyleWarning = [str(row.id) for row in rows]
return output
# -------------------------------------------------------------------------
def _create_import_item_dataTable(self, upload_id, job_id):
"""
            Create a dataTable of the items in an import job, with
            items that validated without error pre-selected
            @param upload_id: the upload ID
            @param job_id: the import job ID
"""
s3 = current.response.s3
represent = {"element" : self._item_element_represent}
self._use_import_item_table(job_id)
# Add a filter to the dataTable query
s3.filter = (self.table.job_id == job_id) & \
(self.table.tablename == self.controller_tablename)
# Get a list of the records that have an error of None
query = (self.table.job_id == job_id) & \
(self.table.tablename == self.controller_tablename)
rows = current.db(query).select(self.table.id, self.table.error)
select_list = []
error_list = []
for row in rows:
if row.error:
error_list.append(str(row.id))
else:
select_list.append("%s" % row.id)
select_id = ",".join(select_list)
output = self._dataTable(["id", "element", "error"],
sort_by = [[1, "asc"]],
represent=represent)
self._use_controller_table()
if self.request.representation == "aadata":
return output
# Highlight rows in error in red
s3.dataTableStyleWarning = error_list
s3.dataTableSelectable = True
s3.dataTablePostMethod = True
table = output["items"]
job = INPUT(_type="hidden", _id="importUploadID", _name="job",
_value="%s" % upload_id)
mode = INPUT(_type="hidden", _id="importMode", _name="mode",
_value="Inclusive")
# only select the rows with no errors
selected = INPUT(_type="hidden", _id="importSelected",
_name="selected", _value="[%s]" % select_id)
form = FORM(table, job, mode, selected)
output["items"] = form
s3.dataTableSelectSubmitURL = "import?job=%s&" % upload_id
s3.actions = [
dict(label= str(self.messages.item_show_details),
_class="action-btn",
_jqclick="$('.importItem.'+id).toggle();",
),
]
return output
# -------------------------------------------------------------------------
def _generate_import_job(self,
upload_id,
openFile,
fileFormat,
stylesheet=None,
commit_job=False):
"""
            This will take an s3_import_upload record and
            generate the import job
            @param upload_id: the upload ID (s3_import_upload record ID)
            @param openFile: the uploaded source file (file-like object)
            @param fileFormat: the format (extension) of the source file
            @param stylesheet: path to the transformation stylesheet
            @param commit_job: commit the job immediately (single-pass
                               import)
"""
_debug("S3Importer._generate_import_job(%s, %s, %s, %s)" % (upload_id,
openFile,
fileFormat,
stylesheet
)
)
db = current.db
request = self.request
resource = request.resource
# ---------------------------------------------------------------------
# CSV
if fileFormat == "csv" or fileFormat == "comma-separated-values":
fmt = "csv"
src = openFile
# ---------------------------------------------------------------------
# XML
# @todo: implement
#elif fileFormat == "xml":
# ---------------------------------------------------------------------
# S3JSON
# @todo: implement
#elif fileFormat == "s3json":
# ---------------------------------------------------------------------
# PDF
# @todo: implement
#elif fileFormat == "pdf":
# ---------------------------------------------------------------------
# Unsupported Format
else:
msg = self.messages.unsupported_file_type % fileFormat
self.error = msg
_debug(msg)
return None
# Get the stylesheet
if stylesheet == None:
stylesheet = self._get_stylesheet()
if stylesheet == None:
return None
# before calling import tree ensure the db.table is the controller_table
self.table = self.controller_table
self.tablename = self.controller_tablename
# Pass stylesheet arguments
args = Storage()
mode = request.get_vars.get("xsltmode", None)
if mode is not None:
args.update(mode=mode)
# Generate the import job
resource.import_xml(src,
format=fmt,
extra_data=self.csv_extra_data,
stylesheet=stylesheet,
ignore_errors = True,
commit_job = commit_job,
**args)
job = resource.job
if job is None:
if resource.error:
# Error
self.error = resource.error
return None
else:
# Nothing to import
self.warning = self.messages.no_records_to_import
return None
else:
# Job created
job_id = job.job_id
errors = current.xml.collect_errors(job)
if errors:
current.response.s3.error_report = errors
query = (self.upload_table.id == upload_id)
result = db(query).update(job_id=job_id)
# @todo: add check that result == 1, if not we are in error
# Now commit the changes
db.commit()
self.job_id = job_id
return True
# -------------------------------------------------------------------------
def _get_stylesheet(self, file_format="csv"):
"""
Get the stylesheet for transformation of the import
@param file_format: the import source file format
"""
if file_format == "csv":
xslt_path = os.path.join(self.xslt_path, "s3csv")
else:
xslt_path = os.path.join(self.xslt_path, file_format, "import.xsl")
return xslt_path
# Use the "csv_stylesheet" parameter to override the CSV stylesheet subpath
# and filename, e.g.
# s3_rest_controller(module, resourcename,
# csv_stylesheet=("inv", "inv_item.xsl"))
if self.csv_stylesheet:
if isinstance(self.csv_stylesheet, (tuple, list)):
stylesheet = os.path.join(xslt_path,
*self.csv_stylesheet)
else:
stylesheet = os.path.join(xslt_path,
self.controller,
self.csv_stylesheet)
else:
xslt_filename = "%s.%s" % (self.function, self.xslt_extension)
stylesheet = os.path.join(xslt_path,
self.controller,
xslt_filename)
        if not os.path.exists(stylesheet):
msg = self.messages.stylesheet_not_found % stylesheet
self.error = msg
_debug(msg)
return None
return stylesheet
# -------------------------------------------------------------------------
def _commit_import_job(self, upload_id, items):
"""
            This will save all of the selected import items
            @param upload_id: the upload ID
            @param items: list of the item record IDs to commit
"""
_debug("S3Importer._commit_import_job(%s, %s)" % (upload_id, items))
db = current.db
resource = self.request.resource
# Load the items from the s3_import_item table
self.importDetails = dict()
table = self.upload_table
query = (table.id == upload_id)
row = db(query).select(table.job_id,
table.replace_option,
limitby=(0, 1)).first()
if row is None:
return False
else:
job_id = row.job_id
current.response.s3.import_replace = row.replace_option
itemTable = S3ImportJob.define_item_table()
if itemTable != None:
#****************************************************************
# EXPERIMENTAL
# This doesn't delete related items
# but import_tree will tidy it up later
#****************************************************************
# get all the items selected for import
rows = self._get_all_items(upload_id, as_string=True)
# loop through each row and delete the items not required
self._store_import_details(job_id, "preDelete")
for id in rows:
if str(id) not in items:
# @todo: replace with a helper method from the API
_debug("Deleting item.id = %s" % id)
query = (itemTable.id == id)
db(query).delete()
#****************************************************************
# EXPERIMENTAL
#****************************************************************
# set up the table we will import data into
self.table = self.controller_table
self.tablename = self.controller_tablename
self._store_import_details(job_id, "preImportTree")
# Now commit the remaining items
msg = resource.import_xml(None,
job_id = job_id,
ignore_errors = True)
return resource.error is None
# -------------------------------------------------------------------------
def _store_import_details(self, job_id, key):
"""
This will store the details from an importJob
            @param job_id: the import job ID
            @param key: key under which to store the details, e.g.
                        "preDelete" or "preImportTree"
"""
_debug("S3Importer._store_import_details(%s, %s)" % (job_id, key))
itemTable = S3ImportJob.define_item_table()
query = (itemTable.job_id == job_id) & \
(itemTable.tablename == self.controller_tablename)
rows = current.db(query).select(itemTable.data, itemTable.error)
items = [dict(data=row.data, error=row.error) for row in rows]
self.importDetails[key] = items
# -------------------------------------------------------------------------
def _update_upload_job(self, upload_id):
"""
This will record the results from the import, and change the
status of the upload job
            @param upload_id: the upload ID
@todo: report errors in referenced records, too
"""
_debug("S3Importer._update_upload_job(%s)" % (upload_id))
request = self.request
resource = request.resource
db = current.db
totalPreDelete = len(self.importDetails["preDelete"])
totalPreImport = len(self.importDetails["preImportTree"])
totalIgnored = totalPreDelete - totalPreImport
if resource.error_tree is None:
totalErrors = 0
else:
totalErrors = len(resource.error_tree.findall(
"resource[@name='%s']" % resource.tablename))
totalRecords = totalPreImport - totalErrors
if totalRecords < 0:
totalRecords = 0
query = (self.upload_table.id == upload_id)
result = db(query).update(summary_added=totalRecords,
summary_error=totalErrors,
summary_ignored = totalIgnored,
status = 3)
# Now commit the changes
db.commit()
return (totalRecords, totalErrors, totalIgnored)
# -------------------------------------------------------------------------
def _display_completed_job(self, totals, timestmp=None):
"""
Generate a summary flash message for a completed import job
@param totals: the job totals as tuple
(total imported, total errors, total ignored)
@param timestmp: the timestamp of the completion
"""
session = current.session
msg = "%s - %s - %s" % \
(self.messages.commit_total_records_imported,
self.messages.commit_total_errors,
self.messages.commit_total_records_ignored)
msg = msg % totals
if timestmp != None:
session.flash = self.messages.job_completed % \
(self.date_represent(timestmp), msg)
        elif totals[1] != 0:
            session.error = msg
        elif totals[2] != 0:
            session.warning = msg
else:
session.flash = msg
# -------------------------------------------------------------------------
def _dataTable(self,
list_fields = [],
sort_by = [[1, "asc"]],
represent={},
):
"""
Method to get the data for the dataTable
This can be either a raw html representation or
and ajax call update
Additional data will be cached to limit calls back to the server
@param list_fields: list of field names
@param sort_by: list of sort by columns
@param represent: a dict of field callback functions used
to change how the data will be displayed
@return: a dict()
In html representations this will be a table of the data
plus the sortby instructions
In ajax this will be a json response
In addition the following values will be made available:
totalRecords Number of records in the filtered data set
totalDisplayRecords Number of records to display
start Start point in the ordered data set
limit Number of records in the ordered set
NOTE: limit - totalDisplayRecords = total cached
"""
# ********************************************************************
# Common tasks
# ********************************************************************
db = current.db
session = current.session
request = self.request
response = current.response
resource = self.resource
s3 = response.s3
representation = request.representation
table = self.table
tablename = self.tablename
vars = request.get_vars
output = dict()
# Check permission to read this table
authorised = self.permit("read", tablename)
if not authorised:
request.unauthorised()
# List of fields to select from
# fields is a list of Field objects
# list_field is a string list of field names
if list_fields == []:
fields = resource.readable_fields()
else:
fields = [table[f] for f in list_fields if f in table.fields]
if not fields:
fields = []
# attach any represent callbacks
for f in fields:
if f.name in represent:
f.represent = represent[f.name]
# Make sure that we have the table id as the first column
if fields[0].name != table.fields[0]:
fields.insert(0, table[table.fields[0]])
list_fields = [f.name for f in fields]
# Filter
if s3.filter is not None:
self.resource.add_filter(s3.filter)
# ********************************************************************
# ajax call
# ********************************************************************
if representation == "aadata":
start = vars.get("iDisplayStart", None)
limit = vars.get("iDisplayLength", None)
if limit is not None:
try:
start = int(start)
limit = int(limit)
except ValueError:
start = None
limit = None # use default
else:
start = None # use default
            # Using the sort variables sent from dataTables
            orderby = None
            if vars.iSortingCols:
                orderby = self.ssp_orderby(resource, list_fields)
# Echo
sEcho = int(vars.sEcho or 0)
# Get the list
items = resource.sqltable(fields=list_fields,
start=start,
limit=limit,
orderby=orderby,
download_url=self.download_url,
as_page=True) or []
            # Ugly hack to replace any occurrence of [id] with the true id
            # Needed because the represent doesn't know the id
for i in range(len(items)):
id = items[i][0]
for j in range(len(items[i])):
new = items[i][j].replace("[id]",id)
items[i][j] = new
totalrows = self.resource.count()
result = dict(sEcho = sEcho,
iTotalRecords = totalrows,
iTotalDisplayRecords = totalrows,
aaData = items)
output = jsons(result)
# ********************************************************************
# html 'initial' call
# ********************************************************************
else: # catch all
start = 0
limit = 1
# Sort by
vars["iSortingCols"] = len(sort_by)
# generate the dataTables.js variables for sorting
            for index, col in enumerate(sort_by):
colName = "iSortCol_%s" % str(index)
colValue = col[0]
dirnName = "sSortDir_%s" % str(index)
if len(col) > 1:
dirnValue = col[1]
else:
dirnValue = "asc"
vars[colName] = colValue
vars[dirnName] = dirnValue
# Now using these sort variables generate the order by statement
orderby = self.ssp_orderby(resource, list_fields)
del vars["iSortingCols"]
            for index, col in enumerate(sort_by):
del vars["iSortCol_%s" % str(index)]
del vars["sSortDir_%s" % str(index)]
# Get the first row for a quick up load
items = resource.sqltable(fields=list_fields,
start=start,
limit=1,
orderby=orderby,
download_url=self.download_url)
totalrows = resource.count()
if items:
if totalrows:
if s3.dataTable_iDisplayLength:
limit = 2 * s3.dataTable_iDisplayLength
else:
limit = 50
# Add a test on the first call here:
# Now get the limit rows for ajax style update of table
sqltable = resource.sqltable(fields=list_fields,
start=start,
limit=limit,
orderby=orderby,
download_url=self.download_url,
as_page=True)
aadata = dict(aaData = sqltable or [])
                    # Ugly hack to replace any occurrence of [id] with the true id
                    # Needed because the represent doesn't know the id
for i in range(len(aadata["aaData"])):
id = aadata["aaData"][i][0]
for j in range(len(aadata["aaData"][i])):
new = aadata["aaData"][i][j].replace("[id]",id)
aadata["aaData"][i][j] = new
aadata.update(iTotalRecords=totalrows,
iTotalDisplayRecords=totalrows)
response.aadata = jsons(aadata)
s3.start = 0
s3.limit = limit
else: # No items in database
                # s3import tables don't have a "deleted" field, but the check is kept for safety
if "deleted" in table:
available_records = db(table.deleted == False)
else:
available_records = db(table.id > 0)
# check for any records on an unfiltered table
if available_records.select(table.id,
limitby=(0, 1)).first():
items = self.crud_string(tablename, "msg_no_match")
else:
items = self.crud_string(tablename, "msg_list_empty")
output.update(items=items, sortby=sort_by)
# Value to be added to the dataTable ajax call
s3.dataTable_Method = "import"
return output
# -------------------------------------------------------------------------
def _item_element_represent(self, value):
"""
Represent the element in an import item for dataTable display
@param value: the string containing the element
"""
T = current.T
db = current.db
value = S3XML.xml_decode(value)
try:
element = etree.fromstring(value)
except:
# XMLSyntaxError: return the element as-is
return DIV(value)
tablename = element.get("name")
table = current.db[tablename]
output = DIV()
details = TABLE(_class="importItem [id]")
header, rows = self._add_item_details(element.findall("data"), table)
if header is not None:
output.append(header)
# Add components, if present
components = element.findall("resource")
for component in components:
ctablename = component.get("name")
ctable = db[ctablename]
self._add_item_details(component.findall("data"), ctable,
details=rows, prefix=True)
if rows:
details.append(TBODY(rows))
# Add error messages, if present
errors = current.xml.collect_errors(element)
if errors:
details.append(TFOOT(TR(TH("%s:" % T("Errors")),
TD(UL([LI(e) for e in errors])))))
if rows == [] and components == []:
# At this stage we don't have anything to display to see if we can
# find something to show. This could be the case when a table being
# imported is a resolver for a many to many relationship
refdetail = TABLE(_class="importItem [id]")
references = element.findall("reference")
for reference in references:
tuid = reference.get("tuid")
resource = reference.get("resource")
refdetail.append(TR(TD(resource), TD(tuid)))
output.append(refdetail)
else:
output.append(details)
return str(output)
# -------------------------------------------------------------------------
@staticmethod
def _add_item_details(data, table, details=None, prefix=False):
"""
Add details of the item element
@param data: the list of data elements in the item element
@param table: the table for the data
@param details: the existing details rows list (to append to)
"""
tablename = table._tablename
if details is None:
details = []
first = None
firstString = None
header = None
for child in data:
f = child.get("field", None)
if f not in table.fields:
continue
elif f == "wkt":
# Skip bulky WKT fields
continue
field = table[f]
ftype = str(field.type)
value = child.get("value", None)
if not value:
value = child.text
try:
value = S3Importer._decode_data(field, value)
except:
pass
if value:
value = S3XML.xml_encode(unicode(value))
else:
value = ""
            if f is not None and value is not None:
headerText = P(B("%s: " % f), value)
if not first:
first = headerText
if ftype == "string" and not firstString:
firstString = headerText
if f == "name":
header = headerText
if prefix:
details.append(TR(TH("%s.%s:" % (tablename, f)), TD(value)))
else:
details.append(TR(TH("%s:" % f), TD(value)))
if not header:
if firstString:
header = firstString
else:
header = first
return (header, details)
# -------------------------------------------------------------------------
@staticmethod
def _decode_data(field, value):
"""
Try to decode string data into their original type
@param field: the Field instance
@param value: the stringified value
@todo: replace this by ordinary decoder
"""
if field.type == "string" or \
field.type == "string" or \
field.type == "password" or \
field.type == "upload" or \
field.type == "text":
return value
elif field.type == "integer" or field.type == "id":
return int(value)
elif field.type == "double" or field.type == "decimal":
            return float(value)
elif field.type == 'boolean':
            if value and str(value)[:1].upper() not in ["F", "0"]:
return "T"
else:
return "F"
elif field.type == "date":
return value # @todo fix this to get a date
elif field.type == "time":
return value # @todo fix this to get a time
elif field.type == "datetime":
return value # @todo fix this to get a datetime
else:
return value
# -------------------------------------------------------------------------
@staticmethod
def date_represent(date_obj):
"""
Represent a datetime object as string
@param date_obj: the datetime object
@todo: replace by S3DateTime method?
"""
return date_obj.strftime("%d %B %Y, %I:%M%p")
# -------------------------------------------------------------------------
def _process_item_list(self, upload_id, vars):
"""
Get the list of IDs for the selected items from the "mode"
and "selected" request variables
@param upload_id: the upload_id
@param vars: the request variables
"""
items = None
if "mode" in vars:
mode = vars["mode"]
if "selected" in vars:
selected = vars["selected"].split(",")
else:
selected = []
if mode == "Inclusive":
items = selected
elif mode == "Exclusive":
all_items = self._get_all_items(upload_id, as_string=True)
items = [i for i in all_items if i not in selected]
return items
# -------------------------------------------------------------------------
def _get_all_items(self, upload_id, as_string=False):
""" Get a list of the record IDs of all import items for
            the given upload ID
@param upload_id: the upload ID
@param as_string: represent each ID as string
"""
item_table = S3ImportJob.define_item_table()
upload_table = self.upload_table
query = (upload_table.id == upload_id) & \
(item_table.job_id == upload_table.job_id) & \
(item_table.tablename == self.controller_tablename)
rows = current.db(query).select(item_table.id)
if as_string:
items = [str(row.id) for row in rows]
else:
items = [row.id for row in rows]
return items
# -------------------------------------------------------------------------
def _use_upload_table(self):
"""
Set the resource and the table to being s3_import_upload
"""
        if self.upload_resource is None:
from s3resource import S3Resource
(prefix, name) = self.UPLOAD_TABLE_NAME.split("_",1)
self.upload_resource = S3Resource(prefix, name)
self.resource = self.upload_resource
self.table = self.upload_table
self.tablename = self.upload_tablename
# -------------------------------------------------------------------------
def _use_controller_table(self):
"""
Set the resource and the table to be the imported resource
"""
self.resource = self.controller_resource
self.table = self.controller_table
self.tablename = self.controller_tablename
# -------------------------------------------------------------------------
def _use_import_item_table(self, job_id):
"""
Set the resource and the table to being s3_import_item
"""
        if self.item_resource is None:
from s3resource import S3Resource
(prefix, name) = S3ImportJob.ITEM_TABLE_NAME.split("_",1)
self.item_resource = S3Resource(prefix, name)
self.resource = self.item_resource
self.tablename = S3ImportJob.ITEM_TABLE_NAME
self.table = S3ImportJob.define_item_table()
# -------------------------------------------------------------------------
def __define_table(self):
""" Configures the upload table """
_debug("S3Importer.__define_table()")
T = current.T
db = current.db
request = current.request
self.upload_tablename = self.UPLOAD_TABLE_NAME
import_upload_status = {
1: T("Pending"),
2: T("In error"),
3: T("Completed"),
}
def user_name_represent(id):
# @todo: use s3_present_user?
rep_str = "-"
table = db.auth_user
query = (table.id == id)
row = db(query).select(table.first_name,
table.last_name,
limitby=(0, 1)).first()
if row:
rep_str = "%s %s" % (row.first_name, row.last_name)
return rep_str
def status_represent(index):
            if index is None:
return "Unknown" # @todo: use messages (internationalize)
else:
return import_upload_status[index]
now = request.utcnow
table = self.define_upload_table()
table.file.upload_folder = os.path.join(request.folder,
"uploads",
#"imports"
)
table.file.comment = DIV(_class="tooltip",
_title="%s|%s" %
(self.messages.import_file,
self.messages.import_file_comment))
table.file.label = self.messages.import_file
table.status.requires = IS_IN_SET(import_upload_status, zero=None)
table.status.represent = status_represent
table.user_id.label = self.messages.user_name
table.user_id.represent = user_name_represent
table.created_on.default = now
table.created_on.represent = self.date_represent
table.modified_on.default = now
table.modified_on.update = now
table.modified_on.represent = self.date_represent
table.replace_option.label = T("Replace")
self.upload_table = db[self.UPLOAD_TABLE_NAME]
# -------------------------------------------------------------------------
@classmethod
def define_upload_table(cls):
""" Defines the upload table """
db = current.db
uploadfolder = os.path.join(current.request.folder,
"uploads",
)
if cls.UPLOAD_TABLE_NAME not in db:
upload_table = db.define_table(cls.UPLOAD_TABLE_NAME,
Field("controller",
readable=False,
writable=False),
Field("function",
readable=False,
writable=False),
Field("file", "upload",
uploadfolder=os.path.join(current.request.folder, "uploads", "imports"),
autodelete=True),
Field("filename",
readable=False,
writable=False),
Field("status", "integer",
default=1,
readable=False,
writable=False),
Field("extra_data",
readable=False,
writable=False),
Field("replace_option", "boolean",
default=False,
readable=False,
writable=False),
Field("job_id",
length=128,
readable=False,
writable=False),
Field("user_id", "integer",
readable=False,
writable=False),
Field("created_on", "datetime",
readable=False,
writable=False),
Field("modified_on", "datetime",
readable=False,
writable=False),
Field("summary_added", "integer",
readable=False,
writable=False),
Field("summary_error", "integer",
readable=False,
writable=False),
Field("summary_ignored", "integer",
readable=False,
writable=False),
Field("completed_details", "text",
readable=False,
writable=False))
else:
upload_table = db[cls.UPLOAD_TABLE_NAME]
return upload_table
# =============================================================================
class S3ImportItem(object):
""" Class representing an import item (=a single record) """
METHOD = Storage(
CREATE="create",
UPDATE="update",
DELETE="delete"
)
POLICY = Storage(
THIS="THIS", # keep local instance
OTHER="OTHER", # update unconditionally
NEWER="NEWER", # update if import is newer
MASTER="MASTER" # update if import is master
)
# -------------------------------------------------------------------------
def __init__(self, job):
"""
Constructor
@param job: the import job this item belongs to
"""
self.job = job
self.ERROR = current.manager.ERROR
# Locking and error handling
self.lock = False
self.error = None
# Identification
import uuid
self.item_id = uuid.uuid4() # unique ID for this item
self.id = None
self.uid = None
# Data elements
self.table = None
self.tablename = None
self.element = None
self.data = None
self.original = None
self.components = []
self.references = []
self.load_components = []
self.load_references = []
self.parent = None
self.skip = False
# Conflict handling
self.mci = 2
self.mtime = datetime.utcnow()
self.modified = True
self.conflict = False
# Allowed import methods
self.strategy = job.strategy
# Update and conflict resolution policies
self.update_policy = job.update_policy
self.conflict_policy = job.conflict_policy
# Actual import method
self.method = None
self.onvalidation = None
self.onaccept = None
# Item import status flags
self.accepted = None
self.permitted = False
self.committed = False
# Writeback hook for circular references:
# Items which need a second write to update references
self.update = []
# -------------------------------------------------------------------------
def __repr__(self):
""" Helper method for debugging """
_str = "<S3ImportItem %s {item_id=%s uid=%s id=%s error=%s data=%s}>" % \
(self.table, self.item_id, self.uid, self.id, self.error, self.data)
return _str
# -------------------------------------------------------------------------
def parse(self,
element,
original=None,
table=None,
tree=None,
files=None):
"""
Read data from a <resource> element
@param element: the element
@param table: the DB table
@param tree: the import tree
@param files: uploaded files
@returns: True if successful, False if not (sets self.error)
"""
db = current.db
xml = current.xml
manager = current.manager
validate = manager.validate
s3db = current.s3db
self.element = element
if table is None:
tablename = element.get(xml.ATTRIBUTE.name, None)
try:
table = s3db[tablename]
except:
self.error = self.ERROR.BAD_RESOURCE
element.set(xml.ATTRIBUTE.error, self.error)
return False
self.table = table
self.tablename = table._tablename
if original is None:
original = manager.original(table, element)
data = xml.record(table, element,
files=files,
original=original,
validate=validate)
if data is None:
self.error = self.ERROR.VALIDATION_ERROR
self.accepted = False
if not element.get(xml.ATTRIBUTE.error, False):
element.set(xml.ATTRIBUTE.error, str(self.error))
return False
self.data = data
if original is not None:
self.original = original
self.id = original[table._id.name]
if xml.UID in original:
self.uid = original[xml.UID]
self.data.update({xml.UID:self.uid})
elif xml.UID in data:
self.uid = data[xml.UID]
if xml.MTIME in data:
self.mtime = data[xml.MTIME]
if xml.MCI in data:
self.mci = data[xml.MCI]
_debug("New item: %s" % self)
return True
# -------------------------------------------------------------------------
def deduplicate(self):
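        """
            Detect whether this item matches an existing record and, if so,
            switch the import method to UPDATE (uses the table's
            "deduplicate" resolver hook where configured)
        """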
RESOLVER = "deduplicate"
if self.id:
return
table = self.table
if table is None:
return
if self.original is not None:
original = self.original
else:
original = current.manager.original(table, self.data)
if original is not None:
self.original = original
self.id = original[table._id.name]
UID = current.xml.UID
if UID in original:
self.uid = original[UID]
self.data.update({UID:self.uid})
self.method = self.METHOD.UPDATE
else:
resolve = current.s3db.get_config(self.tablename, RESOLVER)
if self.data and resolve:
resolve(self)
return
# -------------------------------------------------------------------------
def authorize(self):
"""
Authorize the import of this item, sets self.permitted
"""
db = current.db
manager = current.manager
authorize = manager.permit
self.permitted = False
if not self.table:
return False
prefix = self.tablename.split("_", 1)[0]
if prefix in manager.PROTECTED:
return False
if not authorize:
self.permitted = True
self.method = self.METHOD.CREATE
if self.id:
if self.data.deleted is True:
self.method = self.METHOD.DELETE
self.accepted = True
else:
if not self.original:
query = (self.table.id == self.id)
self.original = db(query).select(limitby=(0, 1)).first()
if self.original:
self.method = self.METHOD.UPDATE
if self.method == self.METHOD.CREATE:
self.id = 0
if authorize:
self.permitted = authorize(self.method,
self.tablename,
record_id=self.id)
return self.permitted
# -------------------------------------------------------------------------
def validate(self):
"""
Validate this item (=record onvalidation), sets self.accepted
"""
if self.accepted is not None:
return self.accepted
if self.data is None or not self.table:
self.accepted = False
return False
form = Storage()
form.method = self.method
form.vars = self.data
if self.id:
form.vars.id = self.id
form.errors = Storage()
tablename = self.tablename
key = "%s_onvalidation" % self.method
s3db = current.s3db
onvalidation = s3db.get_config(tablename, key,
s3db.get_config(tablename, "onvalidation"))
if onvalidation:
try:
callback(onvalidation, form, tablename=tablename)
except:
pass # @todo need a better handler here.
self.accepted = True
if form.errors:
error = current.xml.ATTRIBUTE.error
for k in form.errors:
e = self.element.findall("data[@field='%s']" % k)
if not e:
e = self.element.findall("reference[@field='%s']" % k)
if not e:
e = self.element
form.errors[k] = "[%s] %s" % (k, form.errors[k])
else:
e = e[0]
e.set(error,
str(form.errors[k]).decode("utf-8"))
self.error = self.ERROR.VALIDATION_ERROR
self.accepted = False
return self.accepted
# -------------------------------------------------------------------------
def commit(self, ignore_errors=False):
"""
Commit this item to the database
@param ignore_errors: skip invalid components
(still reports errors)
"""
db = current.db
s3db = current.s3db
xml = current.xml
manager = current.manager
table = self.table
# Check if already committed
if self.committed:
# already committed
return True
# If the parent item gets skipped, then skip this item as well
if self.parent is not None and self.parent.skip:
return True
_debug("Committing item %s" % self)
# Resolve references
self._resolve_references()
# Validate
if not self.validate():
_debug("Validation error: %s (%s)" % (self.error, xml.tostring(self.element, pretty_print=True)))
self.skip = True
return ignore_errors
elif self.components:
for component in self.components:
if not component.validate():
if hasattr(component, "tablename"):
tn = component.tablename
else:
tn = None
_debug("Validation error, component=%s" % tn)
component.skip = True
# Skip this item on any component validation errors
# unless ignore_errors is True
if ignore_errors:
continue
else:
self.skip = True
return False
# De-duplicate
self.deduplicate()
# Log this item
if manager.log is not None:
manager.log(self)
# Authorize item
if not self.authorize():
_debug("Not authorized - skip")
self.error = manager.ERROR.NOT_PERMITTED
self.skip = True
return ignore_errors
_debug("Method: %s" % self.method)
# Check if import method is allowed in strategy
if not isinstance(self.strategy, (list, tuple)):
self.strategy = [self.strategy]
if self.method not in self.strategy:
_debug("Method not in strategy - skip")
self.error = manager.ERROR.NOT_PERMITTED
self.skip = True
return True
this = self.original
if not this and self.id and \
self.method in (self.METHOD.UPDATE, self.METHOD.DELETE):
query = (table.id == self.id)
this = db(query).select(limitby=(0, 1)).first()
this_mtime = None
this_mci = 0
if this:
if xml.MTIME in table.fields:
this_mtime = xml.as_utc(this[xml.MTIME])
if xml.MCI in table.fields:
this_mci = this[xml.MCI]
self.mtime = xml.as_utc(self.mtime)
# Conflict detection
this_modified = True
self.modified = True
self.conflict = False
last_sync = xml.as_utc(self.job.last_sync)
if last_sync:
if this_mtime and this_mtime < last_sync:
this_modified = False
if self.mtime and self.mtime < last_sync:
self.modified = False
if self.modified and this_modified:
self.conflict = True
if self.conflict and \
self.method in (self.METHOD.UPDATE, self.METHOD.DELETE):
_debug("Conflict: %s" % self)
if self.job.onconflict:
self.job.onconflict(self)
if self.data is not None:
data = Storage(self.data)
else:
data = Storage()
# Update existing record
if self.method == self.METHOD.UPDATE:
if this:
if "deleted" in this and this.deleted:
policy = self._get_update_policy(None)
if policy == self.POLICY.NEWER and \
this_mtime and this_mtime > self.mtime or \
policy == self.POLICY.MASTER and \
(this_mci == 0 or self.mci != 1):
self.skip = True
return True
fields = data.keys()
for f in fields:
if f not in this:
continue
if isinstance(this[f], datetime):
if xml.as_utc(data[f]) == xml.as_utc(this[f]):
del data[f]
continue
else:
if data[f] == this[f]:
del data[f]
continue
remove = False
policy = self._get_update_policy(f)
if policy == self.POLICY.THIS:
remove = True
elif policy == self.POLICY.NEWER:
if this_mtime and this_mtime > self.mtime:
remove = True
elif policy == self.POLICY.MASTER:
if this_mci == 0 or self.mci != 1:
remove = True
if remove:
del data[f]
self.data.update({f:this[f]})
if "deleted" in this and this.deleted:
# Undelete re-imported records:
data.update(deleted=False)
if "deleted_fk" in table:
data.update(deleted_fk="")
if "created_by" in table:
data.update(created_by=table.created_by.default)
if "modified_by" in table:
data.update(modified_by=table.modified_by.default)
if not self.skip and not self.conflict and \
(len(data) or self.components or self.references):
if self.uid and xml.UID in table:
data.update({xml.UID:self.uid})
if xml.MTIME in table:
data.update({xml.MTIME: self.mtime})
if xml.MCI in data:
# retain local MCI on updates
del data[xml.MCI]
query = (table._id == self.id)
try:
success = db(query).update(**dict(data))
except:
self.error = sys.exc_info()[1]
self.skip = True
return False
if success:
self.committed = True
else:
# Nothing to update
self.committed = True
# Create new record
elif self.method == self.METHOD.CREATE:
# Do not apply field policy to UID and MCI
UID = xml.UID
if UID in data:
del data[UID]
MCI = xml.MCI
if MCI in data:
del data[MCI]
            for f in data.keys(): # iterate over a copy of the keys so entries can be deleted safely
policy = self._get_update_policy(f)
if policy == self.POLICY.MASTER and self.mci != 1:
del data[f]
if len(data) or self.components or self.references:
# Restore UID and MCI
if self.uid and UID in table.fields:
data.update({UID:self.uid})
if MCI in table.fields:
data.update({MCI:self.mci})
# Insert the new record
try:
success = table.insert(**dict(data))
except:
self.error = sys.exc_info()[1]
self.skip = True
return False
if success:
self.id = success
self.committed = True
else:
# Nothing to create
self.skip = True
return True
# Delete local record
elif self.method == self.METHOD.DELETE:
if this:
if this.deleted:
self.skip = True
policy = self._get_update_policy(None)
if policy == self.POLICY.THIS:
self.skip = True
elif policy == self.POLICY.NEWER and \
(this_mtime and this_mtime > self.mtime):
self.skip = True
elif policy == self.POLICY.MASTER and \
(this_mci == 0 or self.mci != 1):
self.skip = True
else:
self.skip = True
if not self.skip and not self.conflict:
prefix, name = self.tablename.split("_", 1)
resource = manager.define_resource(prefix, name, id=self.id)
ondelete = s3db.get_config(self.tablename, "ondelete")
success = resource.delete(ondelete=ondelete,
cascade=True)
if resource.error:
self.error = resource.error
self.skip = True
return ignore_errors
_debug("Success: %s, id=%s %sd" % (self.tablename, self.id,
self.skip and "skippe" or \
self.method))
return True
# Audit + onaccept on successful commits
if self.committed:
form = Storage()
form.method = self.method
form.vars = self.data
tablename = self.tablename
prefix, name = tablename.split("_", 1)
if self.id:
form.vars.id = self.id
if manager.audit is not None:
manager.audit(self.method, prefix, name,
form=form,
record=self.id,
representation="xml")
s3db.update_super(table, form.vars)
if self.method == self.METHOD.CREATE:
current.auth.s3_set_record_owner(table, self.id)
key = "%s_onaccept" % self.method
onaccept = s3db.get_config(tablename, key,
s3db.get_config(tablename, "onaccept"))
if onaccept:
callback(onaccept, form, tablename=self.tablename)
# Update referencing items
if self.update and self.id:
for u in self.update:
item = u.get("item", None)
if not item:
continue
field = u.get("field", None)
if isinstance(field, (list, tuple)):
pkey, fkey = field
query = table.id == self.id
row = db(query).select(table[pkey],
limitby=(0, 1)).first()
if row:
item._update_reference(fkey, row[pkey])
else:
item._update_reference(field, self.id)
_debug("Success: %s, id=%s %sd" % (self.tablename, self.id,
self.skip and "skippe" or \
self.method))
return True
# -------------------------------------------------------------------------
def _get_update_policy(self, field):
"""
Get the update policy for a field (if the item will
update an existing record)
@param field: the name of the field
"""
if isinstance(self.update_policy, dict):
r = self.update_policy.get(field,
self.update_policy.get("__default__", self.POLICY.THIS))
else:
r = self.update_policy
        if r not in self.POLICY.values():
r = self.POLICY.THIS
return r
# -------------------------------------------------------------------------
def _resolve_references(self):
"""
Resolve the references of this item (=look up all foreign
keys from other items of the same job). If a foreign key
is not yet available, it will be scheduled for later update.
"""
if not self.table:
return
items = self.job.items
for reference in self.references:
item = None
field = reference.field
entry = reference.entry
if not entry:
continue
# Resolve key tuples
if isinstance(field, (list,tuple)):
pkey, fkey = field
else:
pkey, fkey = ("id", field)
# Resolve the key table name
ktablename, key, multiple = s3_get_foreign_key(self.table[fkey])
if not ktablename:
if self.tablename == "auth_user" and \
fkey == "organisation_id":
ktablename = "org_organisation"
else:
continue
if entry.tablename:
ktablename = entry.tablename
try:
ktable = current.s3db[ktablename]
except:
continue
# Resolve the foreign key (value)
fk = entry.id
if entry.item_id:
item = items[entry.item_id]
if item:
fk = item.id
if fk and pkey != "id":
row = current.db(ktable._id == fk).select(ktable[pkey],
limitby=(0, 1)).first()
if not row:
fk = None
continue
else:
fk = row[pkey]
# Update record data
if fk:
if multiple:
val = self.data.get(fkey, [])
if fk not in val:
val.append(fk)
self.data[fkey] = val
else:
self.data[fkey] = fk
else:
if fkey in self.data and not multiple:
del self.data[fkey]
if item:
item.update.append(dict(item=self, field=fkey))
# -------------------------------------------------------------------------
def _update_reference(self, field, value):
"""
Helper method to update a foreign key in an already written
record. Will be called by the referenced item after (and only
if) it has been committed. This is only needed if the reference
could not be resolved before commit due to circular references.
@param field: the field name of the foreign key
@param value: the value of the foreign key
"""
if not value or not self.table:
return
db = current.db
if self.id and self.permitted:
fieldtype = str(self.table[field].type)
if fieldtype.startswith("list:reference"):
query = (self.table.id == self.id)
record = db(query).select(self.table[field],
limitby=(0,1)).first()
if record:
values = record[field]
if value not in values:
values.append(value)
db(self.table.id == self.id).update(**{field:values})
else:
db(self.table.id == self.id).update(**{field:value})
# -------------------------------------------------------------------------
def store(self, item_table=None):
"""
Store this item in the DB
"""
_debug("Storing item %s" % self)
if item_table is None:
return None
db = current.db
query = item_table.item_id == self.item_id
row = db(query).select(item_table.id, limitby=(0, 1)).first()
if row:
record_id = row.id
else:
record_id = None
record = Storage(job_id = self.job.job_id,
item_id = self.item_id,
tablename = self.tablename,
record_uid = self.uid,
error = self.error)
if self.element is not None:
element_str = current.xml.tostring(self.element,
xml_declaration=False)
record.update(element=element_str)
if self.data is not None:
data = Storage()
for f in self.data.keys():
table = self.table
if f not in table.fields:
continue
fieldtype = str(self.table[f].type)
if fieldtype == "id" or s3_has_foreign_key(self.table[f]):
continue
data.update({f:self.data[f]})
data_str = cPickle.dumps(data)
record.update(data=data_str)
ritems = []
for reference in self.references:
field = reference.field
entry = reference.entry
store_entry = None
if entry:
if entry.item_id is not None:
store_entry = dict(field=field,
item_id=str(entry.item_id))
elif entry.uid is not None:
store_entry = dict(field=field,
tablename=entry.tablename,
uid=str(entry.uid))
if store_entry is not None:
ritems.append(json.dumps(store_entry))
if ritems:
record.update(ritems=ritems)
citems = [c.item_id for c in self.components]
if citems:
record.update(citems=citems)
if self.parent:
record.update(parent=self.parent.item_id)
if record_id:
db(item_table.id == record_id).update(**record)
else:
record_id = item_table.insert(**record)
_debug("Record ID=%s" % record_id)
return record_id
# -------------------------------------------------------------------------
def restore(self, row):
"""
            Restore an item from an item table row. This does not restore
            the references (since this cannot be done before all items
            are restored); call job.restore_references() to do that
@param row: the item table row
"""
xml = current.xml
self.item_id = row.item_id
self.accepted = None
self.permitted = False
self.committed = False
tablename = row.tablename
self.id = None
self.uid = row.record_uid
if row.data is not None:
self.data = cPickle.loads(row.data)
else:
self.data = Storage()
data = self.data
if xml.MTIME in data:
self.mtime = data[xml.MTIME]
if xml.MCI in data:
self.mci = data[xml.MCI]
UID = xml.UID
if UID in data:
self.uid = data[UID]
self.element = etree.fromstring(row.element)
if row.citems:
self.load_components = row.citems
if row.ritems:
self.load_references = [json.loads(ritem) for ritem in row.ritems]
self.load_parent = row.parent
try:
table = current.s3db[tablename]
except:
self.error = self.ERROR.BAD_RESOURCE
return False
else:
self.table = table
self.tablename = tablename
original = current.manager.original(table, self.data)
if original is not None:
self.original = original
self.id = original[table._id.name]
if UID in original:
self.uid = original[UID]
self.data.update({UID:self.uid})
self.error = row.error
if self.error and not self.data:
# Validation error
return False
return True
# =============================================================================
class S3ImportJob():
"""
Class to import an element tree into the database
"""
JOB_TABLE_NAME = "s3_import_job"
ITEM_TABLE_NAME = "s3_import_item"
# -------------------------------------------------------------------------
def __init__(self, manager, table,
tree=None,
files=None,
job_id=None,
strategy=None,
update_policy=None,
conflict_policy=None,
last_sync=None,
onconflict=None):
"""
Constructor
@param manager: the S3RequestManager instance performing this job
@param tree: the element tree to import
@param files: files attached to the import (for upload fields)
@param job_id: restore job from database (record ID or job_id)
@param strategy: the import strategy
@param update_policy: the update policy
@param conflict_policy: the conflict resolution policy
@param last_sync: the last synchronization time stamp (datetime)
@param onconflict: custom conflict resolver function
"""
self.error = None # the last error
self.error_tree = etree.Element(current.xml.TAG.root)
self.table = table
self.tree = tree
self.files = files
self.directory = Storage()
self.elements = Storage()
self.items = Storage()
self.references = []
self.job_table = None
self.item_table = None
self.count = 0 # total number of records imported
self.created = [] # IDs of created records
self.updated = [] # IDs of updated records
self.deleted = [] # IDs of deleted records
# Import strategy
self.strategy = strategy
if self.strategy is None:
self.strategy = [S3ImportItem.METHOD.CREATE,
S3ImportItem.METHOD.UPDATE,
S3ImportItem.METHOD.DELETE]
if not isinstance(self.strategy, (tuple, list)):
self.strategy = [self.strategy]
# Update policy (default=always update)
self.update_policy = update_policy
if not self.update_policy:
self.update_policy = S3ImportItem.POLICY.OTHER
# Conflict resolution policy (default=always update)
self.conflict_policy = conflict_policy
if not self.conflict_policy:
self.conflict_policy = S3ImportItem.POLICY.OTHER
# Synchronization settings
self.mtime = None
self.last_sync = last_sync
self.onconflict = onconflict
if job_id:
self.__define_tables()
jobtable = self.job_table
if str(job_id).isdigit():
query = jobtable.id == job_id
else:
query = jobtable.job_id == job_id
row = current.db(query).select(limitby=(0, 1)).first()
if not row:
raise SyntaxError("Job record not found")
self.job_id = row.job_id
if not self.table:
tablename = row.tablename
try:
table = current.s3db[tablename]
except:
pass
else:
import uuid
self.job_id = uuid.uuid4() # unique ID for this job
# -------------------------------------------------------------------------
def add_item(self,
element=None,
original=None,
components=None,
parent=None,
joinby=None):
"""
Parse and validate an XML element and add it as new item
to the job.
@param element: the element
@param original: the original DB record (if already available,
will otherwise be looked-up by this function)
@param components: a dictionary of components (as in S3Resource)
to include in the job (defaults to all
defined components)
@param parent: the parent item (if this is a component)
@param joinby: the component join key(s) (if this is a component)
@returns: a unique identifier for the new item, or None if there
was an error. self.error contains the last error, and
self.error_tree an element tree with all failing elements
including error attributes.
"""
if element in self.elements:
# element has already been added to this job
return self.elements[element]
# Parse the main element
item = S3ImportItem(self)
# Update lookup lists
item_id = item.item_id
self.items[item_id] = item
if element is not None:
self.elements[element] = item_id
if not item.parse(element,
original=original,
files=self.files):
self.error = item.error
item.accepted = False
if parent is None:
self.error_tree.append(deepcopy(item.element))
else:
# Now parse the components
table = item.table
components = current.s3db.get_components(table, names=components)
cnames = Storage()
cinfos = Storage()
for alias in components:
component = components[alias]
pkey = component.pkey
if component.linktable:
ctable = component.linktable
fkey = component.lkey
else:
ctable = component.table
fkey = component.fkey
ctablename = ctable._tablename
if ctablename in cnames:
cnames[ctablename].append(alias)
else:
cnames[ctablename] = [alias]
cinfos[(ctablename, alias)] = Storage(component = component,
ctable = ctable,
pkey = pkey,
fkey = fkey,
original = None,
uid = None)
add_item = self.add_item
xml = current.xml
for celement in xml.components(element, names=cnames.keys()):
# Get the component tablename
ctablename = celement.get(xml.ATTRIBUTE.name, None)
if not ctablename:
continue
# Get the component alias (for disambiguation)
calias = celement.get(xml.ATTRIBUTE.alias, None)
if calias is None:
if ctablename not in cnames:
continue
aliases = cnames[ctablename]
if len(aliases) == 1:
calias = aliases[0]
else:
# ambiguous components *must* use alias
continue
if (ctablename, calias) not in cinfos:
continue
else:
cinfo = cinfos[(ctablename, calias)]
component = cinfo.component
original = cinfo.original
ctable = cinfo.ctable
pkey = cinfo.pkey
fkey = cinfo.fkey
if not component.multiple:
if cinfo.uid is not None:
continue
if original is None and item.id:
query = (table.id == item.id) & \
(table[pkey] == ctable[fkey])
original = current.db(query).select(ctable.ALL,
limitby=(0, 1)).first()
if original:
cinfo.uid = uid = original.get(xml.UID, None)
celement.set(xml.UID, uid)
cinfo.original = original
item_id = add_item(element=celement,
original=original,
parent=item,
joinby=(pkey, fkey))
if item_id is None:
item.error = self.error
self.error_tree.append(deepcopy(item.element))
else:
citem = self.items[item_id]
citem.parent = item
item.components.append(citem)
# Handle references
table = item.table
tree = self.tree
if tree is not None:
fields = [table[f] for f in table.fields]
rfields = filter(s3_has_foreign_key, fields)
item.references = self.lookahead(element,
table=table,
fields=rfields,
tree=tree,
directory=self.directory)
for reference in item.references:
entry = reference.entry
if entry and entry.element is not None:
item_id = add_item(element=entry.element)
if item_id:
entry.update(item_id=item_id)
# Parent reference
if parent is not None:
entry = Storage(item_id=parent.item_id,
element=parent.element,
tablename=parent.tablename)
item.references.append(Storage(field=joinby,
entry=entry))
return item.item_id
# -------------------------------------------------------------------------
def lookahead(self,
element,
table=None,
fields=None,
tree=None,
directory=None):
"""
Find referenced elements in the tree
@param element: the element
@param table: the DB table
@param fields: the FK fields in the table
@param tree: the import tree
@param directory: a dictionary to lookup elements in the tree
(will be filled in by this function)
"""
db = current.db
s3db = current.s3db
xml = current.xml
import_uid = xml.import_uid
ATTRIBUTE = xml.ATTRIBUTE
TAG = xml.TAG
UID = xml.UID
reference_list = []
root = None
if tree is not None:
if isinstance(tree, etree._Element):
root = tree
else:
root = tree.getroot()
references = element.findall("reference")
for reference in references:
field = reference.get(ATTRIBUTE.field, None)
# Ignore references without valid field-attribute
if not field or field not in fields:
continue
# Find the key table
multiple = False
fieldtype = str(table[field].type)
if fieldtype.startswith("reference"):
ktablename = fieldtype[10:]
elif fieldtype.startswith("list:reference"):
ktablename = fieldtype[15:]
multiple = True
else:
# ignore if the field is not a reference type
continue
try:
ktable = s3db[ktablename]
except:
# Invalid tablename - skip
continue
tablename = reference.get(ATTRIBUTE.resource, None)
# Ignore references to tables without UID field:
if UID not in ktable.fields:
continue
# Fall back to key table name if tablename is not specified:
if not tablename:
tablename = ktablename
# Super-entity references must use the super-key:
if tablename != ktablename:
field = (ktable._id.name, field)
# Ignore direct references to super-entities:
if tablename == ktablename and ktable._id.name != "id":
continue
# Get the foreign key
uids = reference.get(UID, None)
attr = UID
if not uids:
uids = reference.get(ATTRIBUTE.tuid, None)
attr = ATTRIBUTE.tuid
if uids and multiple:
uids = json.loads(uids)
elif uids:
uids = [uids]
# Find the elements and map to DB records
relements = []
# Create a UID<->ID map
id_map = Storage()
if attr == UID and uids:
_uids = map(import_uid, uids)
query = ktable[UID].belongs(_uids)
records = db(query).select(ktable.id,
ktable[UID])
id_map = dict([(r[UID], r.id) for r in records])
if not uids:
# Anonymous reference: <resource> inside the element
expr = './/%s[@%s="%s"]' % (TAG.resource,
ATTRIBUTE.name,
tablename)
relements = reference.xpath(expr)
if relements and not multiple:
relements = [relements[0]]
elif root is not None:
for uid in uids:
entry = None
# Entry already in directory?
if directory is not None:
entry = directory.get((tablename, attr, uid), None)
if not entry:
expr = ".//%s[@%s='%s' and @%s='%s']" % (
TAG.resource,
ATTRIBUTE.name,
tablename,
attr,
uid)
e = root.xpath(expr)
if e:
# Element in the source => append to relements
relements.append(e[0])
else:
# No element found, see if original record exists
_uid = import_uid(uid)
if _uid and _uid in id_map:
_id = id_map[_uid]
entry = Storage(tablename=tablename,
element=None,
uid=uid,
id=_id,
item_id=None)
reference_list.append(Storage(field=field,
entry=entry))
else:
continue
else:
reference_list.append(Storage(field=field,
entry=entry))
# Create entries for all newly found elements
for relement in relements:
uid = relement.get(attr, None)
if attr == UID:
_uid = import_uid(uid)
id = _uid and id_map and id_map.get(_uid, None) or None
else:
_uid = None
id = None
entry = Storage(tablename=tablename,
element=relement,
uid=uid,
id=id,
item_id=None)
# Add entry to directory
if uid and directory is not None:
directory[(tablename, attr, uid)] = entry
# Append the entry to the reference list
reference_list.append(Storage(field=field, entry=entry))
return reference_list
# -------------------------------------------------------------------------
def load_item(self, row):
"""
Load an item from the item table (counterpart to add_item
when restoring a job from the database)
"""
item = S3ImportItem(self)
if not item.restore(row):
self.error = item.error
if item.load_parent is None:
self.error_tree.append(deepcopy(item.element))
# Update lookup lists
item_id = item.item_id
self.items[item_id] = item
return item_id
# -------------------------------------------------------------------------
def resolve(self, item_id, import_list):
"""
Resolve the reference list of an item
@param item_id: the import item UID
@param import_list: the ordered list of items (UIDs) to import
"""
item = self.items[item_id]
if item.lock or item.accepted is False:
return False
references = []
for reference in item.references:
ritem_id = reference.entry.item_id
if ritem_id and ritem_id not in import_list:
references.append(ritem_id)
for ritem_id in references:
item.lock = True
if self.resolve(ritem_id, import_list):
import_list.append(ritem_id)
item.lock = False
return True
# -------------------------------------------------------------------------
def commit(self, ignore_errors=False):
"""
Commit the import job to the DB
@param ignore_errors: skip any items with errors
(does still report the errors)
"""
ATTRIBUTE = current.xml.ATTRIBUTE
# Resolve references
import_list = []
for item_id in self.items:
self.resolve(item_id, import_list)
if item_id not in import_list:
import_list.append(item_id)
# Commit the items
items = self.items
count = 0
mtime = None
created = []
cappend = created.append
updated = []
deleted = []
tablename = self.table._tablename
for item_id in import_list:
item = items[item_id]
error = None
success = item.commit(ignore_errors=ignore_errors)
error = item.error
if error:
self.error = error
element = item.element
if element is not None:
if not element.get(ATTRIBUTE.error, False):
element.set(ATTRIBUTE.error, str(self.error))
self.error_tree.append(deepcopy(element))
if not ignore_errors:
return False
elif item.tablename == tablename:
count += 1
if mtime is None or item.mtime > mtime:
mtime = item.mtime
if item.id:
if item.method == item.METHOD.CREATE:
cappend(item.id)
elif item.method == item.METHOD.UPDATE:
updated.append(item.id)
elif item.method == item.METHOD.DELETE:
deleted.append(item.id)
self.count = count
self.mtime = mtime
self.created = created
self.updated = updated
self.deleted = deleted
return True
# -------------------------------------------------------------------------
def __define_tables(self):
"""
Define the database tables for jobs and items
"""
self.job_table = self.define_job_table()
self.item_table = self.define_item_table()
# -------------------------------------------------------------------------
@classmethod
def define_job_table(cls):
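        """ Define the import job table, creating it on first use """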
db = current.db
if cls.JOB_TABLE_NAME not in db:
job_table = db.define_table(cls.JOB_TABLE_NAME,
Field("job_id", length=128,
unique=True,
notnull=True),
Field("tablename"),
Field("timestmp", "datetime",
default=datetime.utcnow()))
else:
job_table = db[cls.JOB_TABLE_NAME]
return job_table
# -------------------------------------------------------------------------
@classmethod
def define_item_table(cls):
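        """ Define the import item table, creating it on first use """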
db = current.db
if cls.ITEM_TABLE_NAME not in db:
item_table = db.define_table(cls.ITEM_TABLE_NAME,
Field("item_id", length=128,
unique=True,
notnull=True),
Field("job_id", length=128),
Field("tablename", length=128),
#Field("record_id", "integer"),
Field("record_uid"),
Field("error", "text"),
Field("data", "text"),
Field("element", "text"),
Field("ritems", "list:string"),
Field("citems", "list:string"),
Field("parent", length=128))
else:
item_table = db[cls.ITEM_TABLE_NAME]
return item_table
# -------------------------------------------------------------------------
def store(self):
"""
Store this job and all its items in the job table
"""
db = current.db
_debug("Storing Job ID=%s" % self.job_id)
self.__define_tables()
jobtable = self.job_table
query = jobtable.job_id == self.job_id
row = db(query).select(jobtable.id, limitby=(0, 1)).first()
if row:
record_id = row.id
else:
record_id = None
record = Storage(job_id=self.job_id)
try:
tablename = self.table._tablename
except:
pass
else:
record.update(tablename=tablename)
for item in self.items.values():
item.store(item_table=self.item_table)
if record_id:
db(jobtable.id == record_id).update(**record)
else:
record_id = jobtable.insert(**record)
_debug("Job record ID=%s" % record_id)
return record_id
# -------------------------------------------------------------------------
def get_tree(self):
"""
Reconstruct the element tree of this job
"""
if self.tree is not None:
            return self.tree
else:
xml = current.xml
root = etree.Element(xml.TAG.root)
for item in self.items.values():
if item.element is not None and not item.parent:
if item.tablename == self.table._tablename or \
item.element.get(xml.UID, None) or \
item.element.get(xml.ATTRIBUTE.tuid, None):
root.append(deepcopy(item.element))
return etree.ElementTree(root)
# -------------------------------------------------------------------------
def delete(self):
"""
Delete this job and all its items from the job table
"""
db = current.db
_debug("Deleting job ID=%s" % self.job_id)
self.__define_tables()
item_table = self.item_table
query = item_table.job_id == self.job_id
db(query).delete()
job_table = self.job_table
query = job_table.job_id == self.job_id
db(query).delete()
# -------------------------------------------------------------------------
def restore_references(self):
"""
Restore the job's reference structure after loading items
from the item table
"""
db = current.db
UID = current.xml.UID
for item in self.items.values():
for citem_id in item.load_components:
if citem_id in self.items:
item.components.append(self.items[citem_id])
item.load_components = []
for ritem in item.load_references:
field = ritem["field"]
if "item_id" in ritem:
item_id = ritem["item_id"]
if item_id in self.items:
_item = self.items[item_id]
entry = Storage(tablename=_item.tablename,
element=_item.element,
uid=_item.uid,
id=_item.id,
item_id=item_id)
item.references.append(Storage(field=field,
entry=entry))
else:
_id = None
uid = ritem.get("uid", None)
tablename = ritem.get("tablename", None)
if tablename and uid:
try:
table = current.s3db[tablename]
except:
continue
if UID not in table.fields:
continue
query = table[UID] == uid
row = db(query).select(table._id,
limitby=(0, 1)).first()
if row:
_id = row[table._id.name]
else:
continue
entry = Storage(tablename = ritem["tablename"],
element=None,
uid = ritem["uid"],
id = _id,
item_id = None)
item.references.append(Storage(field=field,
entry=entry))
item.load_references = []
if item.load_parent is not None:
item.parent = self.items[item.load_parent]
item.load_parent = None
# END =========================================================================
| ashwyn/eden-message_parser | modules/s3/s3import.py | Python | mit | 123,322 |
package br.ufrj.g2matricula.domain;
import org.springframework.data.elasticsearch.annotations.Document;
import javax.persistence.*;
import javax.validation.constraints.*;
import java.io.Serializable;
import java.util.Objects;
import br.ufrj.g2matricula.domain.enumeration.MatriculaStatus;
/**
* A Matricula.
*/
@Entity
@Table(name = "matricula")
@Document(indexName = "matricula")
public class Matricula implements Serializable {
private static final long serialVersionUID = 1L;
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
@NotNull
@Enumerated(EnumType.STRING)
@Column(name = "status", nullable = false)
private MatriculaStatus status;
@ManyToOne
private Aluno dreAluno;
@ManyToOne
private Curso curso;
// jhipster-needle-entity-add-field - JHipster will add fields here, do not remove
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public MatriculaStatus getStatus() {
return status;
}
public Matricula status(MatriculaStatus status) {
this.status = status;
return this;
}
public void setStatus(MatriculaStatus status) {
this.status = status;
}
public Aluno getDreAluno() {
return dreAluno;
}
public Matricula dreAluno(Aluno aluno) {
this.dreAluno = aluno;
return this;
}
public void setDreAluno(Aluno aluno) {
this.dreAluno = aluno;
}
public Curso getCurso() {
return curso;
}
public Matricula curso(Curso curso) {
this.curso = curso;
return this;
}
public void setCurso(Curso curso) {
this.curso = curso;
}
// jhipster-needle-entity-add-getters-setters - JHipster will add getters and setters here, do not remove
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
Matricula matricula = (Matricula) o;
if (matricula.getId() == null || getId() == null) {
return false;
}
return Objects.equals(getId(), matricula.getId());
}
@Override
public int hashCode() {
return Objects.hashCode(getId());
}
@Override
public String toString() {
return "Matricula{" +
"id=" + getId() +
", status='" + getStatus() + "'" +
"}";
}
}
| DamascenoRafael/cos482-qualidade-de-software | www/src/main/java/br/ufrj/g2matricula/domain/Matricula.java | Java | mit | 2,535 |
# rhodecode-ce-dockerized
Docker container for RhodeCode Community Edition repository management platform
# WIP
| darneta/rhodecode-ce-dockerized | README.md | Markdown | mit | 113 |
<?php
class Admin_GeneralModel extends CI_Model {
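    /**
     * Fetch all admin module categories (CID, CategoryName), ordered by
     * Ordering. Returns the CI query result object, or FALSE if empty.
     */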
public function GetAdminModuleCategoryList()
{
$this->db->select('CID, CategoryName');
$this->db->from('admin_module_category');
$this->db->order_by('Ordering');
$query = $this->db->get();
if($query->num_rows())
return $query;
else
return FALSE;
}
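    /**
     * Fetch all admin modules (MID, CID, ModuleName, DisplayName), ordered
     * by Ordering. Returns the CI query result object, or FALSE if empty.
     */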
public function GetAdminModuleList()
{
$this->db->select('MID, CID, ModuleName, DisplayName');
$this->db->from('admin_module');
$this->db->order_by('Ordering');
$query = $this->db->get();
if($query->num_rows())
return $query;
else
return FALSE;
}
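    /**
     * Fetch module actions (AID, MID, Action), optionally filtered by $MID.
     * Returns an array of row objects, or FALSE if empty.
     */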
public function GetAdminModuleActions($MID = NULL)
{
$this->db->select('AID, MID, Action');
$this->db->from('admin_module_action');
if($MID != NULL)
$this->db->where('MID', $MID);
$query = $this->db->get();
if($query->num_rows())
return $query->result();
else
return FALSE;
}
}
?> | dernst91/deCMS | application/models/Admin_GeneralModel.php | PHP | mit | 895 |
#ifndef SYMTAB_H
#define SYMTAB_H
#include "symbol.h"
/* Initialize the symbol table. */
void symtab_init();
/* Enter a new, nested scope. */
void push_scope();
/* Leave the innermost scope. */
void pop_scope();
/* Bind a name in the current scope and return its symbol. */
symbol *bind_symbol(char *name);
/* Look up a name in the currently visible scopes. */
symbol *lookup_symbol(char *name);
/* Print the symbol table (debugging aid). */
void print_symtab();
#endif
| rayiner/ccpoc | symtab.h | C | mit | 213 |
RSpec.describe("executables", skip_db_cleaner: true) do
include SharedSpecSetup
before do
    # Migrations don't work if we are still connected to the db
ActiveRecord::Base.remove_connection
end
it "extracts the schema" do
output = `bin/extract #{config_filename} production #{schema_filename} 2>&1`
expect(output).to match(/extracted to/)
expect(output).to match(/#{schema_filename}/)
end
it "transfers the schema" do
output = `bin/transfer-schema #{config_filename} production test config/include_tables.txt 2>&1`
expect(output).to match(/transferred schema from production to test/)
end
end
| ifad/sybase-schema-extractor | spec/bin_spec.rb | Ruby | mit | 636 |
## S3proxy - serve S3 files simply
S3proxy is a simple flask-based REST web application which can expose files (keys) stored in the AWS Simple Storage Service (S3) via a simple REST api.
### What does this do?
S3proxy takes a set of AWS credentials and an S3 bucket name and provides GET and HEAD endpoints on the files within the bucket. It uses the [boto][boto] library for internal access to S3. For example, if your bucket has the following file:
s3://mybucket/examples/path/to/myfile.txt
then running S3proxy on a localhost server (port 5000) would enable you to read (GET) this file at:
http://localhost:5000/files/examples/path/to/myfile.txt
Support exists in S3proxy for the `byte-range` header in a GET request. This means that the API can provide arbitrary parts of S3 files if requested/supported by the application making the GET request.
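For example, a request for just the first kilobyte of the file above might look like this (a sketch -- the exact URL depends on your host, port, and bucket layout):
```sh
curl -H "Range: bytes=0-1023" http://localhost:5000/files/examples/path/to/myfile.txt
```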
### Why do this?
S3proxy simplifies access to private S3 objects. While S3 already provides [a complete REST API][s3_api], this API requires signed authentication headers or parameters that are not always obtainable within existing applications (see below), or overly complex for simple development/debugging tasks.
In fact, however, S3proxy was specifically designed to provide a compatibility layer for viewing DNA sequencing data (`.bam` files) using [IGV][igv]. While IGV already includes an interface for reading bam files from an HTTP endpoint, it does not support creating signed requests as required by the AWS S3 API (IGV does support HTTP Basic Authentication, a feature that I would like to include in S3proxy in the near future). Though it is in principle possible to provide a signed AWS-compatible URL to IGV, IGV will still not be able to create its own signed URLs, which are necessary for accessing `.bai` index files, usually located in the same directory as the `.bam` file. Using S3proxy you can expose the S3 objects via a simplified HTTP API which IGV can understand and access directly.
This project is in many ways similar to [S3Auth][s3auth], a hosted service which provides a much more complete API to a private S3 bucket. I wrote S3proxy as a faster, simpler solution-- and because S3Auth requires a domain name and access to the `CNAME` record in order to function. If you want a more complete API (read: more than just GET/HEAD at the moment) should check them out!
### Features
- Serves S3 file objects via standard GET request, optionally providing only a part of a file using the `byte-range` header.
- Easy to configure via a the `config.yaml` file-- S3 keys and bucket name is all you need!
- Limited support for simple url-rewriting where necessary.
- Uses the werkzeug [`SimpleCache` module][simplecache] to cache S3 object identifiers (but not data) in order to reduce latency and lookup times.
### Usage
#### Requirements
To run S3proxy, you will need:
- [Flask][flask]
- [boto][boto]
- [PyYAML][pyyaml]
- An Amazon AWS account and keys with appropriate S3 access
#### Installation/Configuration
At the moment, there is no installation. Simply put your AWS keys and bucket name into the config.yaml file:
```yaml
AWS_ACCESS_KEY_ID: ''
AWS_SECRET_ACCESS_KEY: ''
bucket_name: ''
```
You may also optionally specify a number of "rewrite" rules. These are simple pairs of a regular expression and a replacement string which can be used to rewrite file paths internally (note that the API does not currently send a REST 3XX redirect header). The example in the config.yaml file reads:
```yaml
rewrite_rules:
bai_rule:
from: ".bam.bai$"
to: ".bai"
```
... which will match all URLs/filenames ending with ".bam.bai" and rewrite that suffix to ".bai".
If you do not wish to use any rewrite_rules, simply leave this commented out.
#### Running S3proxy:
Once you have filled out the config.yaml file, you can test out S3proxy simply by running on the command line:
python app.py
*Note*: Running using the built-in flask server is not recommended for anything other than debugging. Refer to [these deployment options][wsgi_server] for instructions on how to set up a Flask application in a WSGI framework.
#### Options
If you wish to see more debug-level output (headers, etc.), use the `--debug` option. You may also specify a yaml configuration file to load using the `--config` parameter.
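For example (a sketch -- both flags are optional and the config filename is just an illustration):
```sh
python app.py --debug --config config.yaml
```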
### Important considerations and caveats
S3proxy should not be used on production-level or open/exposed servers! There is currently no security provided by S3proxy (though I may add basic HTTP authentication later). Once given the AWS credentials, S3proxy will serve any path available to it. And, although I restrict requests to GET and HEAD only, I cannot currently guarantee that a determined person would not be able to execute a PUT/UPDATE/DELETE request using this service. Finally, I highly recommend you create a separate [IAM role][iam_roles] in AWS with limited access and permissions to S3 only for use with S3proxy.
### Future development
- Implement HTTP Basic Authentication to provide some level of security.
- Implement other error codes and basic REST responses.
- Add ability to log to a file and specify a `--log-level` (use the Python logging module)
[boto]: http://boto.readthedocs.org/
[flask]: http://flask.pocoo.org/
[pyyaml]: http://pyyaml.org/wiki/PyYAML
[s3_api]: http://docs.aws.amazon.com/AmazonS3/latest/API/APIRest.html
[igv]: http://www.broadinstitute.org/igv/home
[wsgi_server]: http://flask.pocoo.org/docs/deploying/
[iam_roles]: http://aws.amazon.com/iam/
[simplecache]: http://flask.pocoo.org/docs/patterns/caching/
[s3auth]: http://www.s3auth.com/
| nkrumm/s3proxy | README.md | Markdown | mit | 5,632 |
Title: Survey data
Template: survey
Slug: survey/data
Github: True
The code to clean and process the survey data is available in the [GitHub repository](https://github.com/andrewheiss/From-the-Trenches-Anti-TIP-NGOs-and-US) for Andrew Heiss and Judith G. Kelley. 2016. "From the Trenches: A Global Survey of Anti-TIP NGOs and their Views of US Efforts." *Journal of Human Trafficking*. [doi:10.1080/23322705.2016.1199241](https://dx.doi.org/10.1080/23322705.2016.1199241)
<div class="row">
<div class="col-xs-12 col-sm-10 col-md-8 col-sm-offset-1 col-md-offset-2">
<div class="github-widget" data-repo="andrewheiss/From-the-Trenches-Anti-TIP-NGOs-and-US"></div>
</div>
</div>
The free response answers for respondents who requested anonymity have been
redacted.
- CSV file: [`responses_full_anonymized.csv`](/files/data/responses_full_anonymized.csv)
- R file: [`responses_full_anonymized.rds`](/files/data/responses_full_anonymized.rds)
| andrewheiss/scorecarddiplomacy-org | content/pages/survey-stuff/data.md | Markdown | mit | 952 |
// @flow
import { StyleSheet } from 'react-native';
import { colors } from '../../themes';
const styles = StyleSheet.create({
divider: {
height: 1,
marginHorizontal: 0,
backgroundColor: colors.darkDivider,
},
});
export default styles;
| Dennitz/Timetable | src/components/styles/HorizontalDividerList.styles.js | JavaScript | mit | 254 |
## Testing testing, 1, 2, 3
Let's see how *[this](https://github.com/imathis/jekyll-markdown-block)* does.
puts 'awesome' unless not_awesome?
- One item
- Two item
- Three Item
- Four!
And… scene!
| imathis/jekyll-markdown-block | test/source/_includes/test.md | Markdown | mit | 208 |
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<!--NewPage-->
<HTML>
<HEAD>
<!-- Generated by javadoc (build 1.6.0_27) on Wed Nov 21 16:03:26 EST 2012 -->
<TITLE>
ResourceXmlPropertyEmitterInterface
</TITLE>
<META NAME="date" CONTENT="2012-11-21">
<LINK REL ="stylesheet" TYPE="text/css" HREF="../../../../stylesheet.css" TITLE="Style">
<SCRIPT type="text/javascript">
function windowTitle()
{
if (location.href.indexOf('is-external=true') == -1) {
parent.document.title="ResourceXmlPropertyEmitterInterface";
}
}
</SCRIPT>
<NOSCRIPT>
</NOSCRIPT>
</HEAD>
<BODY BGCOLOR="white" onload="windowTitle();">
<HR>
<!-- ========= START OF TOP NAVBAR ======= -->
<A NAME="navbar_top"><!-- --></A>
<A HREF="#skip-navbar_top" title="Skip navigation links"></A>
<TABLE BORDER="0" WIDTH="100%" CELLPADDING="1" CELLSPACING="0" SUMMARY="">
<TR>
<TD COLSPAN=2 BGCOLOR="#EEEEFF" CLASS="NavBarCell1">
<A NAME="navbar_top_firstrow"><!-- --></A>
<TABLE BORDER="0" CELLPADDING="0" CELLSPACING="3" SUMMARY="">
<TR ALIGN="center" VALIGN="top">
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../overview-summary.html"><FONT CLASS="NavBarFont1"><B>Overview</B></FONT></A> </TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="package-summary.html"><FONT CLASS="NavBarFont1"><B>Package</B></FONT></A> </TD>
<TD BGCOLOR="#FFFFFF" CLASS="NavBarCell1Rev"> <FONT CLASS="NavBarFont1Rev"><B>Class</B></FONT> </TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="class-use/ResourceXmlPropertyEmitterInterface.html"><FONT CLASS="NavBarFont1"><B>Use</B></FONT></A> </TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="package-tree.html"><FONT CLASS="NavBarFont1"><B>Tree</B></FONT></A> </TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../deprecated-list.html"><FONT CLASS="NavBarFont1"><B>Deprecated</B></FONT></A> </TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../index-files/index-1.html"><FONT CLASS="NavBarFont1"><B>Index</B></FONT></A> </TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../help-doc.html"><FONT CLASS="NavBarFont1"><B>Help</B></FONT></A> </TD>
</TR>
</TABLE>
</TD>
<TD ALIGN="right" VALIGN="top" ROWSPAN=3><EM>
</EM>
</TD>
</TR>
<TR>
<TD BGCOLOR="white" CLASS="NavBarCell2"><FONT SIZE="-2">
<A HREF="../../../../org/pentaho/di/resource/ResourceUtil.html" title="class in org.pentaho.di.resource"><B>PREV CLASS</B></A>
<A HREF="../../../../org/pentaho/di/resource/SequenceResourceNaming.html" title="class in org.pentaho.di.resource"><B>NEXT CLASS</B></A></FONT></TD>
<TD BGCOLOR="white" CLASS="NavBarCell2"><FONT SIZE="-2">
<A HREF="../../../../index.html?org/pentaho/di/resource/ResourceXmlPropertyEmitterInterface.html" target="_top"><B>FRAMES</B></A>
<A HREF="ResourceXmlPropertyEmitterInterface.html" target="_top"><B>NO FRAMES</B></A>
<SCRIPT type="text/javascript">
<!--
if(window==top) {
document.writeln('<A HREF="../../../../allclasses-noframe.html"><B>All Classes</B></A>');
}
//-->
</SCRIPT>
<NOSCRIPT>
<A HREF="../../../../allclasses-noframe.html"><B>All Classes</B></A>
</NOSCRIPT>
</FONT></TD>
</TR>
<TR>
<TD VALIGN="top" CLASS="NavBarCell3"><FONT SIZE="-2">
SUMMARY: NESTED | FIELD | CONSTR | <A HREF="#method_summary">METHOD</A></FONT></TD>
<TD VALIGN="top" CLASS="NavBarCell3"><FONT SIZE="-2">
DETAIL: FIELD | CONSTR | <A HREF="#method_detail">METHOD</A></FONT></TD>
</TR>
</TABLE>
<A NAME="skip-navbar_top"></A>
<!-- ========= END OF TOP NAVBAR ========= -->
<HR>
<!-- ======== START OF CLASS DATA ======== -->
<H2>
<FONT SIZE="-1">
org.pentaho.di.resource</FONT>
<BR>
Interface ResourceXmlPropertyEmitterInterface</H2>
<HR>
<DL>
<DT><PRE>public interface <B>ResourceXmlPropertyEmitterInterface</B></DL>
</PRE>
<P>
<HR>
<P>
<!-- ========== METHOD SUMMARY =========== -->
<A NAME="method_summary"><!-- --></A>
<TABLE BORDER="1" WIDTH="100%" CELLPADDING="3" CELLSPACING="0" SUMMARY="">
<TR BGCOLOR="#CCCCFF" CLASS="TableHeadingColor">
<TH ALIGN="left" COLSPAN="2"><FONT SIZE="+2">
<B>Method Summary</B></FONT></TH>
</TR>
<TR BGCOLOR="white" CLASS="TableRowColor">
<TD ALIGN="right" VALIGN="top" WIDTH="1%"><FONT SIZE="-1">
<CODE> <A HREF="http://java.sun.com/j2se/1.5.0/docs/api/java/lang/String.html?is-external=true" title="class or interface in java.lang">String</A></CODE></FONT></TD>
<TD><CODE><B><A HREF="../../../../org/pentaho/di/resource/ResourceXmlPropertyEmitterInterface.html#getExtraResourceProperties(org.pentaho.di.resource.ResourceHolderInterface, int)">getExtraResourceProperties</A></B>(<A HREF="../../../../org/pentaho/di/resource/ResourceHolderInterface.html" title="interface in org.pentaho.di.resource">ResourceHolderInterface</A> ref,
int indention)</CODE>
<BR>
Allows injection of additional relevant properties in the
to-xml of the Resource Reference.</TD>
</TR>
</TABLE>
<P>
<!-- ============ METHOD DETAIL ========== -->
<A NAME="method_detail"><!-- --></A>
<TABLE BORDER="1" WIDTH="100%" CELLPADDING="3" CELLSPACING="0" SUMMARY="">
<TR BGCOLOR="#CCCCFF" CLASS="TableHeadingColor">
<TH ALIGN="left" COLSPAN="1"><FONT SIZE="+2">
<B>Method Detail</B></FONT></TH>
</TR>
</TABLE>
<A NAME="getExtraResourceProperties(org.pentaho.di.resource.ResourceHolderInterface, int)"><!-- --></A><H3>
getExtraResourceProperties</H3>
<PRE>
<A HREF="http://java.sun.com/j2se/1.5.0/docs/api/java/lang/String.html?is-external=true" title="class or interface in java.lang">String</A> <B>getExtraResourceProperties</B>(<A HREF="../../../../org/pentaho/di/resource/ResourceHolderInterface.html" title="interface in org.pentaho.di.resource">ResourceHolderInterface</A> ref,
int indention)</PRE>
<DL>
<DD>Allows injection of additional relevant properties in the
to-xml of the Resource Reference.
<P>
<DD><DL>
<DT><B>Parameters:</B><DD><CODE>ref</CODE> - The Resource Reference Holder (a step, or a job entry)<DD><CODE>indention</CODE> - If -1, then no indenting, otherwise, it's the indent level to indent the XML strings
<DT><B>Returns:</B><DD>String of injected XML</DL>
</DD>
</DL>
<!-- ========= END OF CLASS DATA ========= -->
<HR>
<!-- ======= START OF BOTTOM NAVBAR ====== -->
<A NAME="navbar_bottom"><!-- --></A>
<A HREF="#skip-navbar_bottom" title="Skip navigation links"></A>
<TABLE BORDER="0" WIDTH="100%" CELLPADDING="1" CELLSPACING="0" SUMMARY="">
<TR>
<TD COLSPAN=2 BGCOLOR="#EEEEFF" CLASS="NavBarCell1">
<A NAME="navbar_bottom_firstrow"><!-- --></A>
<TABLE BORDER="0" CELLPADDING="0" CELLSPACING="3" SUMMARY="">
<TR ALIGN="center" VALIGN="top">
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../overview-summary.html"><FONT CLASS="NavBarFont1"><B>Overview</B></FONT></A> </TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="package-summary.html"><FONT CLASS="NavBarFont1"><B>Package</B></FONT></A> </TD>
<TD BGCOLOR="#FFFFFF" CLASS="NavBarCell1Rev"> <FONT CLASS="NavBarFont1Rev"><B>Class</B></FONT> </TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="class-use/ResourceXmlPropertyEmitterInterface.html"><FONT CLASS="NavBarFont1"><B>Use</B></FONT></A> </TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="package-tree.html"><FONT CLASS="NavBarFont1"><B>Tree</B></FONT></A> </TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../deprecated-list.html"><FONT CLASS="NavBarFont1"><B>Deprecated</B></FONT></A> </TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../index-files/index-1.html"><FONT CLASS="NavBarFont1"><B>Index</B></FONT></A> </TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../help-doc.html"><FONT CLASS="NavBarFont1"><B>Help</B></FONT></A> </TD>
</TR>
</TABLE>
</TD>
<TD ALIGN="right" VALIGN="top" ROWSPAN=3><EM>
</EM>
</TD>
</TR>
<TR>
<TD BGCOLOR="white" CLASS="NavBarCell2"><FONT SIZE="-2">
<A HREF="../../../../org/pentaho/di/resource/ResourceUtil.html" title="class in org.pentaho.di.resource"><B>PREV CLASS</B></A>
<A HREF="../../../../org/pentaho/di/resource/SequenceResourceNaming.html" title="class in org.pentaho.di.resource"><B>NEXT CLASS</B></A></FONT></TD>
<TD BGCOLOR="white" CLASS="NavBarCell2"><FONT SIZE="-2">
<A HREF="../../../../index.html?org/pentaho/di/resource/ResourceXmlPropertyEmitterInterface.html" target="_top"><B>FRAMES</B></A>
<A HREF="ResourceXmlPropertyEmitterInterface.html" target="_top"><B>NO FRAMES</B></A>
<SCRIPT type="text/javascript">
<!--
if(window==top) {
document.writeln('<A HREF="../../../../allclasses-noframe.html"><B>All Classes</B></A>');
}
//-->
</SCRIPT>
<NOSCRIPT>
<A HREF="../../../../allclasses-noframe.html"><B>All Classes</B></A>
</NOSCRIPT>
</FONT></TD>
</TR>
<TR>
<TD VALIGN="top" CLASS="NavBarCell3"><FONT SIZE="-2">
SUMMARY: NESTED | FIELD | CONSTR | <A HREF="#method_summary">METHOD</A></FONT></TD>
<TD VALIGN="top" CLASS="NavBarCell3"><FONT SIZE="-2">
DETAIL: FIELD | CONSTR | <A HREF="#method_detail">METHOD</A></FONT></TD>
</TR>
</TABLE>
<A NAME="skip-navbar_bottom"></A>
<!-- ======== END OF BOTTOM NAVBAR ======= -->
<HR>
</BODY>
</HTML>
| ColFusion/PentahoKettle | kettle-data-integration/docs/api/org/pentaho/di/resource/ResourceXmlPropertyEmitterInterface.html | HTML | mit | 9,567 |
require 'ffi'
module ProcessShared
module Posix
module Errno
extend FFI::Library
ffi_lib FFI::Library::LIBC
attach_variable :errno, :int
# Replace methods in +syms+ with error checking wrappers that
# invoke the original method and raise a {SystemCallError} with
# the current errno if the return value is an error.
#
# Errors are detected if the block returns true when called with
# the original method's return value.
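      #
      # Example (illustrative; assumes the C function was attached
      # elsewhere with attach_function):
      #
      #   attach_function :shm_unlink, [:string], :int
      #   error_check :shm_unlink   # shm_unlink now raises on a -1 return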
def error_check(*syms, &is_err)
unless block_given?
is_err = lambda { |v| (v == -1) }
end
syms.each do |sym|
method = self.method(sym)
new_method_body = proc do |*args|
ret = method.call(*args)
if is_err.call(ret)
raise SystemCallError.new("error in #{sym}", Errno.errno)
else
ret
end
end
define_singleton_method(sym, &new_method_body)
define_method(sym, &new_method_body)
end
end
end
end
end
| pmahoney/process_shared | lib/process_shared/posix/errno.rb | Ruby | mit | 1,066 |
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<!-- UASR: Unified Approach to Speech Synthesis and Recognition
< - Documentation home page
<
< AUTHOR : Matthias Wolff
< PACKAGE: n/a
<
< Copyright 2013 UASR contributors (see COPYRIGHT file)
< - Chair of System Theory and Speech Technology, TU Dresden
< - Chair of Communications Engineering, BTU Cottbus
<
< This file is part of UASR.
<
< UASR is free software: you can redistribute it and/or modify it under the
< terms of the GNU Lesser General Public License as published by the Free
< Software Foundation, either version 3 of the License, or (at your option)
< any later version.
<
< UASR is distributed in the hope that it will be useful, but WITHOUT ANY
< WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
< FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for
< more details.
<
< You should have received a copy of the GNU Lesser General Public License
< along with UASR. If not, see [http://www.gnu.org/licenses/].
-->
<html>
<head>
<link rel=stylesheet type="text/css" href="toc.css">
</head>
<script type="text/javascript">
if (top==self)
top.location = "index.html";
</script>
<script type="text/javascript" src="default.js"></script>
<body onload="void(__tocInit('tocRoot'));">
<h2 class="CONT">Manual</h2>
<noscript><div class="noscript">
JavaScript is not enabled.
</div></noscript>
<div class="tocRoot" id="tocRoot">
<div class="tocLeaf"><a class="toc" href="home.html" target="contFrame" title="Database DocumentationHome Page">Home</a></div>
<div class="tocNode" id="tocPackageDocumentation">
<a class="toc" href="javascript:__tocToggle('tocPackageDocumentation');">[−]</a> <img src="resources/book_obj.gif" class="tocIcon"> Scripts
<!--{{ TOC -->
<div class="tocLeaf"><a href="automatic/vau.itp.html" target="contFrame" title="Voice authentication database plug-in. "
><img src="resources/blank_stc.gif" class="tocIcon"> <img src="resources/lib_obj.gif" class="tocIcon"> vau.itp</a></div>
<!--}} TOC -->
</div>
</div>
</body>
</html>
| matthias-wolff/C-VAU | manual/toc.html | HTML | mit | 2,328 |
'use strict';
const _ = require('lodash');
const co = require('co');
const Promise = require('bluebird');
const AWS = require('aws-sdk');
AWS.config.region = 'us-east-1';
const cloudwatch = Promise.promisifyAll(new AWS.CloudWatch());
const Lambda = new AWS.Lambda();
const START_TIME = new Date('2017-06-07T01:00:00.000Z');
const DAYS = 2;
const ONE_DAY = 24 * 60 * 60 * 1000;
let addDays = (startDt, n) => new Date(startDt.getTime() + ONE_DAY * n);
let getFuncStats = co.wrap(function* (funcName) {
let getStats = co.wrap(function* (startTime, endTime) {
let req = {
MetricName: 'Duration',
Namespace: 'AWS/Lambda',
Period: 60,
Dimensions: [ { Name: 'FunctionName', Value: funcName } ],
Statistics: [ 'Maximum' ],
Unit: 'Milliseconds',
StartTime: startTime,
EndTime: endTime
};
let resp = yield cloudwatch.getMetricStatisticsAsync(req);
return resp.Datapoints.map(dp => {
return {
timestamp: dp.Timestamp,
value: dp.Maximum
};
});
});
let stats = [];
for (let i = 0; i < DAYS; i++) {
// CloudWatch only allows us to query 1440 data points per request, which
// at 1 min period is 24 hours
let startTime = addDays(START_TIME, i);
let endTime = addDays(startTime, 1);
let oneDayStats = yield getStats(startTime, endTime);
stats = stats.concat(oneDayStats);
}
return _.sortBy(stats, s => s.timestamp);
});
let listFunctions = co.wrap(function* (marker, acc) {
acc = acc || [];
let resp = yield Lambda.listFunctions({ Marker: marker, MaxItems: 100 }).promise();
let functions = resp.Functions
.map(f => f.FunctionName)
.filter(fn => fn.includes("aws-coldstart") && !fn.endsWith("run"));
acc = acc.concat(functions);
if (resp.NextMarker) {
return yield listFunctions(resp.NextMarker, acc);
} else {
return acc;
}
});
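
// Each line printed below is plain CSV: functionName,timestamp,maxDurationMs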
listFunctions()
.then(co.wrap(function* (funcs) {
for (let func of funcs) {
let stats = yield getFuncStats(func);
stats.forEach(stat => console.log(`${func},${stat.timestamp},${stat.value}`));
}
})); | theburningmonk/lambda-coldstart-comparison | download-stats.js | JavaScript | mit | 2,153 |
from __future__ import absolute_import, division, print_function, unicode_literals
import string
try:
from urllib.parse import urlparse, urlencode, urljoin, parse_qsl, urlunparse
from urllib.request import urlopen, Request
from urllib.error import HTTPError
except ImportError:
from urlparse import urlparse, urljoin, urlunparse, parse_qsl
from urllib import urlencode
from urllib2 import urlopen, Request, HTTPError
from random import SystemRandom
try:
UNICODE_ASCII_CHARACTERS = (string.ascii_letters +
string.digits)
except AttributeError:
UNICODE_ASCII_CHARACTERS = (string.ascii_letters.decode('ascii') +
string.digits.decode('ascii'))
def random_ascii_string(length):
random = SystemRandom()
return ''.join([random.choice(UNICODE_ASCII_CHARACTERS) for x in range(length)])
def url_query_params(url):
"""Return query parameters as a dict from the specified URL.
:param url: URL.
:type url: str
:rtype: dict
"""
return dict(parse_qsl(urlparse(url).query, True))
def url_dequery(url):
"""Return a URL with the query component removed.
:param url: URL to dequery.
:type url: str
:rtype: str
"""
url = urlparse(url)
return urlunparse((url.scheme,
url.netloc,
url.path,
url.params,
'',
url.fragment))
def build_url(base, additional_params=None):
"""Construct a URL based off of base containing all parameters in
the query portion of base plus any additional parameters.
:param base: Base URL
:type base: str
    :param additional_params: Additional query parameters to include.
:type additional_params: dict
:rtype: str
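
    Example (illustrative; before Python 3.7 dict ordering may differ):

        build_url('https://api.example.com/auth?scope=email', {'state': 'xyz'})
        # -> 'https://api.example.com/auth?scope=email&state=xyz'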
"""
url = urlparse(base)
query_params = {}
query_params.update(parse_qsl(url.query, True))
if additional_params is not None:
query_params.update(additional_params)
for k, v in additional_params.items():
if v is None:
query_params.pop(k)
return urlunparse((url.scheme,
url.netloc,
url.path,
url.params,
urlencode(query_params),
url.fragment))
| VulcanTechnologies/oauth2lib | oauth2lib/utils.py | Python | mit | 2,411 |
#!/bin/bash
# data in Empar_paper/data/simul_balanc4GenNonhSSM
#length1000_b100.tar length1000_b150.tar length1000_b200.tar
#length1000_b100_num98.fa
MOD=ssm
ITER=2 # highest data set index (the loop below runs ITER+1 data sets, numbered 0..ITER)
bl=100
#prep output files
OUT_lik='likel_balanced4_gennonh_'$bl'_'$MOD'_E.txt'
OUT_iter='iter_balanced4_gennonh_'$bl'_'$MOD'_E.txt'
OUT_time='time_balanced4_gennonh_'$bl'_'$MOD'_E.txt'
OUT_nc='neg_cases_balanced4_gennonh_'$bl'_'$MOD'_E.txt'
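# e.g. with bl=100 and MOD=ssm, OUT_lik becomes likel_balanced4_gennonh_100_ssm_E.txt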
[[ -f $OUT_lik ]] && rm -f $OUT_lik
[[ -f $OUT_iter ]] && rm -f $OUT_iter
[[ -f $OUT_time ]] && rm -f $OUT_time
[[ -f $OUT_nc ]] && rm -f $OUT_nc
touch $OUT_lik
touch $OUT_iter
touch $OUT_time
touch $OUT_nc
# run from within the scripts folder
for i in $(seq 0 1 $ITER)
do
#extract a single file from tar
tar -xvf ../data/simul_balanc4GenNonhSSM/length1000_b$bl.tar length1000_b$bl\_num$i.fa
./main ../data/trees/treeE.tree length1000_b$bl\_num$i.fa $MOD > out.txt
cat out.txt | grep Likelihood | cut -d':' -f2 | xargs >> $OUT_lik
cat out.txt | grep Iter | cut -d':' -f2 | xargs >> $OUT_iter
cat out.txt | grep Time | cut -d':' -f2 | xargs >> $OUT_time
cat out.txt | grep "negative branches" | cut -d':' -f2 | xargs >> $OUT_nc
rm out.txt
    # not polluting the folder with single files
rm length1000_b$bl\_num$i.fa
done
mv $OUT_time ../results/ssm/gennonh_data/balanc4GenNonh/.
mv $OUT_lik ../results/ssm/gennonh_data/balanc4GenNonh/.
mv $OUT_iter ../results/ssm/gennonh_data/balanc4GenNonh/.
mv $OUT_nc ../results/ssm/gennonh_data/balanc4GenNonh/.
| Algebraicphylogenetics/Empar_paper | scripts/process_balanced4_gennonh_ssm.sh | Shell | mit | 1,509 |
/*
The MIT License (MIT)
Copyright (c) 2014 Banbury & Play-Em
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
*/
using UnityEngine;
#if UNITY_EDITOR
using UnityEditor;
using System.IO;
#endif
namespace SpritesAndBones.Editor
{
[CustomEditor(typeof(Skin2D))]
public class Skin2DEditor : UnityEditor.Editor
{
private Skin2D skin;
private float baseSelectDistance = 0.1f;
private float changedBaseSelectDistance = 0.1f;
private int selectedIndex = -1;
private Color handleColor = Color.green;
private void OnEnable()
{
skin = (Skin2D)target;
}
public override void OnInspectorGUI()
{
DrawDefaultInspector();
EditorGUILayout.Separator();
if (GUILayout.Button("Toggle Mesh Outline"))
{
Skin2D.showMeshOutline = !Skin2D.showMeshOutline;
}
EditorGUILayout.Separator();
if (skin.GetComponent<SkinnedMeshRenderer>().sharedMesh != null && GUILayout.Button("Save as Prefab"))
{
skin.SaveAsPrefab();
}
EditorGUILayout.Separator();
if (skin.GetComponent<SkinnedMeshRenderer>().sharedMesh != null && GUILayout.Button("Recalculate Bone Weights"))
{
skin.RecalculateBoneWeights();
}
EditorGUILayout.Separator();
handleColor = EditorGUILayout.ColorField("Handle Color", handleColor);
changedBaseSelectDistance = EditorGUILayout.Slider("Handle Size", baseSelectDistance, 0, 1);
if (baseSelectDistance != changedBaseSelectDistance)
{
baseSelectDistance = changedBaseSelectDistance;
EditorUtility.SetDirty(this);
SceneView.RepaintAll();
}
if (skin.GetComponent<SkinnedMeshRenderer>().sharedMesh != null && GUILayout.Button("Create Control Points"))
{
skin.CreateControlPoints(skin.GetComponent<SkinnedMeshRenderer>());
}
if (skin.GetComponent<SkinnedMeshRenderer>().sharedMesh != null && GUILayout.Button("Reset Control Points"))
{
skin.ResetControlPointPositions();
}
if (skin.points != null && skin.controlPoints != null && skin.controlPoints.Length > 0
&& selectedIndex != -1 && GUILayout.Button("Reset Selected Control Point"))
{
if (skin.controlPoints[selectedIndex].originalPosition != skin.GetComponent<MeshFilter>().sharedMesh.vertices[selectedIndex])
{
skin.controlPoints[selectedIndex].originalPosition = skin.GetComponent<MeshFilter>().sharedMesh.vertices[selectedIndex];
}
skin.controlPoints[selectedIndex].ResetPosition();
skin.points.SetPoint(skin.controlPoints[selectedIndex]);
}
if (GUILayout.Button("Remove Control Points"))
{
skin.RemoveControlPoints();
}
EditorGUILayout.Separator();
if (skin.GetComponent<SkinnedMeshRenderer>().sharedMesh != null && GUILayout.Button("Generate Mesh Asset"))
{
#if UNITY_EDITOR
// Check if the Meshes directory exists, if not, create it.
if (!Directory.Exists("Assets/Meshes"))
{
AssetDatabase.CreateFolder("Assets", "Meshes");
AssetDatabase.Refresh();
}
Mesh mesh = new Mesh();
                mesh.name = skin.GetComponent<SkinnedMeshRenderer>().sharedMesh.name.Replace(".SkinnedMesh", ".Mesh");
mesh.vertices = skin.GetComponent<SkinnedMeshRenderer>().sharedMesh.vertices;
mesh.triangles = skin.GetComponent<SkinnedMeshRenderer>().sharedMesh.triangles;
mesh.normals = skin.GetComponent<SkinnedMeshRenderer>().sharedMesh.normals;
mesh.uv = skin.GetComponent<SkinnedMeshRenderer>().sharedMesh.uv;
mesh.uv2 = skin.GetComponent<SkinnedMeshRenderer>().sharedMesh.uv2;
mesh.bounds = skin.GetComponent<SkinnedMeshRenderer>().sharedMesh.bounds;
ScriptableObjectUtility.CreateAsset(mesh, "Meshes/" + skin.gameObject.name + ".Mesh");
#endif
}
if (skin.GetComponent<SkinnedMeshRenderer>().sharedMaterial != null && GUILayout.Button("Generate Material Asset"))
{
#if UNITY_EDITOR
Material material = new Material(skin.GetComponent<SkinnedMeshRenderer>().sharedMaterial);
material.CopyPropertiesFromMaterial(skin.GetComponent<SkinnedMeshRenderer>().sharedMaterial);
skin.GetComponent<SkinnedMeshRenderer>().sharedMaterial = material;
if (!Directory.Exists("Assets/Materials"))
{
AssetDatabase.CreateFolder("Assets", "Materials");
AssetDatabase.Refresh();
}
AssetDatabase.CreateAsset(material, "Assets/Materials/" + material.mainTexture.name + ".mat");
Debug.Log("Created material " + material.mainTexture.name + " for " + skin.gameObject.name);
#endif
}
}
private void OnSceneGUI()
{
if (skin != null && skin.GetComponent<SkinnedMeshRenderer>().sharedMesh != null
&& skin.controlPoints != null && skin.controlPoints.Length > 0 && skin.points != null)
{
Event e = Event.current;
Handles.matrix = skin.transform.localToWorldMatrix;
EditorGUI.BeginChangeCheck();
Ray r = HandleUtility.GUIPointToWorldRay(e.mousePosition);
Vector2 mousePos = r.origin;
float selectDistance = HandleUtility.GetHandleSize(mousePos) * baseSelectDistance;
#region Draw vertex handles
Handles.color = handleColor;
for (int i = 0; i < skin.controlPoints.Length; i++)
{
if (Handles.Button(skin.points.GetPoint(skin.controlPoints[i]), Quaternion.identity, selectDistance, selectDistance, Handles.CircleCap))
{
selectedIndex = i;
}
if (selectedIndex == i)
{
EditorGUI.BeginChangeCheck();
skin.controlPoints[i].position = Handles.DoPositionHandle(skin.points.GetPoint(skin.controlPoints[i]), Quaternion.identity);
if (EditorGUI.EndChangeCheck())
{
skin.points.SetPoint(skin.controlPoints[i]);
Undo.RecordObject(skin, "Changed Control Point");
Undo.RecordObject(skin.points, "Changed Control Point");
EditorUtility.SetDirty(this);
}
}
}
#endregion Draw vertex handles
}
}
}
} | Apelsin/UnitySpritesAndBones | Assets/SpritesAndBones/Scripts/Editor/Skin2DEditor.cs | C# | mit | 8,335 |
package com.thilko.springdoc;
@SuppressWarnings("all")
public class CredentialsCode {
Integer age;
double anotherValue;
public Integer getAge() {
return age;
}
public void setAge(Integer age) {
this.age = age;
}
public double getAnotherValue() {
return anotherValue;
}
public void setAnotherValue(double anotherValue) {
this.anotherValue = anotherValue;
}
}
| thilko/gradle-springdoc-plugin | src/test/java/com/thilko/springdoc/CredentialsCode.java | Java | mit | 435 |
<?php
use yii\helpers\Html;
use yii\grid\GridView;
use yii\widgets\Pjax;
/* @var $this yii\web\View */
/* @var $searchModel yii2learning\chartbuilder\models\DatasourceSearch */
/* @var $dataProvider yii\data\ActiveDataProvider */
$this->title = Yii::t('app', 'Datasources');
$this->params['breadcrumbs'][] = $this->title;
?>
<div class="datasource-index">
<h1><?= Html::encode($this->title) ?></h1>
<?=$this->render('/_menus') ?>
<?php // echo $this->render('_search', ['model' => $searchModel]); ?>
<p>
<?= Html::a('<i class="glyphicon glyphicon-plus"></i> '.Yii::t('app', 'Create Datasource'), ['create'], ['class' => 'btn btn-success']) ?>
</p>
<?php Pjax::begin(); ?> <?= GridView::widget([
'dataProvider' => $dataProvider,
'filterModel' => $searchModel,
'columns' => [
['class' => 'yii\grid\SerialColumn'],
'name',
// 'created_at',
// 'updated_at',
// 'created_by',
'updated_by:dateTime',
[
'class' => 'yii\grid\ActionColumn',
'options'=>['style'=>'width:150px;'],
'buttonOptions'=>['class'=>'btn btn-default'],
'template'=>'<div class="btn-group btn-group-sm text-center" role="group">{view} {update} {delete} </div>',
]
],
]); ?>
<?php Pjax::end(); ?></div>
| Yii2Learning/yii2-chart-builder | views/datasource/index.php | PHP | mit | 1,401 |
module Web::Controllers::Books
class Create
include Web::Action
expose :book
params do
param :book do
param :title, presence: true
param :author, presence: true
end
end
def call(params)
if params.valid?
@book = BookRepository.create(Book.new(params[:book]))
redirect_to routes.books_path
end
end
end
end
| matiasleidemer/lotus-bookshelf | apps/web/controllers/books/create.rb | Ruby | mit | 393 |
;idta.asm sets up all the interrupt entry points
extern default_handler
extern idt_ftoi
;error interrupt entry point: the CPU has already pushed an error code, so we only push the interrupt number
%macro error_interrupt 1
global interrupt_handler_%1
interrupt_handler_%1:
push dword %1
jmp common_handler
%endmacro
;regular interrupt entry point: push a dummy error code (0) followed by the interrupt number
%macro regular_interrupt 1
global interrupt_handler_%1
interrupt_handler_%1:
push dword 0
push dword %1
jmp common_handler
%endmacro
;common handler for all interrupts, saves all necessary stack data and calls our C interrupt handler
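;on entry the stack holds, top down: interrupt number, error code (real or the
;dummy 0), then the EIP/CS/EFLAGS frame the CPU pushed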
common_handler:
push dword ds
push dword es
push dword fs
push dword gs
pusha
call default_handler
popa
pop dword gs
pop dword fs
pop dword es
pop dword ds
add esp, 8
iret
regular_interrupt 0
regular_interrupt 1
regular_interrupt 2
regular_interrupt 3
regular_interrupt 4
regular_interrupt 5
regular_interrupt 6
regular_interrupt 7
error_interrupt 8
regular_interrupt 9
error_interrupt 10
error_interrupt 11
error_interrupt 12
error_interrupt 13
error_interrupt 14
regular_interrupt 15
regular_interrupt 16
error_interrupt 17
%assign i 18
%rep 12
regular_interrupt i
%assign i i+1
%endrep
error_interrupt 30
%assign i 31
%rep 225
regular_interrupt i
%assign i i+1
%endrep
;interrupt setup, adds all of our interrupt handlers to the IDT
global idtsetup
idtsetup:
%assign i 0
%rep 256
push interrupt_handler_%[i]
push i
call idt_ftoi
add esp, 8
%assign i i+1
%endrep
ret
| MalcolmLorber/kernel | src/idta.asm | Assembly | mit | 1,579 |
// @flow
(require('../../lib/git'): any).rebaseRepoMaster = jest.fn();
import {
_clearCustomCacheDir as clearCustomCacheDir,
_setCustomCacheDir as setCustomCacheDir,
} from '../../lib/cacheRepoUtils';
import {copyDir, mkdirp} from '../../lib/fileUtils';
import {parseDirString as parseFlowDirString} from '../../lib/flowVersion';
import {
add as gitAdd,
commit as gitCommit,
init as gitInit,
setLocalConfig as gitConfig,
} from '../../lib/git';
import {fs, path, child_process} from '../../lib/node';
import {getNpmLibDefs} from '../../lib/npm/npmLibDefs';
import {testProject} from '../../lib/TEST_UTILS';
import {
_determineFlowVersion as determineFlowVersion,
_installNpmLibDefs as installNpmLibDefs,
_installNpmLibDef as installNpmLibDef,
run,
} from '../install';
const BASE_FIXTURE_ROOT = path.join(__dirname, '__install-fixtures__');
function _mock(mockFn) {
return ((mockFn: any): JestMockFn<*, *>);
}
async function touchFile(filePath) {
await fs.close(await fs.open(filePath, 'w'));
}
async function writePkgJson(filePath, pkgJson) {
await fs.writeJson(filePath, pkgJson);
}
describe('install (command)', () => {
describe('determineFlowVersion', () => {
it('infers version from path if arg not passed', () => {
return testProject(async ROOT_DIR => {
const ARBITRARY_PATH = path.join(ROOT_DIR, 'some', 'arbitrary', 'path');
await Promise.all([
mkdirp(ARBITRARY_PATH),
touchFile(path.join(ROOT_DIR, '.flowconfig')),
writePkgJson(path.join(ROOT_DIR, 'package.json'), {
name: 'test',
devDependencies: {
'flow-bin': '^0.40.0',
},
}),
]);
const flowVer = await determineFlowVersion(ARBITRARY_PATH);
expect(flowVer).toEqual({
kind: 'specific',
ver: {
major: 0,
minor: 40,
patch: 0,
prerel: null,
},
});
});
});
it('uses explicitly specified version', async () => {
const explicitVer = await determineFlowVersion('/', '0.7.0');
expect(explicitVer).toEqual({
kind: 'specific',
ver: {
major: 0,
minor: 7,
patch: 0,
prerel: null,
},
});
});
it("uses 'v'-prefixed explicitly specified version", async () => {
const explicitVer = await determineFlowVersion('/', 'v0.7.0');
expect(explicitVer).toEqual({
kind: 'specific',
ver: {
major: 0,
minor: 7,
patch: 0,
prerel: null,
},
});
});
});
describe('installNpmLibDefs', () => {
const origConsoleError = console.error;
beforeEach(() => {
(console: any).error = jest.fn();
});
afterEach(() => {
(console: any).error = origConsoleError;
});
it('errors if unable to find a project root (.flowconfig)', () => {
return testProject(async ROOT_DIR => {
const result = await installNpmLibDefs({
cwd: ROOT_DIR,
flowVersion: parseFlowDirString('flow_v0.40.0'),
explicitLibDefs: [],
libdefDir: 'flow-typed',
verbose: false,
overwrite: false,
skip: false,
ignoreDeps: [],
useCacheUntil: 1000 * 60,
});
expect(result).toBe(1);
expect(_mock(console.error).mock.calls).toEqual([
[
'Error: Unable to find a flow project in the current dir or any of ' +
"it's parent dirs!\n" +
'Please run this command from within a Flow project.',
],
]);
});
});
it(
"errors if an explicitly specified libdef arg doesn't match npm " +
'pkgver format',
() => {
return testProject(async ROOT_DIR => {
await touchFile(path.join(ROOT_DIR, '.flowconfig'));
await writePkgJson(path.join(ROOT_DIR, 'package.json'), {
name: 'test',
devDependencies: {
'flow-bin': '^0.40.0',
},
});
const result = await installNpmLibDefs({
cwd: ROOT_DIR,
flowVersion: parseFlowDirString('flow_v0.40.0'),
explicitLibDefs: ['INVALID'],
libdefDir: 'flow-typed',
verbose: false,
overwrite: false,
skip: false,
ignoreDeps: [],
useCacheUntil: 1000 * 60,
});
expect(result).toBe(1);
expect(_mock(console.error).mock.calls).toEqual([
[
'ERROR: Package not found from package.json.\n' +
'Please specify version for the package in the format of `[email protected]`',
],
]);
});
},
);
it('warns if 0 dependencies are found in package.json', () => {
return testProject(async ROOT_DIR => {
await Promise.all([
touchFile(path.join(ROOT_DIR, '.flowconfig')),
writePkgJson(path.join(ROOT_DIR, 'package.json'), {
name: 'test',
}),
]);
const result = await installNpmLibDefs({
cwd: ROOT_DIR,
flowVersion: parseFlowDirString('flow_v0.40.0'),
explicitLibDefs: [],
libdefDir: 'flow-typed',
verbose: false,
overwrite: false,
skip: false,
ignoreDeps: [],
useCacheUntil: 1000 * 60,
});
expect(result).toBe(0);
expect(_mock(console.error).mock.calls).toEqual([
["No dependencies were found in this project's package.json!"],
]);
});
});
});
describe('installNpmLibDef', () => {
const FIXTURE_ROOT = path.join(BASE_FIXTURE_ROOT, 'installNpmLibDef');
const FIXTURE_FAKE_CACHE_REPO_DIR = path.join(
FIXTURE_ROOT,
'fakeCacheRepo',
);
const origConsoleLog = console.log;
beforeEach(() => {
(console: any).log = jest.fn();
});
afterEach(() => {
(console: any).log = origConsoleLog;
});
it('installs scoped libdefs within a scoped directory', () => {
return testProject(async ROOT_DIR => {
const FAKE_CACHE_DIR = path.join(ROOT_DIR, 'fakeCache');
const FAKE_CACHE_REPO_DIR = path.join(FAKE_CACHE_DIR, 'repo');
const FLOWPROJ_DIR = path.join(ROOT_DIR, 'flowProj');
const FLOWTYPED_DIR = path.join(FLOWPROJ_DIR, 'flow-typed', 'npm');
await Promise.all([mkdirp(FAKE_CACHE_REPO_DIR), mkdirp(FLOWTYPED_DIR)]);
await Promise.all([
copyDir(FIXTURE_FAKE_CACHE_REPO_DIR, FAKE_CACHE_REPO_DIR),
touchFile(path.join(FLOWPROJ_DIR, '.flowconfig')),
writePkgJson(path.join(FLOWPROJ_DIR, 'package.json'), {
name: 'test',
devDependencies: {
'flow-bin': '^0.40.0',
},
}),
]);
        await gitInit(FAKE_CACHE_REPO_DIR);
        await gitAdd(FAKE_CACHE_REPO_DIR, 'definitions');
await gitCommit(FAKE_CACHE_REPO_DIR, 'FIRST');
setCustomCacheDir(FAKE_CACHE_DIR);
const availableLibDefs = await getNpmLibDefs(
path.join(FAKE_CACHE_REPO_DIR, 'definitions'),
);
await installNpmLibDef(availableLibDefs[0], FLOWTYPED_DIR, false);
});
});
});
describe('end-to-end tests', () => {
const FIXTURE_ROOT = path.join(BASE_FIXTURE_ROOT, 'end-to-end');
const FIXTURE_FAKE_CACHE_REPO_DIR = path.join(
FIXTURE_ROOT,
'fakeCacheRepo',
);
const origConsoleLog = console.log;
const origConsoleError = console.error;
beforeEach(() => {
(console: any).log = jest.fn();
(console: any).error = jest.fn();
});
afterEach(() => {
(console: any).log = origConsoleLog;
(console: any).error = origConsoleError;
});
async function fakeProjectEnv(runTest) {
return await testProject(async ROOT_DIR => {
const FAKE_CACHE_DIR = path.join(ROOT_DIR, 'fakeCache');
const FAKE_CACHE_REPO_DIR = path.join(FAKE_CACHE_DIR, 'repo');
const FLOWPROJ_DIR = path.join(ROOT_DIR, 'flowProj');
const FLOWTYPED_DIR = path.join(FLOWPROJ_DIR, 'flow-typed', 'npm');
await Promise.all([mkdirp(FAKE_CACHE_REPO_DIR), mkdirp(FLOWTYPED_DIR)]);
await copyDir(FIXTURE_FAKE_CACHE_REPO_DIR, FAKE_CACHE_REPO_DIR);
      await gitInit(FAKE_CACHE_REPO_DIR);
      await Promise.all([
        gitConfig(FAKE_CACHE_REPO_DIR, 'user.name', 'Test Author'),
        gitConfig(FAKE_CACHE_REPO_DIR, 'user.email', '[email protected]'),
      ]);
await gitAdd(FAKE_CACHE_REPO_DIR, 'definitions');
await gitCommit(FAKE_CACHE_REPO_DIR, 'FIRST');
setCustomCacheDir(FAKE_CACHE_DIR);
const origCWD = process.cwd;
(process: any).cwd = () => FLOWPROJ_DIR;
try {
await runTest(FLOWPROJ_DIR);
} finally {
(process: any).cwd = origCWD;
clearCustomCacheDir();
}
});
}
it('installs available libdefs', () => {
return fakeProjectEnv(async FLOWPROJ_DIR => {
// Create some dependencies
await Promise.all([
touchFile(path.join(FLOWPROJ_DIR, '.flowconfig')),
writePkgJson(path.join(FLOWPROJ_DIR, 'package.json'), {
name: 'test',
devDependencies: {
'flow-bin': '^0.43.0',
},
dependencies: {
foo: '1.2.3',
},
}),
mkdirp(path.join(FLOWPROJ_DIR, 'node_modules', 'foo')),
mkdirp(path.join(FLOWPROJ_DIR, 'node_modules', 'flow-bin')),
]);
// Run the install command
await run({
overwrite: false,
verbose: false,
skip: false,
ignoreDeps: [],
explicitLibDefs: [],
});
// Installs libdefs
expect(
await Promise.all([
fs.exists(
path.join(
FLOWPROJ_DIR,
'flow-typed',
'npm',
'flow-bin_v0.x.x.js',
),
),
fs.exists(
path.join(FLOWPROJ_DIR, 'flow-typed', 'npm', 'foo_v1.x.x.js'),
),
]),
).toEqual([true, true]);
// Signs installed libdefs
const fooLibDefContents = await fs.readFile(
path.join(FLOWPROJ_DIR, 'flow-typed', 'npm', 'foo_v1.x.x.js'),
'utf8',
);
expect(fooLibDefContents).toContain('// flow-typed signature: ');
expect(fooLibDefContents).toContain('// flow-typed version: ');
});
});
it('installs available libdefs using PnP', () => {
return fakeProjectEnv(async FLOWPROJ_DIR => {
// Create some dependencies
await Promise.all([
touchFile(path.join(FLOWPROJ_DIR, '.flowconfig')),
writePkgJson(path.join(FLOWPROJ_DIR, 'package.json'), {
name: 'test',
installConfig: {
pnp: true,
},
devDependencies: {
'flow-bin': '^0.43.0',
},
dependencies: {
// Use local foo for initial install
foo: 'file:./foo',
},
}),
mkdirp(path.join(FLOWPROJ_DIR, 'foo')),
]);
await writePkgJson(path.join(FLOWPROJ_DIR, 'foo/package.json'), {
name: 'foo',
version: '1.2.3',
});
// Yarn install so PnP file resolves to local foo
await child_process.execP('yarn install', {cwd: FLOWPROJ_DIR});
// Overwrite foo dep so it's like we installed from registry instead
writePkgJson(path.join(FLOWPROJ_DIR, 'package.json'), {
name: 'test',
installConfig: {
pnp: true,
},
devDependencies: {
'flow-bin': '^0.43.0',
},
dependencies: {
foo: '1.2.3',
},
});
// Run the install command
await run({
overwrite: false,
verbose: false,
skip: false,
ignoreDeps: [],
explicitLibDefs: [],
});
// Installs libdefs
expect(
await Promise.all([
fs.exists(
path.join(
FLOWPROJ_DIR,
'flow-typed',
'npm',
'flow-bin_v0.x.x.js',
),
),
fs.exists(
path.join(FLOWPROJ_DIR, 'flow-typed', 'npm', 'foo_v1.x.x.js'),
),
]),
).toEqual([true, true]);
// Signs installed libdefs
const fooLibDefRawContents = await fs.readFile(
path.join(FLOWPROJ_DIR, 'flow-typed', 'npm', 'foo_v1.x.x.js'),
);
const fooLibDefContents = fooLibDefRawContents.toString();
expect(fooLibDefContents).toContain('// flow-typed signature: ');
expect(fooLibDefContents).toContain('// flow-typed version: ');
});
});
it('ignores libdefs in dev, bundled, optional or peer dependencies when flagged', () => {
return fakeProjectEnv(async FLOWPROJ_DIR => {
// Create some dependencies
await Promise.all([
touchFile(path.join(FLOWPROJ_DIR, '.flowconfig')),
writePkgJson(path.join(FLOWPROJ_DIR, 'package.json'), {
name: 'test',
devDependencies: {
foo: '1.2.3',
},
peerDependencies: {
'flow-bin': '^0.43.0',
},
optionalDependencies: {
foo: '2.0.0',
},
bundledDependencies: {
bar: '^1.6.9',
},
dependencies: {
foo: '1.2.3',
},
}),
mkdirp(path.join(FLOWPROJ_DIR, 'node_modules', 'foo')),
mkdirp(path.join(FLOWPROJ_DIR, 'node_modules', 'flow-bin')),
mkdirp(path.join(FLOWPROJ_DIR, 'node_modules', 'bar')),
]);
// Run the install command
await run({
overwrite: false,
verbose: false,
skip: false,
ignoreDeps: ['dev', 'optional', 'bundled'],
explicitLibDefs: [],
});
// Installs libdefs
expect(
await Promise.all([
fs.exists(
path.join(
FLOWPROJ_DIR,
'flow-typed',
'npm',
'flow-bin_v0.x.x.js',
),
),
fs.exists(
path.join(FLOWPROJ_DIR, 'flow-typed', 'npm', 'foo_v1.x.x.js'),
),
fs.exists(
path.join(FLOWPROJ_DIR, 'flow-typed', 'npm', 'bar_v1.x.x.js'),
),
]),
).toEqual([true, true, false]);
});
});
it('stubs unavailable libdefs', () => {
return fakeProjectEnv(async FLOWPROJ_DIR => {
// Create some dependencies
await Promise.all([
touchFile(path.join(FLOWPROJ_DIR, '.flowconfig')),
writePkgJson(path.join(FLOWPROJ_DIR, 'package.json'), {
name: 'test',
devDependencies: {
'flow-bin': '^0.43.0',
},
dependencies: {
someUntypedDep: '1.2.3',
},
}),
mkdirp(path.join(FLOWPROJ_DIR, 'node_modules', 'someUntypedDep')),
mkdirp(path.join(FLOWPROJ_DIR, 'node_modules', 'flow-bin')),
]);
// Run the install command
await run({
overwrite: false,
verbose: false,
skip: false,
explicitLibDefs: [],
});
// Installs a stub for someUntypedDep
expect(
await fs.exists(
path.join(
FLOWPROJ_DIR,
'flow-typed',
'npm',
'someUntypedDep_vx.x.x.js',
),
),
).toBe(true);
});
});
it("doesn't stub unavailable libdefs when --skip is passed", () => {
return fakeProjectEnv(async FLOWPROJ_DIR => {
// Create some dependencies
await Promise.all([
touchFile(path.join(FLOWPROJ_DIR, '.flowconfig')),
writePkgJson(path.join(FLOWPROJ_DIR, 'package.json'), {
name: 'test',
devDependencies: {
'flow-bin': '^0.43.0',
},
dependencies: {
someUntypedDep: '1.2.3',
},
}),
mkdirp(path.join(FLOWPROJ_DIR, 'node_modules', 'someUntypedDep')),
mkdirp(path.join(FLOWPROJ_DIR, 'node_modules', 'flow-bin')),
]);
// Run the install command
await run({
overwrite: false,
verbose: false,
skip: true,
explicitLibDefs: [],
});
// Installs a stub for someUntypedDep
expect(
await fs.exists(path.join(FLOWPROJ_DIR, 'flow-typed', 'npm')),
).toBe(true);
});
});
it('overwrites stubs when libdef becomes available (with --overwrite)', () => {
return fakeProjectEnv(async FLOWPROJ_DIR => {
// Create some dependencies
await Promise.all([
touchFile(path.join(FLOWPROJ_DIR, '.flowconfig')),
writePkgJson(path.join(FLOWPROJ_DIR, 'package.json'), {
name: 'test',
devDependencies: {
'flow-bin': '^0.43.0',
},
dependencies: {
foo: '1.2.3',
},
}),
mkdirp(path.join(FLOWPROJ_DIR, 'node_modules', 'foo')),
mkdirp(path.join(FLOWPROJ_DIR, 'node_modules', 'flow-bin')),
]);
await fs.writeFile(
path.join(FLOWPROJ_DIR, 'flow-typed', 'npm', 'foo_vx.x.x.js'),
'',
);
// Run the install command
await run({
overwrite: true,
verbose: false,
skip: false,
explicitLibDefs: [],
});
// Replaces the stub with the real typedef
expect(
await Promise.all([
fs.exists(
path.join(FLOWPROJ_DIR, 'flow-typed', 'npm', 'foo_vx.x.x.js'),
),
fs.exists(
path.join(FLOWPROJ_DIR, 'flow-typed', 'npm', 'foo_v1.x.x.js'),
),
]),
).toEqual([false, true]);
});
});
it("doesn't overwrite tweaked libdefs (without --overwrite)", () => {
return fakeProjectEnv(async FLOWPROJ_DIR => {
// Create some dependencies
await Promise.all([
touchFile(path.join(FLOWPROJ_DIR, '.flowconfig')),
writePkgJson(path.join(FLOWPROJ_DIR, 'package.json'), {
name: 'test',
devDependencies: {
'flow-bin': '^0.43.0',
},
dependencies: {
foo: '1.2.3',
},
}),
mkdirp(path.join(FLOWPROJ_DIR, 'node_modules', 'foo')),
mkdirp(path.join(FLOWPROJ_DIR, 'node_modules', 'flow-bin')),
]);
// Run the install command
await run({
overwrite: false,
verbose: false,
skip: false,
explicitLibDefs: [],
});
const libdefFilePath = path.join(
FLOWPROJ_DIR,
'flow-typed',
'npm',
'foo_v1.x.x.js',
);
// Tweak the libdef for foo
const libdefFileContent =
(await fs.readFile(libdefFilePath, 'utf8')) + '\n// TWEAKED!';
await fs.writeFile(libdefFilePath, libdefFileContent);
// Run install command again
await run({
overwrite: false,
verbose: false,
skip: false,
explicitLibDefs: [],
});
// Verify that the tweaked libdef file wasn't overwritten
expect(await fs.readFile(libdefFilePath, 'utf8')).toBe(
libdefFileContent,
);
});
});
it('overwrites tweaked libdefs when --overwrite is passed', () => {
return fakeProjectEnv(async FLOWPROJ_DIR => {
// Create some dependencies
await Promise.all([
touchFile(path.join(FLOWPROJ_DIR, '.flowconfig')),
writePkgJson(path.join(FLOWPROJ_DIR, 'package.json'), {
name: 'test',
devDependencies: {
'flow-bin': '^0.43.0',
},
dependencies: {
foo: '1.2.3',
},
}),
mkdirp(path.join(FLOWPROJ_DIR, 'node_modules', 'foo')),
mkdirp(path.join(FLOWPROJ_DIR, 'node_modules', 'flow-bin')),
]);
// Run the install command
await run({
overwrite: false,
verbose: false,
skip: false,
explicitLibDefs: [],
});
const libdefFilePath = path.join(
FLOWPROJ_DIR,
'flow-typed',
'npm',
'foo_v1.x.x.js',
);
// Tweak the libdef for foo
const libdefFileContent = await fs.readFile(libdefFilePath, 'utf8');
await fs.writeFile(libdefFilePath, libdefFileContent + '\n// TWEAKED!');
// Run install command again
await run({
overwrite: true,
skip: false,
verbose: false,
explicitLibDefs: [],
});
// Verify that the tweaked libdef file wasn't overwritten
expect(await fs.readFile(libdefFilePath, 'utf8')).toBe(
libdefFileContent,
);
});
});
it('uses flow-bin defined in another package.json', () => {
return fakeProjectEnv(async FLOWPROJ_DIR => {
// Create some dependencies
await Promise.all([
touchFile(path.join(FLOWPROJ_DIR, '.flowconfig')),
writePkgJson(path.join(FLOWPROJ_DIR, 'package.json'), {
name: 'test',
dependencies: {
foo: '1.2.3',
},
}),
mkdirp(path.join(FLOWPROJ_DIR, 'node_modules', 'foo')),
writePkgJson(path.join(FLOWPROJ_DIR, '..', 'package.json'), {
name: 'parent',
devDependencies: {
'flow-bin': '^0.45.0',
},
}),
mkdirp(path.join(FLOWPROJ_DIR, '..', 'node_modules', 'flow-bin')),
]);
// Run the install command
await run({
overwrite: false,
verbose: false,
skip: false,
packageDir: path.join(FLOWPROJ_DIR, '..'),
explicitLibDefs: [],
});
// Installs libdef
expect(
await fs.exists(
path.join(FLOWPROJ_DIR, 'flow-typed', 'npm', 'foo_v1.x.x.js'),
),
).toEqual(true);
});
});
it('uses .flowconfig from specified root directory', () => {
return fakeProjectEnv(async FLOWPROJ_DIR => {
// Create some dependencies
await Promise.all([
mkdirp(path.join(FLOWPROJ_DIR, 'src')),
writePkgJson(path.join(FLOWPROJ_DIR, 'package.json'), {
name: 'test',
devDependencies: {
'flow-bin': '^0.43.0',
},
dependencies: {
foo: '1.2.3',
},
}),
mkdirp(path.join(FLOWPROJ_DIR, 'node_modules', 'foo')),
mkdirp(path.join(FLOWPROJ_DIR, 'node_modules', 'flow-bin')),
]);
await touchFile(path.join(FLOWPROJ_DIR, 'src', '.flowconfig'));
// Run the install command
await run({
overwrite: false,
verbose: false,
skip: false,
rootDir: path.join(FLOWPROJ_DIR, 'src'),
explicitLibDefs: [],
});
// Installs libdef
expect(
await fs.exists(
path.join(
FLOWPROJ_DIR,
'src',
'flow-typed',
'npm',
'foo_v1.x.x.js',
),
),
).toEqual(true);
});
});
});
});
| splodingsocks/FlowTyped | cli/src/commands/__tests__/install-test.js | JavaScript | mit | 23,904 |
<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="refresh" content="0;URL=../../openssl_sys/fn.BN_exp.html">
</head>
<body>
<p>Redirecting to <a href="../../openssl_sys/fn.BN_exp.html">../../openssl_sys/fn.BN_exp.html</a>...</p>
<script>location.replace("../../openssl_sys/fn.BN_exp.html" + location.search + location.hash);</script>
</body>
</html> | malept/guardhaus | main/openssl_sys/bn/fn.BN_exp.html | HTML | mit | 369 |
import React, { Component } from 'react'
import PropTypes from 'prop-types'
import { assign } from 'lodash'
import autoBind from '../utils/autoBind'
const styles = {
'ClosedPanelWrapper': {
height: '40px'
},
'PanelWrapper': {
position: 'relative'
},
'Over': {
border: '1px dashed white',
overflowY: 'hidden'
},
'PanelTitle': {
width: '100%',
height: '40px',
lineHeight: '40px',
backgroundColor: '#000',
color: '#fff',
paddingLeft: '10px',
position: 'relative',
whiteSpace: 'nowrap',
overflowX: 'hidden',
textOverflow: 'ellipsis',
paddingRight: '8px',
cursor: 'pointer',
WebkitUserSelect: 'none',
userSelect: 'none'
},
'Handle': {
cursor: '-webkit-grab',
position: 'absolute',
zIndex: '2',
color: 'white',
right: '10px',
fontSize: '16px',
top: '12px'
},
'OpenPanel': {
position: 'relative',
zIndex: '2',
top: '0',
left: '0',
padding: '7px',
paddingTop: '5px',
maxHeight: '30%',
display: 'block'
},
'ClosedPanel': {
height: '0',
position: 'relative',
zIndex: '2',
top: '-1000px',
left: '0',
overflow: 'hidden',
maxHeight: '0',
display: 'none'
}
}
class Panel extends Component {
constructor() {
super()
this.state = {
dragIndex: null,
overIndex: null,
isOver: false
}
autoBind(this, [
'handleTitleClick', 'handleDragStart', 'handleDragOver', 'handleDragEnter',
'handleDragLeave', 'handleDrop', 'handleDragEnd'
])
}
handleTitleClick() {
const { index, isOpen, openPanel } = this.props
openPanel(isOpen ? -1 : index)
}
handleDragStart(e) {
// e.target.style.opacity = '0.4'; // this / e.target is the source node.
e.dataTransfer.setData('index', e.target.dataset.index)
}
handleDragOver(e) {
if (e.preventDefault) {
e.preventDefault() // Necessary. Allows us to drop.
}
return false
}
handleDragEnter(e) {
const overIndex = e.target.dataset.index
if (e.dataTransfer.getData('index') !== overIndex) {
// e.target.classList.add('Over') // e.target is the current hover target.
this.setState({ isOver: true })
}
}
handleDragLeave() {
this.setState({ isOver: false })
// e.target.classList.remove('Over') // e.target is previous target element.
}
handleDrop(e) {
if (e.stopPropagation) {
e.stopPropagation() // stops the browser from redirecting.
}
const dragIndex = e.dataTransfer.getData('index')
const dropIndex = this.props.index.toString()
if (dragIndex !== dropIndex) {
this.props.reorder(dragIndex, dropIndex)
}
return false
}
handleDragEnd() {
this.setState({ isOver: false, dragIndex: null, overIndex: null })
}
render() {
const { isOpen, orderable } = this.props
const { isOver } = this.state
return (
<div
style={assign({}, styles.PanelWrapper, isOpen ? {} : styles.ClosedPanelWrapper, isOver ? styles.Over : {})}
onDragStart={this.handleDragStart}
onDragEnter={this.handleDragEnter}
onDragOver={this.handleDragOver}
onDragLeave={this.handleDragLeave}
onDrop={this.handleDrop}
onDragEnd={this.handleDragEnd}
>
<div
style={styles.PanelTitle}
onClick={this.handleTitleClick}
draggable={orderable}
data-index={this.props.index}
>
{this.props.header}
{orderable && (<i className="fa fa-th" style={styles.Handle}></i>)}
</div>
{
isOpen &&
(
<div style={isOpen ? styles.OpenPanel : styles.ClosedPanel}>
{this.props.children}
</div>
)
}
</div>
)
}
}
Panel.propTypes = {
children: PropTypes.any,
index: PropTypes.any,
openPanel: PropTypes.func,
isOpen: PropTypes.any,
header: PropTypes.any,
orderable: PropTypes.any,
reorder: PropTypes.func
}
Panel.defaultProps = {
isOpen: false,
header: '',
orderable: false
}
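
// Example usage (illustrative; the handlers shown are app-defined):
//   <Panel index={0} header="Layers" isOpen={openIndex === 0}
//          openPanel={setOpenIndex} orderable reorder={swapPanels}>
//     <LayerList />
//   </Panel>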
export default Panel
| jcgertig/react-struct-editor | src/components/Panel.js | JavaScript | mit | 4,120 |
'use strict';
// src\services\message\hooks\timestamp.js
//
// Use this hook to manipulate incoming or outgoing data.
// For more information on hooks see: http://docs.feathersjs.com/hooks/readme.html
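//
// Example registration (illustrative), e.g. in the message service's hooks:
//   before: { create: [timestamp()] }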
const defaults = {};
module.exports = function(options) {
options = Object.assign({}, defaults, options);
return function(hook) {
const usr = hook.params.user;
const txt = hook.data.text;
hook.data = {
text: txt,
createdBy: usr._id,
createdAt: Date.now()
    };
};
};
| zorqie/bfests | src/services/message/hooks/timestamp.js | JavaScript | mit | 488 |
/*
---------------------------------------------------------------------------
Open Asset Import Library (assimp)
---------------------------------------------------------------------------
Copyright (c) 2006-2021, assimp team
All rights reserved.
Redistribution and use of this software in source and binary forms,
with or without modification, are permitted provided that the following
conditions are met:
* Redistributions of source code must retain the above
copyright notice, this list of conditions and the
following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the
following disclaimer in the documentation and/or other
materials provided with the distribution.
* Neither the name of the assimp team, nor the names of its
contributors may be used to endorse or promote products
derived from this software without specific prior
written permission of the assimp team.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
---------------------------------------------------------------------------
*/
/** @file Bitmap.h
* @brief Defines bitmap format helper for textures
*
* Used for file formats which embed their textures into the model file.
*/
#pragma once
#ifndef AI_BITMAP_H_INC
#define AI_BITMAP_H_INC
#ifdef __GNUC__
# pragma GCC system_header
#endif
#include "defs.h"
#include <stdint.h>
#include <cstddef>
struct aiTexture;
namespace Assimp {
class IOStream;
class ASSIMP_API Bitmap {
protected:
struct Header {
uint16_t type;
uint32_t size;
uint16_t reserved1;
uint16_t reserved2;
uint32_t offset;
// We define the struct size because sizeof(Header) might return a wrong result because of structure padding.
        // Moreover, we must use this ugly and error-prone syntax because Visual Studio supports neither constexpr nor sizeof(name_of_field).
static const std::size_t header_size =
sizeof(uint16_t) + // type
sizeof(uint32_t) + // size
sizeof(uint16_t) + // reserved1
sizeof(uint16_t) + // reserved2
sizeof(uint32_t); // offset
};
struct DIB {
uint32_t size;
int32_t width;
int32_t height;
uint16_t planes;
uint16_t bits_per_pixel;
uint32_t compression;
uint32_t image_size;
int32_t x_resolution;
int32_t y_resolution;
uint32_t nb_colors;
uint32_t nb_important_colors;
// We define the struct size because sizeof(DIB) might return a wrong result because of structure padding.
        // Moreover, we must use this ugly and error-prone syntax because Visual Studio supports neither constexpr nor sizeof(name_of_field).
static const std::size_t dib_size =
sizeof(uint32_t) + // size
sizeof(int32_t) + // width
sizeof(int32_t) + // height
sizeof(uint16_t) + // planes
sizeof(uint16_t) + // bits_per_pixel
sizeof(uint32_t) + // compression
sizeof(uint32_t) + // image_size
sizeof(int32_t) + // x_resolution
sizeof(int32_t) + // y_resolution
sizeof(uint32_t) + // nb_colors
sizeof(uint32_t); // nb_important_colors
};
static const std::size_t mBytesPerPixel = 4;
public:
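    // Typical call (illustrative): Bitmap::Save(texture, stream) writes the
    // embedded texture to the given stream as a BMP file.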
static void Save(aiTexture* texture, IOStream* file);
protected:
static void WriteHeader(Header& header, IOStream* file);
static void WriteDIB(DIB& dib, IOStream* file);
static void WriteData(aiTexture* texture, IOStream* file);
};
}
#endif // AI_BITMAP_H_INC
| andrerogers/Enjin | src/includes/assimp/Bitmap.h | C | mit | 4,360 |
# LeadifyTest | JomoLumina/LeadifyTest | README.md | Markdown | mit | 13 |
package fr.lteconsulting.pomexplorer.commands;
import fr.lteconsulting.pomexplorer.AppFactory;
import fr.lteconsulting.pomexplorer.Client;
import fr.lteconsulting.pomexplorer.Log;
public class HelpCommand
{
@Help( "gives this message" )
public void main( Client client, Log log )
{
log.html( AppFactory.get().commands().help() );
}
}
| ltearno/pom-explorer | pom-explorer/src/main/java/fr/lteconsulting/pomexplorer/commands/HelpCommand.java | Java | mit | 342 |
//
// DORDoneHUD.h
// DORDoneHUD
//
// Created by Pawel Bednorz on 23/09/15.
// Copyright © 2015 Droids on Roids. All rights reserved.
//
#import <UIKit/UIKit.h>
@interface DORDoneHUD : NSObject
+ (void)show:(UIView *)view message:(NSString *)messageText completion:(void (^)(void))completionBlock;
+ (void)show:(UIView *)view message:(NSString *)messageText;
+ (void)show:(UIView *)view;
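
// Usage (illustrative):
//   [DORDoneHUD show:self.view message:@"Done" completion:^{
//       // called when the HUD finishes (assumed semantics)
//   }];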
@end
| DroidsOnRoids/DORDoneHUD | Source/DORDoneHUD.h | C | mit | 401 |
namespace CAAssistant.Models
{
public class ClientFileViewModel
{
public ClientFileViewModel()
{
}
public ClientFileViewModel(ClientFile clientFile)
{
Id = clientFile.Id;
FileNumber = clientFile.FileNumber;
ClientName = clientFile.ClientName;
ClientContactPerson = clientFile.ClientContactPerson;
AssociateReponsible = clientFile.AssociateReponsible;
CaSign = clientFile.CaSign;
DscExpiryDate = clientFile.DscExpiryDate;
FileStatus = clientFile.FileStatus;
}
public string Id { get; set; }
public int FileNumber { get; set; }
public string ClientName { get; set; }
public string ClientContactPerson { get; set; }
public string AssociateReponsible { get; set; }
public string CaSign { get; set; }
public string DscExpiryDate { get; set; }
public string FileStatus { get; set; }
public string UserName { get; set; }
public FileStatusModification InitialFileStatus { get; set; }
}
} | vishipayyallore/CAAssitant | CAAssistant/Models/ClientFileViewModel.cs | C# | mit | 1,131 |
<?php
/****************************************************************************
* todoyu is published under the BSD License:
* http://www.opensource.org/licenses/bsd-license.php
*
* Copyright (c) 2013, snowflake productions GmbH, Switzerland
* All rights reserved.
*
* This script is part of the todoyu project.
* The todoyu project is free software; you can redistribute it and/or modify
* it under the terms of the BSD License.
*
* This script is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the BSD License
* for more details.
*
* This copyright notice MUST APPEAR in all copies of the script.
*****************************************************************************/
/**
* Task asset object
*
* @package Todoyu
* @subpackage Assets
*/
class TodoyuAssetsTaskAsset extends TodoyuAssetsAsset {
/**
* Get task ID
*
* @return Integer
*/
public function getTaskID() {
return $this->getParentID();
}
/**
* Get task object
*
* @return Task
*/
public function getTask() {
return TodoyuProjectTaskManager::getTask($this->getTaskID());
}
}
?> | JoAutomation/todo-for-you | ext/assets/model/TodoyuAssetsTaskAsset.class.php | PHP | mit | 1,210 |
//------------------------------------------------------------------------------
// <auto-generated>
// This code was generated by a tool.
//
// Changes to this file may cause incorrect behavior and will be lost if
// the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------
namespace NewsSystem.Web.Admin
{
public partial class Edit
{
}
}
| MystFan/TelerikAcademy | ASP.NET WebForms/NewsSystem/NewsSystem.Web/Admin/Edit.aspx.designer.cs | C# | mit | 440 |
---
title: Stylesheets and JavaScript - Fabricator
layout: 2-column
section: Documentation
---
{{#markdown}}
# Stylesheets and JavaScript
> How to work with CSS and JS within Fabricator
Fabricator comes with little opinion about how you should architect your Stylesheets and JavaScript. Each use case is different, so it's up to you to define what works best.
Out of the box, you'll find a single `.scss` and `.js` file. These are the entry points for Sass compilation and Webpack respectively. It is recommended that you leverage the module importing features of each preprocessor to compile your toolkit down to a single `.css` and `.js` file. Practically speaking, you should be able to drop these two files into any application and have full access to your entire toolkit.
{{/markdown}}
| fbrctr/fbrctr.github.io | src/views/docs/building-a-toolkit/assets.html | HTML | mit | 797 |
<?php
namespace IdeHelper\Test\TestCase\Utility;
use Cake\Core\Configure;
use Cake\TestSuite\TestCase;
use IdeHelper\Utility\Plugin;
class PluginTest extends TestCase {
/**
* @return void
*/
protected function setUp(): void {
parent::setUp();
Configure::delete('IdeHelper.plugins');
}
/**
* @return void
*/
protected function tearDown(): void {
parent::tearDown();
Configure::delete('IdeHelper.plugins');
}
/**
* @return void
*/
public function testAll() {
$result = Plugin::all();
$this->assertArrayHasKey('IdeHelper', $result);
$this->assertArrayHasKey('Awesome', $result);
$this->assertArrayHasKey('MyNamespace/MyPlugin', $result);
$this->assertArrayNotHasKey('FooBar', $result);
Configure::write('IdeHelper.plugins', ['FooBar', '-MyNamespace/MyPlugin']);
$result = Plugin::all();
$this->assertArrayHasKey('FooBar', $result);
$this->assertArrayNotHasKey('MyNamespace/MyPlugin', $result);
}
}
| dereuromark/cakephp-ide-helper | tests/TestCase/Utility/PluginTest.php | PHP | mit | 953 |
# Hubot: hubot-loggly-slack
A hubot script to post alerts from Loggly into a Slack room as an attachment.
An attachment has additional formatting options.
See [`src/loggly-slack.coffee`](src/loggly-slack.coffee) for documentation.
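For context, a Slack attachment is just a JSON payload with extra formatting fields. A minimal sketch (all values below are illustrative, not taken from this script):

    // Illustrative attachment payload for a Loggly alert
    const attachment = {
      fallback: 'Loggly alert: error rate exceeded threshold',
      color: 'danger',
      title: 'Loggly alert',
      title_link: 'https://example.loggly.com/alerts/123',
      text: 'Error rate exceeded the configured threshold.',
      fields: [{ title: 'Source', value: 'production', short: true }]
    };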
# Installation
npm install hubot-loggly-slack
# Add "hubot-loggly-slack" to external-scripts.json
# Other hubot slack modules
https://github.com/spanishdict/hubot-awssns-slack
https://github.com/spanishdict/hubot-loggly-slack
https://github.com/spanishdict/hubot-scoutapp-slack
| spanishdict/hubot-loggly-slack | README.md | Markdown | mit | 526 |
<?php
/**
* The Initial Developer of the Original Code is
* Tarmo Alexander Sundström <[email protected]>.
*
* Portions created by the Initial Developer are
* Copyright (C) 2014 Tarmo Alexander Sundström <[email protected]>
*
* All Rights Reserved.
*
* Contributor(s):
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included
* in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*/
namespace Webvaloa;
use Libvaloa\Db;
use RuntimeException;
/**
* Manage and run plugins.
*/
class Plugin
{
private $db;
private $plugins;
private $runnablePlugins;
private $plugin;
// Objects that plugins can access
public $_properties;
public $ui;
public $controller;
public $request;
public $view;
public $xhtml;
public static $properties = array(
// Vendor tag
'vendor' => 'ValoaApplication',
// Events
'events' => array(
'onAfterFrontControllerInit',
'onBeforeController',
'onAfterController',
'onBeforeRender',
'onAfterRender',
),
// Skip plugins in these controllers
'skipControllers' => array(
'Setup',
),
);
public function __construct($plugin = false)
{
$this->plugin = $plugin;
$this->event = false;
$this->plugins = false;
$this->runnablePlugins = false;
// Plugins can access and modify these
$this->_properties = false;
$this->ui = false;
$this->controller = false;
$this->request = false;
$this->view = false;
$this->xhtml = false;
try {
$this->db = \Webvaloa\Webvaloa::DBConnection();
        } catch (\Exception $e) {
}
}
public function setEvent($e)
{
if (in_array($e, self::$properties['events'])) {
$this->event = $e;
}
}
public function plugins()
{
if (!method_exists($this->db, 'prepare')) {
// Just bail out
return false;
}
if (method_exists($this->request, 'getMainController') && (in_array($this->request->getMainController(), self::$properties['skipControllers']))) {
return false;
}
$query = '
SELECT id, plugin, system_plugin
FROM plugin
WHERE blocked = 0
ORDER BY ordering ASC';
try {
$stmt = $this->db->prepare($query);
$stmt->execute();
$this->plugins = $stmt->fetchAll();
return $this->plugins;
        } catch (\PDOException $e) {
}
}
public function pluginExists($name)
{
$name = trim($name);
foreach ($this->plugins as $k => $plugin) {
if ($plugin->plugin == $name) {
return true;
}
}
return false;
}
public function hasRunnablePlugins()
{
// Return runnable plugins if we already gathered them
if ($this->runnablePlugins) {
return $this->runnablePlugins;
}
if (!$this->request) {
throw new RuntimeException('Instance of request is required');
}
if (in_array($this->request->getMainController(), self::$properties['skipControllers'])) {
return false;
}
// Load plugins
if (!$this->plugins) {
$this->plugins();
}
if (!is_array($this->plugins)) {
return false;
}
$controller = $this->request->getMainController();
// Look for executable plugins
foreach ($this->plugins as $k => $plugin) {
if ($controller && strpos($plugin->plugin, $controller) === false
&& strpos($plugin->plugin, 'Plugin') === false) {
continue;
}
$this->runnablePlugins[] = $plugin;
}
        return !empty($this->runnablePlugins) ? $this->runnablePlugins : false;
}
public function runPlugins()
{
if (!$this->runnablePlugins || empty($this->runnablePlugins)) {
return false;
}
$e = $this->event;
foreach ($this->runnablePlugins as $k => $v) {
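            // Builds the plugin class name from the vendor tag, e.g. \ValoaApplication\Plugins\ExamplePlugin.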
$p = '\\'.self::$properties['vendor'].'\Plugins\\'.$v->plugin.'Plugin';
$plugin = new $p();
$plugin->view = &$this->view;
$plugin->ui = &$this->ui;
$plugin->request = &$this->request;
$plugin->controller = &$this->controller;
$plugin->xhtml = &$this->xhtml;
$plugin->_properties = &$this->_properties;
if (method_exists($plugin, $e)) {
$plugin->{$e}();
}
}
}
public static function getPluginStatus($pluginID)
{
$query = '
SELECT blocked
FROM plugin
WHERE system_plugin = 0
AND id = ?';
try {
$db = \Webvaloa\Webvaloa::DBConnection();
$stmt = $db->prepare($query);
$stmt->set((int) $pluginID);
$stmt->execute();
$row = $stmt->fetch();
if (isset($row->blocked)) {
return $row->blocked;
}
return false;
        } catch (\PDOException $e) {
}
}
public static function setPluginStatus($pluginID, $status = 0)
{
$query = '
UPDATE plugin
SET blocked = ?
WHERE id = ?';
try {
$db = \Webvaloa\Webvaloa::DBConnection();
$stmt = $db->prepare($query);
$stmt->set((int) $status);
$stmt->set((int) $pluginID);
$stmt->execute();
        } catch (\PDOException $e) {
}
}
public static function setPluginOrder($pluginID, $ordering = 0)
{
$query = '
UPDATE plugin
SET ordering = ?
WHERE id = ?';
try {
$db = \Webvaloa\Webvaloa::DBConnection();
$stmt = $db->prepare($query);
$stmt->set((int) $ordering);
$stmt->set((int) $pluginID);
$stmt->execute();
        } catch (\PDOException $e) {
}
}
public function install()
{
if (!$this->plugin) {
return false;
}
$installable = $this->discover();
if (!in_array($this->plugin, $installable)) {
return false;
}
$db = \Webvaloa\Webvaloa::DBConnection();
// Install plugin
$object = new Db\Object('plugin', $db);
$object->plugin = $this->plugin;
$object->system_plugin = 0;
$object->blocked = 0;
$object->ordering = 1;
$id = $object->save();
return $id;
}
public function uninstall()
{
if (!$this->plugin) {
return false;
}
$db = \Webvaloa\Webvaloa::DBConnection();
$query = '
DELETE FROM plugin
WHERE system_plugin = 0
AND plugin = ?';
$stmt = $db->prepare($query);
try {
$stmt->set($this->plugin);
$stmt->execute();
return true;
        } catch (\Exception $e) {
}
return false;
}
public function discover()
{
        // Installed plugins; initialize the list so array_merge below is safe
        // even when no plugins are installed or the lookup fails.
        $plugins = array();
        $tmp = $this->plugins();
        foreach ((array) $tmp as $v => $plugin) {
            $plugins[] = $plugin->plugin;
        }
// Discovery paths
$paths[] = LIBVALOA_INSTALLPATH.DIRECTORY_SEPARATOR.self::$properties['vendor'].DIRECTORY_SEPARATOR.'Plugins';
$paths[] = LIBVALOA_EXTENSIONSPATH.DIRECTORY_SEPARATOR.self::$properties['vendor'].DIRECTORY_SEPARATOR.'Plugins';
$skip = array(
'.',
'..',
);
$plugins = array_merge($plugins, $skip);
// Look for new plugins
foreach ($paths as $path) {
if ($handle = opendir($path)) {
while (false !== ($entry = readdir($handle))) {
if ($entry == '.' || $entry == '..') {
continue;
}
if (substr($entry, -3) != 'php') {
continue;
}
$pluginName = str_replace('Plugin.php', '', $entry);
if (!isset($installablePlugins)) {
$installablePlugins = array();
}
if (!in_array($pluginName, $plugins) && !in_array($pluginName, $installablePlugins)) {
$installablePlugins[] = $pluginName;
}
}
closedir($handle);
}
}
if (isset($installablePlugins)) {
return $installablePlugins;
}
return array();
}
}
| lahdekorpi/webvaloa | vendor/Webvaloa/Plugin.php | PHP | mit | 9,863 |
const electron = window.require('electron');
const events = window.require('events');
const {
ipcRenderer
} = electron;
const {
EventEmitter
} = events;
class Emitter extends EventEmitter {}
window.Events = new Emitter();
module.exports = () => {
let settings = window.localStorage.getItem('settings');
if (settings === null) {
const defaultSettings = {
general: {
launch: true,
clipboard: true
},
images: {
copy: false,
delete: true
},
notifications: {
enabled: true
}
};
    window.localStorage.setItem('settings', JSON.stringify(defaultSettings));
    settings = defaultSettings;
  } else {
    // localStorage returns a JSON string; parse it back into an object
    settings = JSON.parse(settings);
  }
  ipcRenderer.send('settings', settings);
};
| vevix/focus | app/js/init.js | JavaScript | mit | 740 |
---
title: 'Production applications updated 9.1.2018 10:05 - 10:54'
lang: en
ref: 2018-01-09-release
image:
published: true
categories: en News
traffictypes:
- Road
tags:
- APIs
- Admin
---
Digitraffic production applications have been updated.
Changelog:
TIE
- DPO-336 - The LAM binary data stream splits into two in LOTJU version 2.5
  - Does not affect the data format. Data from the real-time stations is now fresher.
- DPO-399 - CameraStationsStatusMetadataUpdateJob does not handle obsolete data correctly
We apologize for any inconvenience.
| lapintom/digitraffic | _posts/2018-01-09-release-en.md | Markdown | mit | 549 |
using System;
using System.Collections.Generic;
using System.Linq;
using BohFoundation.ApplicantsRepository.Repositories.Implementations;
using BohFoundation.AzureStorage.TableStorage.Implementations.Essay.Entities;
using BohFoundation.AzureStorage.TableStorage.Interfaces.Essay;
using BohFoundation.AzureStorage.TableStorage.Interfaces.Essay.Helpers;
using BohFoundation.Domain.Dtos.Applicant.Essay;
using BohFoundation.Domain.Dtos.Applicant.Notifications;
using BohFoundation.Domain.Dtos.Common.AzureQueuryObjects;
using BohFoundation.Domain.EntityFrameworkModels.Applicants;
using BohFoundation.Domain.EntityFrameworkModels.Common;
using BohFoundation.Domain.EntityFrameworkModels.Persons;
using BohFoundation.EntityFrameworkBaseClass;
using BohFoundation.TestHelpers;
using EntityFramework.Extensions;
using FakeItEasy;
using Microsoft.VisualStudio.TestTools.UnitTesting;
namespace BohFoundation.ApplicantsRepository.Tests.IntegrationTests
{
[TestClass]
public class ApplicantsEssayRepositoryIntegrationTests
{
private static IEssayRowKeyGenerator _rowKeyGenerator;
private static IAzureEssayRepository _azureAzureEssayRepository;
private static ApplicantsEssayRepository _applicantsEssayRepository;
private static ApplicantsesNotificationRepository _applicantsesNotification;
[ClassInitialize]
public static void InitializeClass(TestContext ctx)
{
Setup();
FirstTestOfNotifications();
FirstUpsert();
SecondUpsert();
SecondTestOfNotifications();
}
#region SettingUp
private static void Setup()
{
TestHelpersCommonFields.InitializeFields();
TestHelpersCommonFakes.InitializeFakes();
ApplicantsGuid = Guid.NewGuid();
Prompt = "prompt" + ApplicantsGuid;
TitleOfEssay = "title" + ApplicantsGuid;
_azureAzureEssayRepository = A.Fake<IAzureEssayRepository>();
_rowKeyGenerator = A.Fake<IEssayRowKeyGenerator>();
CreateEssayTopicAndApplicant();
SetupFakes();
_applicantsesNotification = new ApplicantsesNotificationRepository(TestHelpersCommonFields.DatabaseName,
TestHelpersCommonFakes.ClaimsInformationGetters, TestHelpersCommonFakes.DeadlineUtilities);
_applicantsEssayRepository = new ApplicantsEssayRepository(TestHelpersCommonFields.DatabaseName,
TestHelpersCommonFakes.ClaimsInformationGetters, _azureAzureEssayRepository, _rowKeyGenerator);
}
private static void CreateEssayTopicAndApplicant()
{
var random = new Random();
GraduatingYear = random.Next();
var subject = new EssayTopic
{
EssayPrompt = Prompt,
TitleOfEssay = TitleOfEssay,
RevisionDateTime = DateTime.UtcNow
};
var subject2 = new EssayTopic
{
EssayPrompt = Prompt + 2,
TitleOfEssay = TitleOfEssay + 2,
RevisionDateTime = DateTime.UtcNow
};
var subject3 = new EssayTopic
{
EssayPrompt = "SHOULD NOT SHOW UP IN LIST",
TitleOfEssay = "REALLY SHOULDN't SHOW up",
RevisionDateTime = DateTime.UtcNow,
};
var graduatingYear = new GraduatingClass
{
GraduatingYear = GraduatingYear,
EssayTopics = new List<EssayTopic> { subject, subject2 }
};
var applicant = new Applicant
{
Person = new Person { Guid = ApplicantsGuid, DateCreated = DateTime.UtcNow },
ApplicantPersonalInformation =
new ApplicantPersonalInformation
{
GraduatingClass = graduatingYear,
Birthdate = DateTime.UtcNow,
LastUpdated = DateTime.UtcNow
}
};
using (var context = GetRootContext())
{
context.EssayTopics.Add(subject3);
context.GraduatingClasses.Add(graduatingYear);
context.Applicants.Add(applicant);
context.EssayTopics.Add(subject);
context.SaveChanges();
EssayTopicId = context.EssayTopics.First(topic => topic.EssayPrompt == Prompt).Id;
EssayTopicId2 = context.EssayTopics.First(topic => topic.EssayPrompt == Prompt + 2).Id;
}
}
private static int EssayTopicId2 { get; set; }
private static void SetupFakes()
{
RowKey = "THISISTHEROWKEYFORTHEAPPLICANT";
A.CallTo(() => TestHelpersCommonFakes.ClaimsInformationGetters.GetApplicantsGraduatingYear())
.Returns(GraduatingYear);
A.CallTo(() => TestHelpersCommonFakes.ClaimsInformationGetters.GetUsersGuid()).Returns(ApplicantsGuid);
A.CallTo(() => _rowKeyGenerator.CreateRowKeyForEssay(ApplicantsGuid, EssayTopicId)).Returns(RowKey);
}
private static string RowKey { get; set; }
private static int GraduatingYear { get; set; }
private static string TitleOfEssay { get; set; }
private static string Prompt { get; set; }
private static Guid ApplicantsGuid { get; set; }
#endregion
#region FirstNotifications
private static void FirstTestOfNotifications()
{
FirstNotificationResult = _applicantsesNotification.GetApplicantNotifications();
}
private static ApplicantNotificationsDto FirstNotificationResult { get; set; }
[TestMethod, TestCategory("Integration")]
public void ApplicantsNotificationRepository_FirstGetNotifications_Should_Have_Two_EssayTopics()
{
Assert.AreEqual(2, FirstNotificationResult.EssayNotifications.Count);
}
[TestMethod, TestCategory("Integration")]
public void ApplicantsNotificationRepository_FirstGetNotifications_EssayTopics_Should_Have_No_LastUpdated()
{
foreach (var essayTopic in FirstNotificationResult.EssayNotifications)
{
Assert.IsNull(essayTopic.RevisionDateTime);
}
}
[TestMethod, TestCategory("Integration")]
public void ApplicantsNotificationRepository_FirstGetNotifications_EssayTopics_Should_Have_Right_EssayTopic()
{
foreach (var essayTopic in FirstNotificationResult.EssayNotifications)
{
if (essayTopic.EssayPrompt == Prompt)
{
Assert.AreEqual(TitleOfEssay, essayTopic.TitleOfEssay);
}
else
{
Assert.AreEqual(TitleOfEssay + 2, essayTopic.TitleOfEssay);
}
}
}
[TestMethod, TestCategory("Integration")]
public void ApplicantsNotificationRepository_FirstGetNotifications_EssayTopics_Should_Have_Right_Ids()
{
foreach (var essayTopic in FirstNotificationResult.EssayNotifications)
{
Assert.AreEqual(essayTopic.EssayPrompt == Prompt ? EssayTopicId : EssayTopicId2, essayTopic.EssayTopicId);
}
}
#endregion
#region FirstUpsert
private static void FirstUpsert()
{
Essay = "Essay";
var dto = new EssayDto {Essay = Essay + 1, EssayPrompt = Prompt, EssayTopicId = EssayTopicId};
_applicantsEssayRepository.UpsertEssay(dto);
using (var context = GetRootContext())
{
EssayUpsertResult1 =
context.Essays.First(
essay => essay.EssayTopic.Id == EssayTopicId && essay.Applicant.Person.Guid == ApplicantsGuid);
}
}
private static Essay EssayUpsertResult1 { get; set; }
private static string Essay { get; set; }
[TestMethod, TestCategory("Integration")]
public void ApplicantsEssayRepository_FirstUpsert_Should_Have_6_Characters()
{
Assert.AreEqual(6, EssayUpsertResult1.CharacterLength);
}
[TestMethod, TestCategory("Integration")]
public void ApplicantsEssayRepository_FirstUpsert_Should_Have_Have_RecentUpdated()
{
TestHelpersTimeAsserts.RecentTime(EssayUpsertResult1.RevisionDateTime);
}
[TestMethod, TestCategory("Integration")]
public void ApplicantsEssayRepository_FirstUpsert_Should_Have_Have_Correct_RowKey()
{
Assert.AreEqual(RowKey, EssayUpsertResult1.RowKey);
}
[TestMethod, TestCategory("Integration")]
public void ApplicantsEssayRepository_FirstUpsert_Should_Have_Have_Correct_PartitionKey()
{
Assert.AreEqual(GraduatingYear.ToString(), EssayUpsertResult1.PartitionKey);
}
[TestMethod, TestCategory("Integration")]
public void ApplicantsEssayRepository_FirstUpsert_Should_Have_Positive_Id()
{
TestHelpersCommonAsserts.IsGreaterThanZero(EssayUpsertResult1.Id);
}
[TestMethod, TestCategory("Integration")]
public void ApplicantsEssayRepository_FirstUpsert_Should_Call_CreateRowKey()
{
A.CallTo(() => _rowKeyGenerator.CreateRowKeyForEssay(ApplicantsGuid, EssayTopicId)).MustHaveHappened();
}
[TestMethod, TestCategory("Integration")]
public void ApplicantsEssayRepository_FirstUpsert_Should_Call_UpsertEssay()
{
            // Not checking time: it just isn't coming up. An in-class check confirmed this works.
A.CallTo(() => _azureAzureEssayRepository.UpsertEssay(A<EssayAzureTableEntityDto>
.That.Matches(x =>
x.Essay == Essay + 1 &&
x.EssayPrompt == Prompt &&
x.EssayTopicId == EssayTopicId &&
x.PartitionKey == GraduatingYear.ToString() &&
x.RowKey == RowKey
))).MustHaveHappened();
}
#endregion
#region SecondUpsert
private static void SecondUpsert()
{
var dto = new EssayDto {Essay = Essay + Essay + Essay, EssayPrompt = Prompt, EssayTopicId = EssayTopicId};
_applicantsEssayRepository.UpsertEssay(dto);
using (var context = GetRootContext())
{
EssayUpsertResult2 =
context.Essays.First(
essay => essay.EssayTopic.Id == EssayTopicId && essay.Applicant.Person.Guid == ApplicantsGuid);
}
}
private static Essay EssayUpsertResult2 { get; set; }
private static int EssayTopicId { get; set; }
[TestMethod, TestCategory("Integration")]
public void ApplicantsEssayRepository_SecondUpsert_Should_Have_15_Characters()
{
Assert.AreEqual(15, EssayUpsertResult2.CharacterLength);
}
[TestMethod, TestCategory("Integration")]
public void ApplicantsEssayRepository_SecondUpsert_Should_Have_Have_RecentUpdated_More_Recent_Than_First()
{
TestHelpersTimeAsserts.IsGreaterThanOrEqual(EssayUpsertResult2.RevisionDateTime,
EssayUpsertResult1.RevisionDateTime);
}
[TestMethod, TestCategory("Integration")]
public void ApplicantsEssayRepository_SecondUpsert_Should_Have_Have_Correct_RowKey()
{
Assert.AreEqual(RowKey, EssayUpsertResult2.RowKey);
}
[TestMethod, TestCategory("Integration")]
public void ApplicantsEssayRepository_SecondUpsert_Should_Have_Have_Correct_PartitionKey()
{
Assert.AreEqual(GraduatingYear.ToString(), EssayUpsertResult2.PartitionKey);
}
[TestMethod, TestCategory("Integration")]
public void ApplicantsEssayRepository_SecondUpsert_Should_Have_Equal_Id_To_First()
{
Assert.AreEqual(EssayUpsertResult1.Id, EssayUpsertResult2.Id);
}
[TestMethod, TestCategory("Integration")]
public void ApplicantsEssayRepository_SecondUpsert_Should_Call_CreateRowKey()
{
A.CallTo(() => _rowKeyGenerator.CreateRowKeyForEssay(ApplicantsGuid, EssayTopicId))
.MustHaveHappened(Repeated.AtLeast.Times(3));
}
[TestMethod, TestCategory("Integration")]
public void ApplicantsEssayRepository_SecondUpsert_Should_Call_UpsertEssay()
{
            // Not checking time: it just isn't coming up. An in-class check confirmed this works.
A.CallTo(() => _azureAzureEssayRepository.UpsertEssay(A<EssayAzureTableEntityDto>
.That.Matches(x =>
x.Essay == Essay + Essay + Essay &&
x.EssayPrompt == Prompt &&
x.EssayTopicId == EssayTopicId &&
x.PartitionKey == GraduatingYear.ToString() &&
x.RowKey == RowKey
))).MustHaveHappened();
}
#endregion
#region SecondNotifications
private static void SecondTestOfNotifications()
{
SecondNotificationResult = _applicantsesNotification.GetApplicantNotifications();
}
private static ApplicantNotificationsDto SecondNotificationResult { get; set; }
[TestMethod, TestCategory("Integration")]
public void ApplicantsNotificationRepository_SecondGetNotifications_Should_Have_Two_EssayTopics()
{
Assert.AreEqual(2, SecondNotificationResult.EssayNotifications.Count);
}
[TestMethod, TestCategory("Integration")]
public void ApplicantsNotificationRepository_SecondGetNotifications_EssayTopics_Should_Have_No_LastUpdated()
{
foreach (var essayTopic in SecondNotificationResult.EssayNotifications)
{
if (essayTopic.EssayPrompt == Prompt)
{
Assert.AreEqual(EssayUpsertResult2.RevisionDateTime, essayTopic.RevisionDateTime);
}
else
{
Assert.IsNull(essayTopic.RevisionDateTime);
}
}
}
[TestMethod, TestCategory("Integration")]
public void ApplicantsNotificationRepository_SecondGetNotifications_EssayTopics_Should_Have_Right_EssayTopic()
{
foreach (var essayTopic in SecondNotificationResult.EssayNotifications)
{
if (essayTopic.EssayPrompt == Prompt)
{
Assert.AreEqual(TitleOfEssay, essayTopic.TitleOfEssay);
}
else
{
Assert.AreEqual(TitleOfEssay + 2, essayTopic.TitleOfEssay);
}
}
}
[TestMethod, TestCategory("Integration")]
public void ApplicantsNotificationRepository_SecondGetNotifications_EssayTopics_Should_Have_Right_Ids()
{
foreach (var essayTopic in SecondNotificationResult.EssayNotifications)
{
Assert.AreEqual(essayTopic.EssayPrompt == Prompt ? EssayTopicId : EssayTopicId2, essayTopic.EssayTopicId);
}
}
#endregion
#region Utilities
private static DatabaseRootContext GetRootContext()
{
return new DatabaseRootContext(TestHelpersCommonFields.DatabaseName);
}
[ClassCleanup]
public static void CleanDb()
{
using (var context = new DatabaseRootContext(TestHelpersCommonFields.DatabaseName))
{
context.Essays.Where(essay => essay.Id > 0).Delete();
context.EssayTopics.Where(essayTopic => essayTopic.Id > 0).Delete();
context.ApplicantPersonalInformations.Where(info => info.Id > 0).Delete();
context.GraduatingClasses.Where(gradClass => gradClass.Id > 0).Delete();
}
}
#endregion
#region GetEssay
[TestMethod, TestCategory("Integration")]
public void ApplicantsEssayRepository_GetEssay_Should_Call_CreateRowKeyForEssay()
{
GetEssay();
A.CallTo(() => _rowKeyGenerator.CreateRowKeyForEssay(ApplicantsGuid, EssayTopicId)).MustHaveHappened();
}
[TestMethod, TestCategory("Integration")]
public void ApplicantsEssayRepository_GetEssay_Should_Call_AzureEssayRepository()
{
GetEssay();
A.CallTo(
() =>
_azureAzureEssayRepository.GetEssay(
A<AzureTableStorageEntityKeyDto>.That.Matches(
x => x.PartitionKey == GraduatingYear.ToString() && x.RowKey == RowKey))).MustHaveHappened();
}
[TestMethod, TestCategory("Integration")]
public void ApplicantsEssayRepository_GetEssay_Should_Return_Whatever_TheAzureRepoReturns()
{
var essayDto = new EssayDto();
A.CallTo(() => _azureAzureEssayRepository.GetEssay(A<AzureTableStorageEntityKeyDto>.Ignored))
.Returns(essayDto);
Assert.AreSame(essayDto, GetEssay());
}
private EssayDto GetEssay()
{
return _applicantsEssayRepository.GetEssay(EssayTopicId);
}
#endregion
}
}
| Sobieck00/BOH-Bulldog-Scholarship-Application-Management | BohFoundation.ApplicantsRepository.Tests/IntegrationTests/ApplicantsEssayRepositoryIntegrationTests.cs | C# | mit | 17,599 |
class AddAuthorAndSubjectToClaimStateTransitions < ActiveRecord::Migration[4.2]
def change
add_column :claim_state_transitions, :author_id, :integer
add_column :claim_state_transitions, :subject_id, :integer
end
end
| ministryofjustice/advocate-defence-payments | db/migrate/20160909150238_add_author_and_subject_to_claim_state_transitions.rb | Ruby | mit | 228 |
# Using a compact OS
FROM registry.dataos.io/library/nginx
MAINTAINER Golfen Guo <[email protected]>
# Install Nginx
# Add 2048 stuff into Nginx server
COPY . /usr/share/nginx/html
EXPOSE 80
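# Assumed usage, not part of the original file:
#   docker build -t dao-2048 .
#   docker run -d -p 80:80 dao-2048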
| yepengxj/dao-2048 | Dockerfile | Dockerfile | mit | 201 |
require 'test_helper'
require 'cache_value/util'
class UtilTest < Test::Unit::TestCase
include CacheValue::Util
context 'hex_digest' do
should 'return the same digest for identical hashes' do
hex_digest({ :ha => 'ha'}).should == hex_digest({ :ha => 'ha'})
end
end
end
| tobias/cache_value | test/util_test.rb | Ruby | mit | 298 |
# This migration comes from thinkspace_resource (originally 20150502000000)
class AddFingerprintToFile < ActiveRecord::Migration
def change
add_column :thinkspace_resource_files, :file_fingerprint, :string
end
end
| sixthedge/cellar | packages/opentbl/api/db/migrate/20170511210074_add_fingerprint_to_file.thinkspace_resource.rb | Ruby | mit | 222 |
/**
* Copyright (c) Microsoft Corporation. All rights reserved.
* Licensed under the MIT License. See License.txt in the project root for
* license information.
*
* Code generated by Microsoft (R) AutoRest Code Generator.
*/
package com.microsoft.azure.management.network.v2020_04_01;
import java.util.Map;
import com.fasterxml.jackson.annotation.JsonProperty;
/**
* Tags object for patch operations.
*/
public class TagsObject {
/**
* Resource tags.
*/
@JsonProperty(value = "tags")
private Map<String, String> tags;
/**
* Get resource tags.
*
* @return the tags value
*/
public Map<String, String> tags() {
return this.tags;
}
/**
* Set resource tags.
*
* @param tags the tags value to set
* @return the TagsObject object itself.
*/
public TagsObject withTags(Map<String, String> tags) {
this.tags = tags;
return this;
}
}
| selvasingh/azure-sdk-for-java | sdk/network/mgmt-v2020_04_01/src/main/java/com/microsoft/azure/management/network/v2020_04_01/TagsObject.java | Java | mit | 954 |
/*
* This file is part of Sponge, licensed under the MIT License (MIT).
*
* Copyright (c) SpongePowered <https://www.spongepowered.org>
* Copyright (c) contributors
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
package org.spongepowered.common.mixin.core.server.network;
import net.minecraft.network.NetworkManager;
import net.minecraft.network.login.server.S00PacketDisconnect;
import net.minecraft.server.MinecraftServer;
import net.minecraft.server.management.ServerConfigurationManager;
import net.minecraft.server.network.NetHandlerLoginServer;
import net.minecraft.util.ChatComponentTranslation;
import net.minecraft.util.IChatComponent;
import org.apache.logging.log4j.Logger;
import org.spongepowered.api.event.cause.NamedCause;
import org.spongepowered.api.profile.GameProfile;
import org.spongepowered.api.event.SpongeEventFactory;
import org.spongepowered.api.event.cause.Cause;
import org.spongepowered.api.event.network.ClientConnectionEvent;
import org.spongepowered.api.network.RemoteConnection;
import org.spongepowered.api.text.Text;
import org.spongepowered.asm.lib.Opcodes;
import org.spongepowered.asm.mixin.Mixin;
import org.spongepowered.asm.mixin.Shadow;
import org.spongepowered.asm.mixin.injection.At;
import org.spongepowered.asm.mixin.injection.Inject;
import org.spongepowered.asm.mixin.injection.Redirect;
import org.spongepowered.asm.mixin.injection.callback.CallbackInfo;
import org.spongepowered.common.SpongeImpl;
import org.spongepowered.common.interfaces.IMixinNetHandlerLoginServer;
import org.spongepowered.common.text.SpongeTexts;
import java.net.SocketAddress;
import java.util.Optional;
@Mixin(NetHandlerLoginServer.class)
public abstract class MixinNetHandlerLoginServer implements IMixinNetHandlerLoginServer {
@Shadow private static Logger logger;
@Shadow public NetworkManager networkManager;
@Shadow private MinecraftServer server;
@Shadow private com.mojang.authlib.GameProfile loginGameProfile;
@Shadow public abstract String getConnectionInfo();
@Shadow public abstract com.mojang.authlib.GameProfile getOfflineProfile(com.mojang.authlib.GameProfile profile);
@Redirect(method = "tryAcceptPlayer", at = @At(value = "INVOKE", target = "Lnet/minecraft/server/management/ServerConfigurationManager;"
+ "allowUserToConnect(Ljava/net/SocketAddress;Lcom/mojang/authlib/GameProfile;)Ljava/lang/String;"))
public String onAllowUserToConnect(ServerConfigurationManager confMgr, SocketAddress address, com.mojang.authlib.GameProfile profile) {
return null; // We handle disconnecting
}
private void closeConnection(IChatComponent reason) {
try {
logger.info("Disconnecting " + this.getConnectionInfo() + ": " + reason.getUnformattedText());
this.networkManager.sendPacket(new S00PacketDisconnect(reason));
this.networkManager.closeChannel(reason);
} catch (Exception exception) {
logger.error("Error whilst disconnecting player", exception);
}
}
private void disconnectClient(Optional<Text> disconnectMessage) {
IChatComponent reason = null;
if (disconnectMessage.isPresent()) {
reason = SpongeTexts.toComponent(disconnectMessage.get());
} else {
reason = new ChatComponentTranslation("disconnect.disconnected");
}
this.closeConnection(reason);
}
@Override
public boolean fireAuthEvent() {
Optional<Text> disconnectMessage = Optional.of(Text.of("You are not allowed to log in to this server."));
ClientConnectionEvent.Auth event = SpongeEventFactory.createClientConnectionEventAuth(Cause.of(NamedCause.source(this.loginGameProfile)),
disconnectMessage, disconnectMessage, (RemoteConnection) this.networkManager, (GameProfile) this.loginGameProfile);
SpongeImpl.postEvent(event);
if (event.isCancelled()) {
this.disconnectClient(event.getMessage());
}
return event.isCancelled();
}
@Inject(method = "processLoginStart", at = @At(value = "FIELD", target = "Lnet/minecraft/server/network/NetHandlerLoginServer;"
+ "currentLoginState:Lnet/minecraft/server/network/NetHandlerLoginServer$LoginState;",
opcode = Opcodes.PUTFIELD, ordinal = 1), cancellable = true)
public void fireAuthEventOffline(CallbackInfo ci) {
// Move this check up here, so that the UUID isn't null when we fire the event
if (!this.loginGameProfile.isComplete()) {
this.loginGameProfile = this.getOfflineProfile(this.loginGameProfile);
}
if (this.fireAuthEvent()) {
ci.cancel();
}
}
}
| kashike/SpongeCommon | src/main/java/org/spongepowered/common/mixin/core/server/network/MixinNetHandlerLoginServer.java | Java | mit | 5,729 |
from otp.ai.AIBaseGlobal import *
import DistributedCCharBaseAI
from direct.directnotify import DirectNotifyGlobal
from direct.fsm import ClassicFSM, State
from direct.fsm import State
from direct.task import Task
import random
from toontown.toonbase import ToontownGlobals
from toontown.toonbase import TTLocalizer
import CharStateDatasAI
class DistributedGoofySpeedwayAI(DistributedCCharBaseAI.DistributedCCharBaseAI):
notify = DirectNotifyGlobal.directNotify.newCategory('DistributedGoofySpeedwayAI')
def __init__(self, air):
DistributedCCharBaseAI.DistributedCCharBaseAI.__init__(self, air, TTLocalizer.Goofy)
self.fsm = ClassicFSM.ClassicFSM('DistributedGoofySpeedwayAI', [State.State('Off', self.enterOff, self.exitOff, ['Lonely', 'TransitionToCostume', 'Walk']),
State.State('Lonely', self.enterLonely, self.exitLonely, ['Chatty', 'Walk', 'TransitionToCostume']),
State.State('Chatty', self.enterChatty, self.exitChatty, ['Lonely', 'Walk', 'TransitionToCostume']),
State.State('Walk', self.enterWalk, self.exitWalk, ['Lonely', 'Chatty', 'TransitionToCostume']),
State.State('TransitionToCostume', self.enterTransitionToCostume, self.exitTransitionToCostume, ['Off'])], 'Off', 'Off')
self.fsm.enterInitialState()
self.handleHolidays()
def delete(self):
self.fsm.requestFinalState()
DistributedCCharBaseAI.DistributedCCharBaseAI.delete(self)
self.lonelyDoneEvent = None
self.lonely = None
self.chattyDoneEvent = None
self.chatty = None
self.walkDoneEvent = None
self.walk = None
return
def generate(self):
DistributedCCharBaseAI.DistributedCCharBaseAI.generate(self)
name = self.getName()
self.lonelyDoneEvent = self.taskName(name + '-lonely-done')
self.lonely = CharStateDatasAI.CharLonelyStateAI(self.lonelyDoneEvent, self)
self.chattyDoneEvent = self.taskName(name + '-chatty-done')
self.chatty = CharStateDatasAI.CharChattyStateAI(self.chattyDoneEvent, self)
self.walkDoneEvent = self.taskName(name + '-walk-done')
        if self.diffPath is None:
self.walk = CharStateDatasAI.CharWalkStateAI(self.walkDoneEvent, self)
else:
self.walk = CharStateDatasAI.CharWalkStateAI(self.walkDoneEvent, self, self.diffPath)
return
def walkSpeed(self):
return ToontownGlobals.GoofySpeed
def start(self):
self.fsm.request('Lonely')
def __decideNextState(self, doneStatus):
if self.transitionToCostume == 1:
curWalkNode = self.walk.getDestNode()
if simbase.air.holidayManager:
if ToontownGlobals.HALLOWEEN_COSTUMES in simbase.air.holidayManager.currentHolidays and simbase.air.holidayManager.currentHolidays[ToontownGlobals.HALLOWEEN_COSTUMES]:
simbase.air.holidayManager.currentHolidays[ToontownGlobals.HALLOWEEN_COSTUMES].triggerSwitch(curWalkNode, self)
self.fsm.request('TransitionToCostume')
elif ToontownGlobals.APRIL_FOOLS_COSTUMES in simbase.air.holidayManager.currentHolidays and simbase.air.holidayManager.currentHolidays[ToontownGlobals.APRIL_FOOLS_COSTUMES]:
simbase.air.holidayManager.currentHolidays[ToontownGlobals.APRIL_FOOLS_COSTUMES].triggerSwitch(curWalkNode, self)
self.fsm.request('TransitionToCostume')
else:
self.notify.warning('transitionToCostume == 1 but no costume holiday')
else:
self.notify.warning('transitionToCostume == 1 but no holiday Manager')
if doneStatus['state'] == 'lonely' and doneStatus['status'] == 'done':
self.fsm.request('Walk')
elif doneStatus['state'] == 'chatty' and doneStatus['status'] == 'done':
self.fsm.request('Walk')
elif doneStatus['state'] == 'walk' and doneStatus['status'] == 'done':
if len(self.nearbyAvatars) > 0:
self.fsm.request('Chatty')
else:
self.fsm.request('Lonely')
def enterOff(self):
pass
def exitOff(self):
DistributedCCharBaseAI.DistributedCCharBaseAI.exitOff(self)
def enterLonely(self):
self.lonely.enter()
self.acceptOnce(self.lonelyDoneEvent, self.__decideNextState)
def exitLonely(self):
self.ignore(self.lonelyDoneEvent)
self.lonely.exit()
def __goForAWalk(self, task):
self.notify.debug('going for a walk')
self.fsm.request('Walk')
return Task.done
def enterChatty(self):
self.chatty.enter()
self.acceptOnce(self.chattyDoneEvent, self.__decideNextState)
def exitChatty(self):
self.ignore(self.chattyDoneEvent)
self.chatty.exit()
def enterWalk(self):
self.notify.debug('going for a walk')
self.walk.enter()
self.acceptOnce(self.walkDoneEvent, self.__decideNextState)
def exitWalk(self):
self.ignore(self.walkDoneEvent)
self.walk.exit()
def avatarEnterNextState(self):
if len(self.nearbyAvatars) == 1:
if self.fsm.getCurrentState().getName() != 'Walk':
self.fsm.request('Chatty')
else:
self.notify.debug('avatarEnterNextState: in walk state')
else:
self.notify.debug('avatarEnterNextState: num avatars: ' + str(len(self.nearbyAvatars)))
def avatarExitNextState(self):
if len(self.nearbyAvatars) == 0:
if self.fsm.getCurrentState().getName() != 'Walk':
self.fsm.request('Lonely')
def handleHolidays(self):
DistributedCCharBaseAI.DistributedCCharBaseAI.handleHolidays(self)
if hasattr(simbase.air, 'holidayManager'):
if ToontownGlobals.APRIL_FOOLS_COSTUMES in simbase.air.holidayManager.currentHolidays:
if simbase.air.holidayManager.currentHolidays[ToontownGlobals.APRIL_FOOLS_COSTUMES] != None and simbase.air.holidayManager.currentHolidays[ToontownGlobals.APRIL_FOOLS_COSTUMES].getRunningState():
self.diffPath = TTLocalizer.Donald
return
    def getCCLocation(self):
        if self.diffPath is None:
            return 1
        else:
            return 0
def enterTransitionToCostume(self):
pass
def exitTransitionToCostume(self):
pass
| ksmit799/Toontown-Source | toontown/classicchars/DistributedGoofySpeedwayAI.py | Python | mit | 6,450 |
<?php
namespace Rmc\Core\StaticPageBundle\DependencyInjection;
use Symfony\Component\Config\Definition\Builder\TreeBuilder;
use Symfony\Component\Config\Definition\ConfigurationInterface;
/**
* This is the class that validates and merges configuration from your app/config files
*
* To learn more see {@link http://symfony.com/doc/current/cookbook/bundles/extension.html#cookbook-bundles-extension-config-class}
*/
class Configuration implements ConfigurationInterface
{
/**
* {@inheritDoc}
*/
public function getConfigTreeBuilder()
{
$treeBuilder = new TreeBuilder();
$rootNode = $treeBuilder->root('rmc_core_static_page');
$rootNode
->children()
->arrayNode('static_page')
->children()
->scalarNode('is_enabled')->end()
->scalarNode('source')->end()
->scalarNode('entity_manager_name')->end()
->scalarNode('entity_class')->end()
->scalarNode('local_feed_path')->defaultFalse()->end()
->end()
->end()
->end();
// Here you should define the parameters that are allowed to
// configure your bundle. See the documentation linked above for
// more information on that topic.
return $treeBuilder;
}
}
| jignesh-russmediatech/rmcdemo | src/Rmc/Core/StaticPageBundle/DependencyInjection/Configuration.php | PHP | mit | 1,405 |
<!--?xml version="1.0"?--><html><head></head><body></body></html> | textlint/textlint-plugin-html | test/ast-test-case/doctype-quirksmode-xml/result.html | HTML | mit | 65 |
import React from 'react';
import {
Link
} from 'react-router';
import HotdotActions from '../actions/HotdotActions';
import HotdotObjStore from '../stores/HotdotObjStore';
import MyInfoNavbar from './MyInfoNavbar';
import Weixin from './Weixin';
class Hotdot extends React.Component {
constructor(props) {
super(props);
this.state = HotdotObjStore.getState();
this.onChange = this.onChange.bind(this);
}
componentDidMount() {
HotdotActions.getHotdotDatas();
$(".month-search").hide();
$(".navbar-hotdot").on("touchend",function(){
var index = $(this).index();
if(index==0){
                // "This week" tab
$(".month-search").hide();
$(".week-search").show();
}else{
                // "This month" tab
$(".month-search").show();
$(".week-search").hide();
}
});
HotdotObjStore.listen(this.onChange);
Weixin.getUrl();
Weixin.weixinReady();
}
componentWillUnmount() {
HotdotObjStore.unlisten(this.onChange);
}
onChange(state) {
this.setState(state);
}
getUpOrDown(curData,preData,isWeek){
var preDataItem = isWeek ? preData.week:preData.month;
        // Dropped the original `preData == []` term: comparing against a fresh
        // array literal is always false, and `preData == false` already matches
        // an empty array via loose equality.
        if(preData == false || preDataItem == undefined){
return (<span className="hotdotRight"><span className="glyphicon-trend glyphicon glyphicon-arrow-up"></span>
<span className="badge">{curData.value}</span></span>);
}else{
for(var i = 0;i < preDataItem.length;i++){
if(preDataItem[i].word == curData.word){
if(preDataItem[i].value < curData.value){
return (<span className="hotdotRight"><span className="glyphicon-trend glyphicon glyphicon-arrow-up"></span>
<span className="badge">{curData.value}</span></span>);
}else{
return (<span className="hotdotRight"><span className="glyphicon-trend glyphicon glyphicon-arrow-down"></span>
<span className="badge" style={{backgroundColor:"#4F81E3"}}>{curData.value}</span></span>);
}
}
}
}
return (<span className="hotdotRight"><span className="glyphicon-trend glyphicon glyphicon-arrow-up"></span>
<span className="badge">{curData.value}</span></span>);
}
render() {
var hotdotData = (this.state.data);
var firstHotData = hotdotData[0];
var preHotData ;
if(hotdotData.length > 7){
preHotData = hotdotData[7];
}else{
preHotData = [];
}
if(firstHotData){
var weekList = firstHotData.week.map((weekItem,i)=>(
<li className="list-group-item" key={i}>
{this.getUpOrDown(weekItem,preHotData,true)}
{weekItem.word}
</li>
));
if(weekList.length==0){
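                // Fallback copy, roughly: "The data isn't ready yet, want to look at the other pages?"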
weekList = <div className = "noData">数据还没有准备好,要不去其他页面瞅瞅?</div>
}
var monthList = firstHotData.month.map((monthItem,i)=>(
<li className="list-group-item" key={i}>
{this.getUpOrDown(monthItem,preHotData,false)}
{monthItem.word}
</li>
));
if(monthList.length==0){
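                // Fallback copy, roughly: "Whoops, the data for this page isn't ready, check the other pages?"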
monthList = <div className = "noData">Whops,这个页面的数据没有准备好,去其他页面瞅瞅?</div>
}
}else{
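            // Placeholder copy, roughly: "Under construction, stay tuned..."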
var weekList = (<span>正在构建,敬请期待...</span>);
var monthList = (<span>正在构建,敬请期待...</span>);
}
return (<div>
<div className="content-container">
<div className="week-search">
<div className="panel panel-back">
<div className="panel-heading">
<span className="panel-title">本周关键字排行榜</span>
<div className="navbar-key-container">
<span className="navbar-hotdot navbar-week navbar-hotdot-active">本周</span>
<span className="navbar-hotdot navbar-month">本月</span>
</div>
</div>
<div className="panel-body">
<ul className="list-group">
{weekList}
</ul>
</div>
</div>
</div>
<div className="month-search">
<div className="panel panel-back">
<div className="panel-heading">
<span className="panel-title">本月关键字排行榜</span>
<div className="navbar-key-container">
<span className="navbar-hotdot navbar-week">本周</span>
<span className="navbar-hotdot navbar-month navbar-hotdot-active">本月</span>
</div>
</div>
<div className="panel-body">
<ul className="list-group">
{monthList}
</ul>
</div>
</div>
</div>
</div>
</div>);
}
}
export default Hotdot; | kongchun/BigData-Web | app/m_components/Hotdot.js | JavaScript | mit | 5,621 |
// Copyright (c) 2013-2014 PropCoin Developers
#ifndef CLIENTVERSION_H
#define CLIENTVERSION_H
//
// client versioning and copyright year
//
// These need to be macros, as version.cpp's and bitcoin-qt.rc's voodoo requires it
#define CLIENT_VERSION_MAJOR 1
#define CLIENT_VERSION_MINOR 5
#define CLIENT_VERSION_REVISION 1
#define CLIENT_VERSION_BUILD 0
// Set to true for release, false for prerelease or test build
#define CLIENT_VERSION_IS_RELEASE true
// Copyright year (2009-this)
// Todo: update this when changing our copyright comments in the source
#define COPYRIGHT_YEAR 2014
// Converts the parameter X to a string after macro replacement on X has been performed.
// Don't merge these into one macro!
#define STRINGIZE(X) DO_STRINGIZE(X)
#define DO_STRINGIZE(X) #X
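// Example: STRINGIZE(CLIENT_VERSION_MAJOR) expands the argument first and
// yields "1"; DO_STRINGIZE(CLIENT_VERSION_MAJOR) alone would yield
// "CLIENT_VERSION_MAJOR".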
#endif // CLIENTVERSION_H
| demomint/prop | src/clientversion.h | C | mit | 829 |
varnish
=======
Varnish configuration for running the EOL site:
sudo docker run -v /eol/varnish/default.vcl:/etc/varnish/default.vcl \
-p 80:80 eoldocker/varnish:v3.0.5
| EolDocker/varnish | README.md | Markdown | mit | 150 |
<?php
namespace PayU\Api\Response\Builder;
use PayU\Api\Request\RequestInterface;
use PayU\Api\Response\AbstractResponse;
use Psr\Http\Message\ResponseInterface;
/**
* Interface BuilderInterface
*
* Provides a common interface to build response objects based on request context
*
* @package PayU\Api\Response\Builder
* @author Lucas Mendes <[email protected]>
*/
interface BuilderInterface
{
/**
* Build a response object
*
* @param RequestInterface $request
* @param ResponseInterface $response
* @param string $context
* @return AbstractResponse
*/
public function build(RequestInterface $request, ResponseInterface $response, $context = null);
} | devsdmf/payu-php-sdk | src/PayU/Api/Response/Builder/BuilderInterface.php | PHP | mit | 713 |
using SolrExpress.Search.Parameter;
using System;
using System.Globalization;
using System.Linq;
using System.Text;
namespace SolrExpress.Utility
{
/// <summary>
    /// Helper class used to extract information from parameters
/// </summary>
internal static class ParameterUtil
{
/// <summary>
/// Get the sort type and direction
/// </summary>
/// <param name="solrFacetSortType">Type used in match</param>
/// <param name="typeName">Type name</param>
/// <param name="sortName">Sort direction</param>
public static void GetFacetSort(FacetSortType solrFacetSortType, out string typeName, out string sortName)
{
switch (solrFacetSortType)
{
case FacetSortType.IndexAsc:
typeName = "index";
sortName = "asc";
break;
case FacetSortType.IndexDesc:
typeName = "index";
sortName = "desc";
break;
case FacetSortType.CountAsc:
typeName = "count";
sortName = "asc";
break;
case FacetSortType.CountDesc:
typeName = "count";
sortName = "desc";
break;
default:
throw new ArgumentException(nameof(solrFacetSortType));
}
}
/// <summary>
        /// Calculates and returns the spatial filter formula
        /// </summary>
        /// <param name="fieldName">Field name</param>
        /// <param name="functionType">Function used in the spatial filter</param>
        /// <param name="centerPoint">Center point of the spatial filter</param>
        /// <param name="distance">Distance from the center point</param>
        /// <returns>Spatial formula</returns>
internal static string GetSpatialFormule(string fieldName, SpatialFunctionType functionType, GeoCoordinate centerPoint, decimal distance)
{
var functionTypeStr = functionType.ToString().ToLower();
var latitude = centerPoint.Latitude.ToString("G", CultureInfo.InvariantCulture);
var longitude = centerPoint.Longitude.ToString("G", CultureInfo.InvariantCulture);
var distanceStr = distance.ToString("G", CultureInfo.InvariantCulture);
return $"{{!{functionTypeStr} sfield={fieldName} pt={latitude},{longitude} d={distanceStr}}}";
}
/// <summary>
/// Get the field with excludes
/// </summary>
/// <param name="excludes">Excludes tags</param>
/// <param name="aliasName">Alias name</param>
/// <param name="fieldName">Field name</param>
internal static string GetFacetName(string[] excludes, string aliasName, string fieldName)
{
var sb = new StringBuilder();
var needsBraces = (excludes?.Any() ?? false) || !string.IsNullOrWhiteSpace(aliasName);
if (needsBraces)
{
sb.Append("{!");
}
if (excludes?.Any() ?? false)
{
sb.Append($"ex={string.Join(",", excludes)}");
}
if (sb.Length > 2)
{
sb.Append(" ");
}
if (!string.IsNullOrWhiteSpace(aliasName))
{
sb.Append($"key={aliasName}");
}
if (needsBraces)
{
sb.Append("}");
}
sb.Append(fieldName);
return sb.ToString();
}
/// <summary>
/// Get the filter with tag
/// </summary>
/// <param name="query">Query value</param>
/// <param name="aliasName">Alias name</param>
public static string GetFilterWithTag(string query, string aliasName)
{
return !string.IsNullOrWhiteSpace(aliasName) ? $"{{!tag={aliasName}}}{query}" : query;
}
}
}
| solr-express/solr-express | src/SolrExpress/Utility/ParameterUtil.cs | C# | mit | 4,034 |
/**
* React Starter Kit (https://www.reactstarterkit.com/)
*
* Copyright © 2014-2016 Kriasoft, LLC. All rights reserved.
*
* This source code is licensed under the MIT license found in the
* LICENSE.txt file in the root directory of this source tree.
*/
import 'babel-polyfill';
import ReactDOM from 'react-dom';
import React from 'react';
import FastClick from 'fastclick';
import Router from './routes';
import Location from './core/Location';
import { addEventListener, removeEventListener } from './core/DOMUtils';
import { ApolloClient, createNetworkInterface } from 'react-apollo';
function getCookie(name) {
let value = "; " + document.cookie;
let parts = value.split("; " + name + "=");
if (parts.length == 2) return parts.pop().split(";").shift();
}
const networkInterface = createNetworkInterface('/graphql', {
credentials: 'same-origin',
uri: '/graphql',
headers: {
Cookie: getCookie("id_token")
}
});
const client = new ApolloClient({
connectToDevTools: true,
networkInterface: networkInterface,
});
let cssContainer = document.getElementById('css');
const appContainer = document.getElementById('app');
const context = {
insertCss: styles => styles._insertCss(),
onSetTitle: value => (document.title = value),
onSetMeta: (name, content) => {
// Remove and create a new <meta /> tag in order to make it work
// with bookmarks in Safari
const elements = document.getElementsByTagName('meta');
Array.from(elements).forEach((element) => {
if (element.getAttribute('name') === name) {
element.parentNode.removeChild(element);
}
});
const meta = document.createElement('meta');
meta.setAttribute('name', name);
meta.setAttribute('content', content);
document
.getElementsByTagName('head')[0]
.appendChild(meta);
},
client
};
// Google Analytics tracking. Don't send 'pageview' event after the first
// rendering, as it was already sent by the Html component.
let trackPageview = () => (trackPageview = () => window.ga('send', 'pageview'));
function render(state) {
Router.dispatch(state, (newState, component) => {
ReactDOM.render( component, appContainer,
() => {
// Restore the scroll position if it was saved into the state
if (state.scrollY !== undefined) {
window.scrollTo(state.scrollX, state.scrollY);
} else {
window.scrollTo(0, 0);
}
trackPageview();
// Remove the pre-rendered CSS because it's no longer used
// after the React app is launched
if (cssContainer) {
cssContainer.parentNode.removeChild(cssContainer);
cssContainer = null;
}
});
});
}
function run() {
let currentLocation = null;
let currentState = null;
// Make taps on links and buttons work fast on mobiles
FastClick.attach(document.body);
// Re-render the app when window.location changes
const unlisten = Location.listen(location => {
currentLocation = location;
currentState = Object.assign({}, location.state, {
path: location.pathname,
query: location.query,
state: location.state,
context,
});
render(currentState);
});
// Save the page scroll position into the current location's state
const supportPageOffset = window.pageXOffset !== undefined;
const isCSS1Compat = ((document.compatMode || '') === 'CSS1Compat');
const setPageOffset = () => {
currentLocation.state = currentLocation.state || Object.create(null);
if (supportPageOffset) {
currentLocation.state.scrollX = window.pageXOffset;
currentLocation.state.scrollY = window.pageYOffset;
} else {
currentLocation.state.scrollX = isCSS1Compat ?
document.documentElement.scrollLeft : document.body.scrollLeft;
currentLocation.state.scrollY = isCSS1Compat ?
document.documentElement.scrollTop : document.body.scrollTop;
}
};
addEventListener(window, 'scroll', setPageOffset);
addEventListener(window, 'pagehide', () => {
removeEventListener(window, 'scroll', setPageOffset);
unlisten();
});
}
// Run the application when both DOM is ready and page content is loaded
if (['complete', 'loaded', 'interactive'].includes(document.readyState) && document.body) {
run();
} else {
document.addEventListener('DOMContentLoaded', run, false);
}
| reicheltp/Sonic | src/client.js | JavaScript | mit | 4,372 |
var $M = require("@effectful/debugger"),
$x = $M.context,
$ret = $M.ret,
$unhandled = $M.unhandled,
$brk = $M.brk,
$lset = $M.lset,
$mcall = $M.mcall,
$m = $M.module("file.js", null, typeof module === "undefined" ? null : module, null, "$", {
__webpack_require__: typeof __webpack_require__ !== "undefined" && __webpack_require__
}, null),
$s$1 = [{
e: [1, "1:9-1:10"]
}, null, 0],
$s$2 = [{}, $s$1, 1],
$m$0 = $M.fun("m$0", "file.js", null, null, [], 0, 2, "1:0-4:0", 32, function ($, $l, $p) {
for (;;) switch ($.state = $.goto) {
case 0:
$lset($l, 1, $m$1($));
$.goto = 2;
continue;
case 1:
$.goto = 2;
return $unhandled($.error);
case 2:
return $ret($.result);
default:
throw new Error("Invalid state");
}
}, null, null, 0, [[0, "1:0-3:1", $s$1], [16, "4:0-4:0", $s$1], [16, "4:0-4:0", $s$1]]),
$m$1 = $M.fun("m$1", "e", null, $m$0, [], 0, 2, "1:0-3:1", 0, function ($, $l, $p) {
for (;;) switch ($.state = $.goto) {
case 0:
$.goto = 1;
$brk();
$.state = 1;
case 1:
$.goto = 2;
$p = ($x.call = eff)(1);
$.state = 2;
case 2:
$l[1] = $p;
$.goto = 3;
$p = ($x.call = eff)(2);
$.state = 3;
case 3:
$.goto = 4;
$mcall("log", console, $l[1] + $p);
$.state = 4;
case 4:
$.goto = 6;
$brk();
continue;
case 5:
$.goto = 6;
return $unhandled($.error);
case 6:
return $ret($.result);
default:
throw new Error("Invalid state");
}
}, null, null, 1, [[4, "2:2-2:31", $s$2], [2, "2:14-2:20", $s$2], [2, "2:23-2:29", $s$2], [2, "2:2-2:30", $s$2], [36, "3:1-3:1", $s$2], [16, "3:1-3:1", $s$2], [16, "3:1-3:1", $s$2]]);
$M.moduleExports(); | awto/effectfuljs | packages/core/test/samples/simple/expr/test04-out-ds.js | JavaScript | mit | 1,801 |
# INTRODUCTION
Triplie is an AI bot based on 2nd- up to 5th-order Markov models. It uses an
SQLite database for storage.
Triplie learns by creating
1. a dictionary of words
2. a graph representing valid 5-grams (consecutive groups of 5 words)
encountered in the text
3. a graph of associations between words from sentences formed according to the
Hebbian rule
To respond to a user, triplie extracts keywords from the user's text, finds
their most appropriate associated keywords in the Hebbian association network,
and generates replies that contain the associated keywords using an
algorithm based on multiple breadth-first-search Markov chains.
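As a rough sketch of the learning step only (the function below is illustrative and not taken from the actual triplie source):

    // Illustrative: count the 5-grams of a tokenized sentence.
    // The real bot persists these in SQLite; a Map stands in here.
    function learnFiveGrams(words, ngrams) {
      for (let i = 0; i + 5 <= words.length; i++) {
        const key = words.slice(i, i + 5).join(' ');
        ngrams.set(key, (ngrams.get(key) || 0) + 1);
      }
    }

    const ngrams = new Map();
    learnFiveGrams('the quick brown fox jumps over'.split(' '), ngrams);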
For more information on installing and configuring, read below.
You can join the project's IRC channel too:
[#triplie on irc.freenode.net](irc://irc.freenode.net/#triplie)
# Install
## Prerequisites
Download and install [node.js](http://nodejs.org/) for your system.
It's recommended to build node from source. If you don't do that, make
sure that npm is also installed alongside node and that the
node binary is called "node".
Then from a terminal run:
npm install -g triplie
This will install the `triplie` command on your system.
Configure the bot as explained below before running!
# CONFIGURATION
If running the bot for the first time and its not configured,
you should create a new directory and run:
triplie config.yaml --init
to create the initial config file
### Edit config.yaml
config.yaml is already pre-filled with some defaults for your bot. You will want
to change some of these settings.
The configuration file is really well commented. Open it and edit it according
to the instructions contained inside. Once you run the bot, however, the
instructions will disappear the moment you change a setting by giving a command
to the bot.
# RUNNING
After you edited the config file, to run the bot use the command:
triplie config.yaml
# IMPORT EXISTING TEXT
If called with the argument `--feed` triplie will receive data from stdin,
parse it using a regular expression then feed the database.
Example:
cat log.txt | triplie config.yaml --feed --regex '(?<year>\d+)-(?<month>\d+)-(?<day>)T(?<hour>\d+):(?<minute>\d+):(?<second>\d+)Z\s+(?<nick>.+):\s+(?<text>.+)'
will work for a `log.txt` that has lines in the format:
2013-04-04T13:15:00Z someuser: I wrote some text
The syntax is XRegExp and uses named groups. See
[the XRegExp readme](https://npmjs.org/package/xregexp) for more info
Currently, supported named captures are:
* year
* month
* day
* hour
* minute
* second
* timestamp - unix timestamp in seconds, used instead of the date captures
* timestampms - unix timestamp in miliseconds, used instead of both above.
* text - the text content
Timestamp example:
    cat log.txt | triplie config.yaml --feed --regex '(?<timestamp>\d+) (?<text>.+)'
will match `log.txt` containing lines in the format:
1234567890 example text here
All captures except text are optional - the time is optional and if left out
the feeder will generate reasonable "fake" timestamps.
cat log.txt | triplie config.yaml --feed --regex '(?<text>.+)'
# COMMANDS
List of triplie's commands (assuming "!" is the cmdchar)
1. !join #channel - causes the bot to join and remember the channel
2. !part #channel - part and forget channel
3. !reload - causes reload of the bot code, useful for development
4. !set path value - set a config setting to the specified value. Examples
!set ai.sleep.1 10 - Set the upper sleep limit to 10 seconds
!set ai.sleep [2,3] - Set both sleep limits. Value musn't contain space.
5. !get path - get the config value at the specified path
6. !db stats - triplie will output database statistics
!cmd will return results via private notice
!!cmd returns results via public message
# LICENCE & AUTHOR
See LICENCE and AUTHORS (if present)

| spion/triplie-ng | README.md | Markdown | mit | 3,968 |
<!DOCTYPE html>
<html>
<head>
<!-- meta information -->
<meta charset="utf-8">
<meta name="description"
content="http://feedproxy.google.com/~r/geledes/~3/EdmSLCOQs3o/" >
<meta name="author" content="Kinho">
<!-- Enable responsiveness on mobile devices-->
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1">
<!-- title -->
<title>My post · Entropista</title>
<!-- icons -->
<link rel="shortcut icon" href="/public/images/favicon.ico" />
<!-- stylesheets -->
<link rel="stylesheet" href="/public/css/responsive.gs.12col.css">
<link rel="stylesheet" href="/public/css/animate.min.css">
<link rel="stylesheet" href="/public/css/main.css">
<!-- Google fonts -->
<link rel="stylesheet" href="http://fonts.googleapis.com/css?family=Source+Sans+Pro:400,700,400italic&subset=latin-ext">
<!-- feed links -->
<link rel="alternate" href="/feed.xml" type="application/rss+xml" title="">
</head>
<body>
<div class="container amber">
<header class="top row gutters">
<div class="col span_2 center">
<!-- TODO: add baseurl to the logo link -->
<a href="" id="logo" title="Entropista"
style="background-image: url(/public/images/logo.png);"></a>
</div>
<nav class="col span_10 top-navbar">
<a href="/" title="Home"
>Home</a>
<a href="/about" title="About"
>About</a>
<a href="/NormasDeComunicaçãoViolenta" title="N.C.V."
>N.C.V.</a>
<a href="/Táticas" title="Táticas"
>Táticas</a>
<a href="/NovelaFuturista" title="Livro"
>Livro</a>
</nav>
</header>
<article class="single row gutters">
<time class="published" datetime="2015-10-12">12 October 2015</time>
<h2>My post</h2>
<p>http://feedproxy.google.com/~r/geledes/~3/EdmSLCOQs3o/</p>
</article>
<footer>
<p>
This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc/4.0/deed.en_US">Creative Commons Attribution-NonCommercial 4.0 International License</a>.
</p>
</footer>
</div>
</body>
</html>
| oentropista/oentropista.github.io | _site/post/index.html | HTML | mit | 2,129 |