
Supporting scalar tensor broadcasting for AddOp #66


Open: wants to merge 107 commits into base: develop

Commits (107)
c430949
Merge pull request #21 from neil-tan/integTest
Knight-X Oct 28, 2017
18c18fa
Merge pull request #22 from neil-tan/integTest
neil-tan Oct 28, 2017
91cf419
context draft
neil-tan Oct 29, 2017
e2aaf98
re-vised draft
neil-tan Oct 29, 2017
7b2a5f9
wip
neil-tan Oct 29, 2017
a65a8a9
tensor extend first commit
Knight-X Oct 29, 2017
d219be5
draft for merge
neil-tan Oct 29, 2017
9b47169
Merge pull request #25 from neil-tan/context_dev
Knight-X Oct 29, 2017
c09b38e
Draft for tensor refactor
Knight-X Oct 31, 2017
f669f69
add feature to ram tensor class
Knight-X Oct 31, 2017
e5b67bd
Add python requirements for SD preparation
mbartling Oct 31, 2017
4d9269c
wip
Knight-X Oct 29, 2017
96d3186
suggesting tensor ref counter
neil-tan Nov 1, 2017
cd61f37
typo
neil-tan Nov 1, 2017
de63c85
Merge branch 'featuretensor_refactor' of ssh://github.com/neil-tan/uT…
neil-tan Nov 1, 2017
12d77c8
suggesting tensor ref counter
neil-tan Nov 1, 2017
60e0439
typo
neil-tan Nov 1, 2017
4de1ce8
Merge branch 'feature_tensor_ref' of ssh://github.com/neil-tan/uTenso…
neil-tan Nov 1, 2017
68a905f
make idxTest pass firstly
Knight-X Nov 1, 2017
6b4349a
replace tensor<T> to tensor declaration in other functions such as …
Knight-X Nov 1, 2017
83509d1
fix coding style
Knight-X Nov 1, 2017
0fb021a
Merge pull request #28 from neil-tan/feature_tensor_ref
Knight-X Nov 1, 2017
2d9ca3a
1. change read function syntax according to interface
Knight-X Nov 2, 2017
038631d
make syntax of write function correct
Knight-X Nov 2, 2017
2cc354f
Merge pull request #27 from mbartling/b/getting-started
neil-tan Nov 2, 2017
d21c4ee
Merge pull request #29 from neil-tan/featuretensor_refactor
neil-tan Nov 2, 2017
34c5e30
context ops compile sucessful
neil-tan Nov 2, 2017
bd3e49a
DType WIP; added context, ops, tesnor-ref-count
neil-tan Nov 2, 2017
1b163e0
revise main for test idx and matrixops
Knight-X Nov 2, 2017
af90dce
Merge branch 'featuretensor_refactor' of https://github.com/neil-tan/…
Knight-X Nov 2, 2017
0f31406
1. replace tensor in matriops to new one
Knight-X Nov 3, 2017
b216861
1. remove unnecessary private member
Knight-X Nov 3, 2017
b88c865
modify main function for matrixops test
Knight-X Nov 3, 2017
229abef
for arrayops test
Knight-X Nov 3, 2017
020fa1c
Update README.md
BitYog Nov 4, 2017
5fc5b6c
change readme to explain develop branch for developer
Knight-X Nov 4, 2017
6da84e1
Merge branch 'master' of https://github.com/neil-tan/uTensor
Knight-X Nov 4, 2017
cd226d8
fix tensorChkAlloc call convention
Knight-X Nov 4, 2017
b9dbeda
1. math mathops pass
Knight-X Nov 4, 2017
e804b92
delete the tensor in testcase for avoiding runing out of memory
Knight-X Nov 4, 2017
25b40a3
1. refactor NnOps to use new version tensor
Knight-X Nov 4, 2017
6f46844
1. make tensor_test pass for new tensor
Knight-X Nov 4, 2017
7a3c7fa
QntMatMal Context test
neil-tan Nov 4, 2017
12f8c93
pass the mlp test
Knight-X Nov 5, 2017
cb96d7a
1. change for making auto allocation for Tensor**
Knight-X Nov 5, 2017
10e9fc4
1. change for making auto allocation for tensor**
Knight-X Nov 5, 2017
910c3ee
1. make run_mlp pass
Knight-X Nov 5, 2017
13c2e45
1. make reallocation for tensor
Knight-X Nov 5, 2017
9095980
1. when the code is compiled with release mode, the dequantize error…
Knight-X Nov 5, 2017
5a21dbb
1. changed main function for testing run_mlp
Knight-X Nov 5, 2017
0eb520b
fix typo error
Knight-X Nov 5, 2017
6eda5da
1. add resize function and test case
Knight-X Nov 6, 2017
b0f251d
1. change read interface from
Knight-X Nov 6, 2017
c0f02e2
Merge branch 'master' into patch-1
BitYog Nov 6, 2017
ab313ba
Merge branch 'master' into patch-1
BitYog Nov 6, 2017
272fdde
Merge pull request #30 from BitYog/patch-1
mbartling Nov 8, 2017
4e1e49e
context MatMalTest passed
neil-tan Nov 9, 2017
537606c
polished up the syntax
neil-tan Nov 10, 2017
7e278d4
Op should use resize() for output tensors; syntax updates
neil-tan Nov 10, 2017
1b1a071
RefCountTest bugged
neil-tan Nov 10, 2017
12f4d9b
ref counting seems to be working; added support for UBLOX_EVK_ODIN_W2
neil-tan Nov 10, 2017
514ebc3
1. make copy and copy assignment constructor private
Knight-X Nov 11, 2017
05652bd
Merge branch 'context_smartptr' into featuretensor_refactor
Knight-X Nov 11, 2017
65c3baa
1. make arrayops pass test
Knight-X Nov 11, 2017
1382857
1. make math op test pass
Knight-X Nov 12, 2017
c95da6e
1. add function have different type to mathtest, so make addop have…
Knight-X Nov 12, 2017
b385766
transformation test seems passing
neil-tan Nov 12, 2017
0cdd92e
NnTest passed
neil-tan Nov 12, 2017
e12c231
matrix test passing, moved from context tests
neil-tan Nov 12, 2017
c9d219e
enable tensor tests as it is not dependent on Context
neil-tan Nov 12, 2017
d05aa3d
context.add() now support initializer_list
neil-tan Nov 12, 2017
115eb55
fix tensor constructor bug
Knight-X Nov 13, 2017
1c0392d
1. fix the name of DequantizeOp
Knight-X Nov 13, 2017
1e23677
1. add resize for output ptr in Relu
Knight-X Nov 13, 2017
e3316f3
1. for bug test
Knight-X Nov 13, 2017
bdfd145
sounds like run mlp work (draft)
Knight-X Nov 14, 2017
5174aba
1. remove comment for deep_mnist
Knight-X Nov 14, 2017
1021c36
Merge branch 'master' of github.com:neil-tan/uTensor into featuretens…
mbartling Nov 16, 2017
f852b5e
Merge branch 'featuretensor_refactor' of github.com:neil-tan/uTensor …
mbartling Nov 17, 2017
b4a7823
Refactor non template functions to cpp files
mbartling Nov 17, 2017
0fe89c5
Add vim to .gitignore
mbartling Nov 17, 2017
53206be
Merge pull request #47 from mbartling/f/refactor-take-2
Knight-X Nov 18, 2017
2080b0c
Merge pull request #49 from neil-tan/featuretensor_refactor
Knight-X Nov 18, 2017
838dae1
1. tensor have the name to perform lookup
Knight-X Nov 18, 2017
4c6de65
modifying context class to use TName
neil-tan Nov 18, 2017
23c7ffa
merged
neil-tan Nov 18, 2017
7041775
1. implement lookup for reference count
Knight-X Nov 18, 2017
1dba94b
1. make array pass test for name lookup optimization
Knight-X Nov 18, 2017
1bf0026
1. make nntest pass for name lookup optimization
Knight-X Nov 18, 2017
cb39597
1. make tensor transform pass for name lookup
Knight-X Nov 18, 2017
3d310f3
porting MathTests.hpp; added ctx.get() and ctx.gc(); WIP
neil-tan Nov 18, 2017
e12a69a
1. pass mlp test for name lookup
Knight-X Nov 19, 2017
860cf9f
Merge branch 'F/52' of https://github.com/neil-tan/uTensor into F/52
Knight-X Nov 19, 2017
13071ba
Math, Matrix, Context passed
neil-tan Nov 19, 2017
b7530f5
MathTests.hpp: make variable naming more consistent
neil-tan Nov 19, 2017
13872d7
1. pass run deep mlp demo for name lookup
Knight-X Nov 19, 2017
eacb822
Merge branch 'F/52' of https://github.com/neil-tan/uTensor into F/52
Knight-X Nov 19, 2017
1e58e90
updated readme
neil-tan Nov 19, 2017
147f990
release 0.1.0
neil-tan Nov 19, 2017
a91e2eb
Merge pull request #54 from neil-tan/F/52
neil-tan Nov 19, 2017
0b884b6
context stateful wip
neil-tan Nov 30, 2017
5209e03
codeGenTemplate test passed
neil-tan Nov 30, 2017
66468d8
context lambda wip
neil-tan Dec 2, 2017
1f96bc2
context: add_static, addCached, push_static; context internal gc wip
neil-tan Dec 3, 2017
ae3a7a4
updated comment in context.cpp
neil-tan Dec 3, 2017
b5c1702
Merge pull request #60 from neil-tan/F/context_cg_ref
Knight-X Dec 7, 2017
9c7fcb1
Supporting scalar tensor broadcasting
dboyliao Dec 8, 2017
61 changes: 34 additions & 27 deletions context.hpp
@@ -1,6 +1,9 @@
#ifndef UTENSOR_CTX_H
#define UTENSOR_CTX_H

#include "uTensorBase.hpp"
#include "stdio.h"

//#include <list>

//TODO: how do we deal with dangling tensors?
@@ -11,45 +14,46 @@
// tensors can be all pointers here, but destructors has to set data to nullptr
// push(op, input_t_list, output_t_list) or push(op, init-list, init-list)
// TensorListModifierOp
class Context : uTensor {
class Context : public uTensor {
protected:
vector<Operator> op_list;
vector<Operator*> op_list;
bool del_onsight;
//std::unordered_map<Tensor*> TensorList; //all tensors alive //kill all unused if malloc failed?
//uint32_t m_size; //remaining memory size
//void registerTensor(Tensor* t);
//void gc(void); //garbage collector, delete any tracked unreferenced tensor

void initOpTensors(TList &t_list);
void deinitTensors(TList &t_list);
void updateInputTensorRef(TList &t_list);
void dcrRefCount(TList &t_list);
void initTensors(const TList &t_list);
void deinitTensors(const TList &t_list);
void updateInputTensorRef(const TList &t_list);
void dcrRefCount(TList t_list);

public:
void push(Operator op, TList &_inputs, TList &_outputs);
void push(Operator *op, TList &_inputs, TList &_outputs);
int run(void);

Context() {
del_onsight = true;
}
};

Context() {
del_onsight = true;
}

void Context::push(Operator op, TList &_inputs, TList &_outputs) {
if(op.getInputCount() != _inputs.size()) {
void Context::push(Operator *op, TList &_inputs, TList &_outputs) {
if(op->getInputs().size() != _inputs.size()) {
ERR_EXIT("valid number of inputs\r\n");
}
if(op.getOutputCount() != _outputs.size()) {
if(op->getOutputs().size() != _outputs.size()) {
ERR_EXIT("valid number of output\r\n");
}

op.setInputs(_inputs);
op.setOutputs(_outputs);
op->setInputs(_inputs);
op->setOutputs(_outputs);
op_list.push_back(op);
updateInputTensorRef(_inputs);

}

void Context::updateInputTensorRef(TList &t_list) {
void Context::updateInputTensorRef(const TList &t_list) {
for(auto t:t_list) {
t->incrRef(); //if an initial ref value is supplied to the tensor at compile time
//then this function does nothing
@@ -58,42 +62,45 @@ void Context::updateInputTensorRef(TList &t_list) {
}
}

void Context::initOpTensors(vector<Tensor*> &t_list) {
void Context::initTensors(const TList &t_list) {
for(auto t:t_list) {
t->inFocus();
}
}

void Context::deinitTensors(vector<Tensor*> &t_list) {
void Context::deinitTensors(const TList &t_list) {
for(auto t:t_list) {
t->deFocus();
}
}

void Context::dcrRefCount(vector<Tensor*> &t_list) {
void Context::dcrRefCount(TList t_list) {
for(auto t:t_list) {
t->dcrRef();
if(t->getRef() < 1 && del_onsight) {
delete t;
}
}
}

int Context::run(void) {
//unref2nullTensors();

for(auto op:op_list) {
initTensors(op.getInputs());
initTensors(op.getOutputs());
initTensors(op->getInputs());
initTensors(op->getOutputs());

op.init();
op.compute();
op.deinit();
op->inFocus();
op->compute();
op->deFocus();

deinitOpTensors(op.getInputs());
deinitOpTensors(op.getOutputs());
deinitTensors(op->getInputs());
deinitTensors(op->getOutputs());

decreRefCount(op.getInputs());
dcrRefCount(op->getInputs());
}

return 0;
}

#endif // UTENSOR_CTX_H
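The `run()` loop above frees any tensor whose reference count drops below 1 once `dcrRefCount` has processed it and `del_onsight` is set. A minimal self-contained sketch of that lifecycle (using a hypothetical `MiniTensor` stand-in, not the actual uTensor `Tensor` class):

```cpp
#include <cassert>
#include <vector>

// Hypothetical stand-in for uTensor's Tensor ref-count interface.
struct MiniTensor {
    unsigned short ref_count = 0;
    bool allow_runtime_ref_inc = true;
    static int alive;  // live-object counter, for demonstration only
    MiniTensor() { ++alive; }
    ~MiniTensor() { --alive; }
    unsigned short incrRef() {
        // mirrors Tensor::incrRef: no-op when runtime increments are disabled
        if (allow_runtime_ref_inc) ref_count += 1;
        return ref_count;
    }
    unsigned short dcrRef() { return --ref_count; }
    unsigned short getRef() const { return ref_count; }
};
int MiniTensor::alive = 0;

// Mirrors Context::dcrRefCount: decrement each count, delete once unreferenced.
void dcrRefCount(std::vector<MiniTensor*>& t_list, bool del_onsight) {
    for (auto t : t_list) {
        t->dcrRef();
        if (t->getRef() < 1 && del_onsight) delete t;
    }
}
```

A tensor consumed by two ops would be incremented twice at push time and survive the first op's `dcrRefCount` pass, then be freed after the second.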
6 changes: 5 additions & 1 deletion main.cpp
@@ -1,11 +1,11 @@
#include <random>
#include "FATFileSystem.h"
#include "SDBlockDevice.h"
#include "mbed.h"
#include "stdio.h"
#include "uTensor_util.hpp"
#include "tensor.hpp"
#include "tensorIdxImporterTests.hpp"
#include "context.hpp"
//#include "deep_mnist_mlp.hpp"

Serial pc(USBTX, USBRX, 115200);
@@ -33,9 +33,13 @@ int main(int argc, char** argv) {
idxTest.printSummary();
printf("Matrix: \r\n");
// matrixTests.printSummary();

Context ctx;
//In [24]: tf.get_default_graph().get_tensor_by_name("import/y_pred:0").eval(feed_dict={x: mnist.test.images[0:1]})
//Out[24]: array([7])

printf("\r\ndone...\r\n");

ON_ERR(fs.unmount(), "fs unmount ");
ON_ERR(bd.deinit(), "SDBlockDevice de-init ");
}
51 changes: 46 additions & 5 deletions tensor.hpp
@@ -8,20 +8,32 @@
#include "stdlib.h"
#include "uTensor_util.hpp"

enum class DType : char {
uint8,
int8,
uint16,
int32,
flt,
dbl,
};

class uTensor {
public:
virtual void inFocus(){};
virtual void deFocus(){};

public:
virtual ~uTensor() = 0;
};


uTensor::~uTensor() {}
class TensorBase {
public:
std::vector<uint32_t> shape;
void* data;
uint32_t total_size;
DType dtype;
uint16_t ref_count;
bool allow_runtime_ref_inc; //to support compile-time ref count

~TensorBase() {
if (data != nullptr) {
@@ -31,7 +43,7 @@ class TensorBase {
}
};

class Tensor : uTensor {
class Tensor : public uTensor {
virtual void* read(size_t offset, size_t ele) { return nullptr; }
virtual void* write(size_t offset, size_t ele) { return nullptr; }

@@ -67,13 +79,16 @@ class Tensor : uTensor {
s->data = (void*)malloc(unit_size() * s->total_size);
if (s->data == NULL)
ERR_EXIT("ran out of memory for %lu malloc", unit_size() * s->total_size);

s->ref_count = 0;
s->allow_runtime_ref_inc = false;
}

std::vector<uint32_t> getShape(void) { return s->shape; }

uint32_t getSize(void) { return s->total_size; }

virtual uint16_t unit_size(void) {}
virtual uint16_t unit_size(void) { return 0; }

uint32_t getSize_in_bytes(void) { return s->total_size * unit_size(); }

@@ -90,6 +105,31 @@
return (const T*)write(offset, ele);
}

DType getDType(void) {
return s->dtype;
}

uint16_t incrRef() {
if(s->allow_runtime_ref_inc) {
s->ref_count += 1;
}

return s->ref_count;
}

uint16_t dcrRef() {
s->ref_count -= 1;
return s->ref_count;
}

uint16_t getRef() {
return s->ref_count;
}

bool is_ref_runtime(void) {
return s->allow_runtime_ref_inc;
}

~Tensor() {
s = nullptr;
DEBUG("Tensor Destructed\r\n");
@@ -101,9 +141,10 @@ class RamTensor : public Tensor {
// need deep copy
public:
RamTensor() : Tensor() {
std::vector<uint32_t> v(3, 3);
std::vector<uint32_t> v(3, 3); ///NT: why (3,3)?
Tensor::init<T>(v);
cursor = nullptr;
//dtype = something...
}

RamTensor(std::initializer_list<uint32_t> l) : Tensor() {
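The PR's headline feature, scalar tensor broadcasting for AddOp, is not visible in this hunk, but the usual approach is to treat an operand holding a single element as a scalar and repeat it across the other operand. A sketch under that assumption (the `addBroadcast` helper over flat buffers is hypothetical, not the uTensor AddOp signature):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Element-wise add with scalar broadcasting: if either input has
// exactly one element, it is broadcast over the other input.
template <typename T>
std::vector<T> addBroadcast(const std::vector<T>& a, const std::vector<T>& b) {
    const bool a_scalar = (a.size() == 1);
    const bool b_scalar = (b.size() == 1);
    const std::size_t n = a_scalar ? b.size() : a.size();
    std::vector<T> out(n);
    for (std::size_t i = 0; i < n; i++) {
        out[i] = (a_scalar ? a[0] : a[i]) + (b_scalar ? b[0] : b[i]);
    }
    return out;
}
```

With this shape rule, `{1, 2, 3} + {10}` yields `{11, 12, 13}`; equal-length inputs fall through to a plain element-wise add.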
22 changes: 8 additions & 14 deletions uTensorBase.hpp
@@ -5,20 +5,14 @@

typedef vector<Tensor*> TList;

class uTensor {
virtual void inFocus() {};
virtual void deFocus() {};
virtual ~uTensor() = 0;
};


//isType() https://stackoverflow.com/questions/9974596/how-to-check-whether-two-pointers-point-to-the-same-object-or-not
//double dispatch

//new vs stack
class Operator {
class Operator : public uTensor{
protected:
//setup input/output info in derived constructors
//ref count?
TList inputs;
vector<DType> dtype_in;
TList outputs;
@@ -29,24 +23,24 @@
void setInputs(TList &_inputs) {
if(_inputs.size() != inputs.size()) ERR_EXIT("Input Tensor list mismatched...");

for(uint8_t i = 0; i < input.size(); i++) {
if(dtype_in[i] != inputs.getType()) {
for(uint8_t i = 0; i < inputs.size(); i++) {
if(dtype_in[i] != inputs[i]->getDType()) {
ERR_EXIT("Tensor Type mismatched...");
}

input[i] = _inputs[i];
inputs[i] = _inputs[i];
}
}

void setOutputs(TList &_outputs) {
if(_outputs.size() != outputs.size()) ERR_EXIT("Input Tensor list mismatched...");

for(uint8_t i = 0; i < output.size(); i++) {
if(dtype_out[i].getType() != output[i].getType()) {
for(uint8_t i = 0; i < outputs.size(); i++) {
if(dtype_out[i] != outputs[i]->getDType()) {
ERR_EXIT("Tensor Type mismatched...");
}

output[i] = _output[i]
outputs[i] = _outputs[i];
}
}

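The corrected `setInputs`/`setOutputs` above reject tensors whose `DType` differs from what the operator declared for that slot. The check reduces to an element-wise comparison of two dtype lists; a standalone sketch (the `DType` enum is copied from the tensor.hpp hunk, while `typesMatch` is a hypothetical helper, not a uTensor function):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Copied from the tensor.hpp hunk in this PR.
enum class DType : char { uint8, int8, uint16, int32, flt, dbl };

// Sketch of the per-slot dtype validation: every incoming tensor's
// dtype must match the slot the operator declared for it.
bool typesMatch(const std::vector<DType>& declared,
                const std::vector<DType>& incoming) {
    if (declared.size() != incoming.size()) return false;  // list mismatch
    for (std::size_t i = 0; i < declared.size(); i++) {
        if (declared[i] != incoming[i]) return false;  // type mismatch
    }
    return true;
}
```

In the PR, a failed check calls `ERR_EXIT("Tensor Type mismatched...")` instead of returning false.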
25 changes: 13 additions & 12 deletions uTensor_util.hpp
@@ -18,16 +18,16 @@ void return_error(int ret_val) {
}
}

void errno_error(void* ret_val) {
if (ret_val == NULL) {
printf(" [**Failure**] %d \r\n", errno);
printf("Exiting...\r\n");
fflush(stdout);
exit(-1);
} else {
printf(" [DONE]\r\n");
}
}
// void errno_error(void* ret_val) {
// if (ret_val == NULL) {
// printf(" [**Failure**] %d \r\n", errno);
// printf("Exiting...\r\n");
// fflush(stdout);
// exit(-1);
// } else {
// printf(" [DONE]\r\n");
// }
// }

#define ON_ERR(FUNC, MSG) \
{ \
@@ -44,8 +44,9 @@

#else // MBED_CONF_APP_DEBUG_MSG

void errno_error(void* ret_val) { /*DOES NOTHING*/
}
// void errno_error(void* ret_val) { /*DOES NOTHING*/
// }

#define ON_ERR(FUNC, MSG) FUNC
#define DEBUG(MSG, ...)

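The body of `ON_ERR` is truncated in this diff view; judging from `return_error` above, it plausibly runs the call, prints the message, and exits on a non-zero return. A hypothetical standalone reconstruction (the `last_status` variable and `fakeMount` stub exist only for illustration and are not part of uTensor):

```cpp
#include <cassert>
#include <cstdio>
#include <cstdlib>

static int last_status = 0;  // illustration only: records the last return value

// Hypothetical reconstruction of the ON_ERR pattern: run FUNC,
// print MSG, and bail out on a non-zero return value.
#define ON_ERR(FUNC, MSG)                          \
    {                                              \
        printf(" %s", MSG);                        \
        int ret = (FUNC);                          \
        last_status = ret;                         \
        if (ret != 0) {                            \
            printf(" [**Failure**] %d\r\n", ret);  \
            exit(-1);                              \
        }                                          \
        printf(" [DONE]\r\n");                     \
    }

int fakeMount(void) { return 0; }  // stand-in for a call like fs.mount()
```

The debug build of the macro would wrap calls such as `ON_ERR(fs.unmount(), "fs unmount ")` from main.cpp; the release build in this file reduces `ON_ERR(FUNC, MSG)` to plain `FUNC`.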