{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":714073518,"defaultBranch":"main","name":"ao","ownerLogin":"pytorch","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2023-11-03T21:27:36.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/21003710?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1727139867.0","currentOid":""},"activityList":{"items":[{"before":"58c7df2d01811261e010c5789eb5ac26a950e4c8","after":"e3fca7ef83043bf34625e3e1216163ffc22e9c10","ref":"refs/heads/gh-pages","pushedAt":"2024-09-24T01:08:15.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"github-actions[bot]","name":null,"path":"/apps/github-actions","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/15368?s=80&v=4"},"commit":{"message":"auto-generating sphinx docs","shortMessageHtmlLink":"auto-generating sphinx docs"}},{"before":"d0cbb85b24db71c7086dac2840bf2dd850b31b80","after":null,"ref":"refs/heads/msaroufim-patch-19","pushedAt":"2024-09-24T01:04:27.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"msaroufim","name":"Mark Saroufim","path":"/msaroufim","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3282513?s=80&v=4"}},{"before":"9680c48568fdf988ccd0d960d48e76836f0aff54","after":"653efe98749124985155da494a372bea7fe4b383","ref":"refs/heads/main","pushedAt":"2024-09-24T01:04:25.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"msaroufim","name":"Mark Saroufim","path":"/msaroufim","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3282513?s=80&v=4"},"commit":{"message":"rename cuda mode -> gpu mode (#925)","shortMessageHtmlLink":"rename cuda mode -> gpu mode (#925)"}},{"before":null,"after":"d0cbb85b24db71c7086dac2840bf2dd850b31b80","ref":"refs/heads/msaroufim-patch-19","pushedAt":"2024-09-24T00:05:24.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"msaroufim","name":"Mark 
Saroufim","path":"/msaroufim","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3282513?s=80&v=4"},"commit":{"message":"rename cuda mode -> gpu mode","shortMessageHtmlLink":"rename cuda mode -> gpu mode"}},{"before":"27ab4a8159eabe1c4fccc0e0bf785ff8db6f8c5b","after":"b8026e1f37a63f69a9797b6ee8b6372dd4674cf6","ref":"refs/heads/new_eval_metrics","pushedAt":"2024-09-23T23:26:43.000Z","pushType":"push","commitsCount":2,"pusher":{"login":"jainapurva","name":"Apurva Jain","path":"/jainapurva","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/19538305?s=80&v=4"},"commit":{"message":"Revert \"Float8 dynamic autoquant\"\n\nThis reverts commit ff9bfa5805184f1de046e0cfa2642baf217a88c5.","shortMessageHtmlLink":"Revert \"Float8 dynamic autoquant\""}},{"before":null,"after":"5f1879b71a7ac4b12a7349ccd6eef465e8737320","ref":"refs/heads/20240923_float8_test_checks","pushedAt":"2024-09-23T22:36:05.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"vkuzo","name":"Vasiliy Kuznetsov","path":"/vkuzo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1622561?s=80&v=4"},"commit":{"message":"clean up device checks in float8 unit test files\n\nSummary:\n\nWhile working on rowwise scaling I noticed that some of the CUDA\ndevice capability checks we had in the test files did not make sense,\ncleaning this up.\n\nTest Plan:\n\ntests pass on my H100\n\nCI, it should skip less tests now since CI only has CUDA capability 8, 9\n\nReviewers:\n\nSubscribers:\n\nTasks:\n\nTags:","shortMessageHtmlLink":"clean up device checks in float8 unit test files"}},{"before":"80898b414d335c25b11d5944aa66d022ec88bc6d","after":"58c7df2d01811261e010c5789eb5ac26a950e4c8","ref":"refs/heads/gh-pages","pushedAt":"2024-09-23T20:31:36.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"github-actions[bot]","name":null,"path":"/apps/github-actions","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/15368?s=80&v=4"},"commit":{"message":"auto-generating 
sphinx docs","shortMessageHtmlLink":"auto-generating sphinx docs"}},{"before":"1d6f8e2d2e5cbc30ad7f4738307cae54ef7608ed","after":"9680c48568fdf988ccd0d960d48e76836f0aff54","ref":"refs/heads/main","pushedAt":"2024-09-23T20:28:31.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"jerryzh168","name":"Jerry Zhang","path":"/jerryzh168","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/4958441?s=80&v=4"},"commit":{"message":"Adding example for quantized tensor + tensor parallelism (#785)\n\n* [WIP] Adding example for quantized tensor + tensor parallelism\r\n\r\nSummary:\r\nThis PR adds an example of how quantized tensor subclass can work with DTensor: https://github.com/pytorch/pytorch/blob/main/torch/distributed/_tensor/README.md\r\n\r\nEnd goal is to rewrite https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/llama2.py with normal llama2 implementation and show case with DTensor + AffineQuantizedTensor + torch.compile we can get on par performance with the custom tensor parallel implementation\r\n\r\nTest Plan:\r\ntorchrun --standalone --nnodes=1 --nproc-per-node=4 tutorials/developer_api_guide/tensor_parallel.py\r\n\r\nReviewers:\r\n\r\nSubscribers:\r\n\r\nTasks:\r\n\r\nTags:\r\n\r\n* tensor parallel file\r\n\r\n* Use DTensor.from instead of distribute_tensor\r\n\r\n* implementing aten.slice.Tensor (WIP)\r\n\r\n* working\r\n\r\n* some shape fix and use more quant primitive ops\r\n\r\n* Add rowwise test\r\n\r\n* make rowwise sharding work\r\n\r\n* compile still not working yet\r\n\r\n* fake tensor didn't pick up shape changes from transpose\r\n\r\n* backend='eager'\r\n\r\n* change transpose to non-inplace op\r\n\r\n* add error message\r\n\r\n* works now with torch nightly\r\n\r\n* remove print\r\n\r\n* ruff\r\n\r\n* Clean up\r\n\r\n* Fix device id\r\n\r\n---------\r\n\r\nCo-authored-by: Ke Wen ","shortMessageHtmlLink":"Adding example for quantized tensor + tensor parallelism 
(#785)"}},{"before":"4473ac59e479527610a563338b9cd1c16971ab8d","after":"5711a01325709716b16e4d6bef68bc78038e9169","ref":"refs/heads/gh/vkuzo/10/orig","pushedAt":"2024-09-23T20:22:12.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"vkuzo","name":"Vasiliy Kuznetsov","path":"/vkuzo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1622561?s=80&v=4"},"commit":{"message":"add axiswise granularity to Float8Tensor\n\nSummary:\n\nThis is a copy-paste of https://github.com/pytorch-labs/float8_experimental/pull/352\nwhich never landed.\n\nTest Plan:\n\n```\n\n```\n\nReviewers:\n\nSubscribers:\n\nTasks:\n\nTags:\n\nghstack-source-id: e998d637e0593760ad5a25d0c852d7a2706c8d1a\nghstack-comment-id: 2368837836\nPull Request resolved: https://github.com/pytorch/ao/pull/919","shortMessageHtmlLink":"add axiswise granularity to Float8Tensor"}},{"before":"ef56618e85333b19ee4824d9570f02e5e8cd84d8","after":"d759f8150714c24de07034f858fbec9bbbae00ec","ref":"refs/heads/gh/vkuzo/11/orig","pushedAt":"2024-09-23T20:22:12.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"vkuzo","name":"Vasiliy Kuznetsov","path":"/vkuzo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1622561?s=80&v=4"},"commit":{"message":"add axiswise scaling to Float8Linear\n\nSummary:\n\nThis PR: support scaling of all arguments of all gemms to be axiswise,\nand ensure that training with axiswise scaling works e2e.\n\nFuture PR: support more granular configurability and optimize\nperformance, add docs\n\nTest Plan:\n\n```\n// tests pass\n./test/float8/test_everything.sh\n\n// sanity check on torchtitan with LLaMa 3 8B on 4 H100s with float8:\n// 1. verify performance does not regress with tensorwise scaling\n// 2. 
smoke test that axiswise scaling works and numerics are sane, performance isn't there though\n// logs: https://gist.github.com/vkuzo/70fa5eb3c23375f307d11e7bae48682f\n```\n\nReviewers:\n\nSubscribers:\n\nTasks:\n\nTags:\n\nghstack-source-id: 304a5427739966a9601fa860ed248fc2bb902d67\nghstack-comment-id: 2368837904\nPull Request resolved: https://github.com/pytorch/ao/pull/920","shortMessageHtmlLink":"add axiswise scaling to Float8Linear"}},{"before":"40279fb634b7fac78350e937800ed0869835c310","after":"732b231dffe87bad13a76ccb7cc859b8714a6239","ref":"refs/heads/gh/vkuzo/11/next","pushedAt":"2024-09-23T20:22:11.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"vkuzo","name":"Vasiliy Kuznetsov","path":"/vkuzo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1622561?s=80&v=4"},"commit":{"message":"Update\n\n[ghstack-poisoned]","shortMessageHtmlLink":"Update"}},{"before":"40279fb634b7fac78350e937800ed0869835c310","after":"732b231dffe87bad13a76ccb7cc859b8714a6239","ref":"refs/heads/gh/vkuzo/11/head","pushedAt":"2024-09-23T20:22:11.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"vkuzo","name":"Vasiliy Kuznetsov","path":"/vkuzo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1622561?s=80&v=4"},"commit":{"message":"Update\n\n[ghstack-poisoned]","shortMessageHtmlLink":"Update"}},{"before":"c5eee249176572d346121bde822d6cb65747c517","after":"4473ac59e479527610a563338b9cd1c16971ab8d","ref":"refs/heads/gh/vkuzo/10/orig","pushedAt":"2024-09-23T19:25:46.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"vkuzo","name":"Vasiliy Kuznetsov","path":"/vkuzo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1622561?s=80&v=4"},"commit":{"message":"add axiswise granularity to Float8Tensor\n\nSummary:\n\nThis is a copy-paste of https://github.com/pytorch-labs/float8_experimental/pull/352\nwhich never landed.\n\nTest Plan:\n\n```\n\n```\n\nReviewers:\n\nSubscribers:\n\nTasks:\n\nTags:\n\nghstack-source-id: 
e998d637e0593760ad5a25d0c852d7a2706c8d1a\nghstack-comment-id: 2368837836\nPull Request resolved: https://github.com/pytorch/ao/pull/919","shortMessageHtmlLink":"add axiswise granularity to Float8Tensor"}},{"before":"a2a50fbcdd049615fcc11c054c581e7213b4a06d","after":"ef56618e85333b19ee4824d9570f02e5e8cd84d8","ref":"refs/heads/gh/vkuzo/11/orig","pushedAt":"2024-09-23T19:25:46.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"vkuzo","name":"Vasiliy Kuznetsov","path":"/vkuzo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1622561?s=80&v=4"},"commit":{"message":"add axiswise scaling to Float8Linear\n\nSummary:\n\nThis PR: support scaling of all arguments of all gemms to be axiswise,\nand ensure that training with axiswise scaling works e2e.\n\nFuture PR: support more granular configurability and optimize\nperformance, add docs\n\nTest Plan:\n\n```\n// tests pass\n./test/float8/test_everything.sh\n\n// sanity check on torchtitan with LLaMa 3 8B on 4 H100s with float8:\n// 1. verify performance does not regress with tensorwise scaling\n// 2. 
smoke test that axiswise scaling works and numerics are sane, performance isn't there though\n// logs: https://gist.github.com/vkuzo/70fa5eb3c23375f307d11e7bae48682f\n```\n\nReviewers:\n\nSubscribers:\n\nTasks:\n\nTags:\n\nghstack-source-id: af334fd3f9f0b10e2f0a7cf1e38513741d1b45f7\nghstack-comment-id: 2368837904\nPull Request resolved: https://github.com/pytorch/ao/pull/920","shortMessageHtmlLink":"add axiswise scaling to Float8Linear"}},{"before":"c816ce9fcf7c73315b44310bfaf0bfe13107b37e","after":"40279fb634b7fac78350e937800ed0869835c310","ref":"refs/heads/gh/vkuzo/11/next","pushedAt":"2024-09-23T19:25:45.000Z","pushType":"push","commitsCount":2,"pusher":{"login":"vkuzo","name":"Vasiliy Kuznetsov","path":"/vkuzo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1622561?s=80&v=4"},"commit":{"message":"Update\n\n[ghstack-poisoned]","shortMessageHtmlLink":"Update"}},{"before":"c816ce9fcf7c73315b44310bfaf0bfe13107b37e","after":"40279fb634b7fac78350e937800ed0869835c310","ref":"refs/heads/gh/vkuzo/11/head","pushedAt":"2024-09-23T19:25:45.000Z","pushType":"push","commitsCount":2,"pusher":{"login":"vkuzo","name":"Vasiliy Kuznetsov","path":"/vkuzo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1622561?s=80&v=4"},"commit":{"message":"Update\n\n[ghstack-poisoned]","shortMessageHtmlLink":"Update"}},{"before":"9150b4f2b2f0e13d9cd79f876067b9907037841c","after":"459e92c434378eb4d8b70d69299af94a69e4d45c","ref":"refs/heads/gh/vkuzo/10/next","pushedAt":"2024-09-23T19:25:45.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"vkuzo","name":"Vasiliy 
Kuznetsov","path":"/vkuzo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1622561?s=80&v=4"},"commit":{"message":"Update\n\n[ghstack-poisoned]","shortMessageHtmlLink":"Update"}},{"before":"9150b4f2b2f0e13d9cd79f876067b9907037841c","after":"459e92c434378eb4d8b70d69299af94a69e4d45c","ref":"refs/heads/gh/vkuzo/10/head","pushedAt":"2024-09-23T19:25:45.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"vkuzo","name":"Vasiliy Kuznetsov","path":"/vkuzo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1622561?s=80&v=4"},"commit":{"message":"Update\n\n[ghstack-poisoned]","shortMessageHtmlLink":"Update"}},{"before":"585cdfea55f896ff78d9e6a27f112d4ba15056d0","after":"a2a50fbcdd049615fcc11c054c581e7213b4a06d","ref":"refs/heads/gh/vkuzo/11/orig","pushedAt":"2024-09-23T17:55:21.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"vkuzo","name":"Vasiliy Kuznetsov","path":"/vkuzo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1622561?s=80&v=4"},"commit":{"message":"add axiswise scaling to Float8Linear\n\nSummary:\n\nThis PR: support scaling of all arguments of all gemms to be axiswise,\nand ensure that training with axiswise scaling works e2e.\n\nFuture PR: support more granular configurability and optimize\nperformance, add docs\n\nTest Plan:\n\n```\n// tests pass\n./test/float8/test_everything.sh\n\n// sanity check on torchtitan with LLaMa 3 8B on 4 H100s with float8:\n// 1. verify performance does not regress with tensorwise scaling\n// 2. 
smoke test that axiswise scaling works and numerics are sane, performance isn't there though\n// logs: https://gist.github.com/vkuzo/70fa5eb3c23375f307d11e7bae48682f\n```\n\nReviewers:\n\nSubscribers:\n\nTasks:\n\nTags:\n\nghstack-source-id: 0d471db431fab2195a86e84bc7d3a93cc25db6e4\nghstack-comment-id: 2368837904\nPull Request resolved: https://github.com/pytorch/ao/pull/920","shortMessageHtmlLink":"add axiswise scaling to Float8Linear"}},{"before":"be5a4a86a6cde4310d97dd690565953985fca3b7","after":"c5eee249176572d346121bde822d6cb65747c517","ref":"refs/heads/gh/vkuzo/10/orig","pushedAt":"2024-09-23T17:55:21.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"vkuzo","name":"Vasiliy Kuznetsov","path":"/vkuzo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1622561?s=80&v=4"},"commit":{"message":"add axiswise granularity to Float8Tensor\n\nSummary:\n\nThis is a copy-paste of https://github.com/pytorch-labs/float8_experimental/pull/352\nwhich never landed.\n\nTest Plan:\n\n```\n\n```\n\nReviewers:\n\nSubscribers:\n\nTasks:\n\nTags:\n\nghstack-source-id: 7433fc916dd6187aa6c9056fd171eb35079cef51\nghstack-comment-id: 2368837836\nPull Request resolved: https://github.com/pytorch/ao/pull/919","shortMessageHtmlLink":"add axiswise granularity to Float8Tensor"}},{"before":"f15c2a02b0822784f718c51480c1fbe422ea5bb8","after":"9150b4f2b2f0e13d9cd79f876067b9907037841c","ref":"refs/heads/gh/vkuzo/10/next","pushedAt":"2024-09-23T17:55:20.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"vkuzo","name":"Vasiliy Kuznetsov","path":"/vkuzo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1622561?s=80&v=4"},"commit":{"message":"Update\n\n[ghstack-poisoned]","shortMessageHtmlLink":"Update"}},{"before":"f15c2a02b0822784f718c51480c1fbe422ea5bb8","after":"9150b4f2b2f0e13d9cd79f876067b9907037841c","ref":"refs/heads/gh/vkuzo/10/head","pushedAt":"2024-09-23T17:55:20.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"vkuzo","name":"Vasiliy 
Kuznetsov","path":"/vkuzo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1622561?s=80&v=4"},"commit":{"message":"Update\n\n[ghstack-poisoned]","shortMessageHtmlLink":"Update"}},{"before":"d0b100286bc99c11ec12402e69236b9741f2d41a","after":"c816ce9fcf7c73315b44310bfaf0bfe13107b37e","ref":"refs/heads/gh/vkuzo/11/next","pushedAt":"2024-09-23T17:55:20.000Z","pushType":"push","commitsCount":2,"pusher":{"login":"vkuzo","name":"Vasiliy Kuznetsov","path":"/vkuzo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1622561?s=80&v=4"},"commit":{"message":"Update\n\n[ghstack-poisoned]","shortMessageHtmlLink":"Update"}},{"before":"d0b100286bc99c11ec12402e69236b9741f2d41a","after":"c816ce9fcf7c73315b44310bfaf0bfe13107b37e","ref":"refs/heads/gh/vkuzo/11/head","pushedAt":"2024-09-23T17:55:20.000Z","pushType":"push","commitsCount":2,"pusher":{"login":"vkuzo","name":"Vasiliy Kuznetsov","path":"/vkuzo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1622561?s=80&v=4"},"commit":{"message":"Update\n\n[ghstack-poisoned]","shortMessageHtmlLink":"Update"}},{"before":"e0cd92c809811a1a2ea78bc13f9500e915015a1a","after":"80898b414d335c25b11d5944aa66d022ec88bc6d","ref":"refs/heads/gh-pages","pushedAt":"2024-09-23T17:02:44.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"github-actions[bot]","name":null,"path":"/apps/github-actions","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/15368?s=80&v=4"},"commit":{"message":"auto-generating sphinx docs","shortMessageHtmlLink":"auto-generating sphinx docs"}},{"before":"0bdde92114b470823aa24725bf3b0811e980c8ce","after":"1d6f8e2d2e5cbc30ad7f4738307cae54ef7608ed","ref":"refs/heads/main","pushedAt":"2024-09-23T16:59:27.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"weifengpy","name":"Wei (Will) Feng","path":"/weifengpy","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/134637289?s=80&v=4"},"commit":{"message":"[float8] fix typo in bitwise_identical unit test 
(#918)\n\nSummary:\r\n\r\nTest Plan:\r\n\r\nReviewers:\r\n\r\nSubscribers:\r\n\r\nTasks:\r\n\r\nTags:","shortMessageHtmlLink":"[float8] fix typo in bitwise_identical unit test (#918)"}},{"before":"c5470113fe2b42aa625889994f201ebe5ab07033","after":"585cdfea55f896ff78d9e6a27f112d4ba15056d0","ref":"refs/heads/gh/vkuzo/11/orig","pushedAt":"2024-09-23T16:54:18.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"vkuzo","name":"Vasiliy Kuznetsov","path":"/vkuzo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1622561?s=80&v=4"},"commit":{"message":"add axiswise scaling to Float8Linear\n\nSummary:\n\nThis PR: support scaling of all arguments of all gemms to be axiswise,\nand ensure that training with axiswise scaling works e2e.\n\nFuture PR: support more granular configurability and optimize\nperformance, add docs\n\nTest Plan:\n\n```\n// tests pass\n./test/float8/test_everything.sh\n\n// sanity check on torchtitan with LLaMa 3 8B on 4 H100s with float8:\n// 1. verify performance does not regress with tensorwise scaling\n// 2. 
smoke test that axiswise scaling works and numerics are sane, performance isn't there though\n// logs: https://gist.github.com/vkuzo/70fa5eb3c23375f307d11e7bae48682f\n```\n\nReviewers:\n\nSubscribers:\n\nTasks:\n\nTags:\n\nghstack-source-id: 77d62e8efb3a838035213125476c714290882a08\nghstack-comment-id: 2368837904\nPull Request resolved: https://github.com/pytorch/ao/pull/920","shortMessageHtmlLink":"add axiswise scaling to Float8Linear"}},{"before":"2e0e04609105f1cd7705d0d9198496ec51c44957","after":"be5a4a86a6cde4310d97dd690565953985fca3b7","ref":"refs/heads/gh/vkuzo/10/orig","pushedAt":"2024-09-23T16:54:18.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"vkuzo","name":"Vasiliy Kuznetsov","path":"/vkuzo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1622561?s=80&v=4"},"commit":{"message":"add axiswise granularity to Float8Tensor\n\nSummary:\n\nThis is a copy-paste of https://github.com/pytorch-labs/float8_experimental/pull/352\nwhich never landed.\n\nTest Plan:\n\n```\n\n```\n\nReviewers:\n\nSubscribers:\n\nTasks:\n\nTags:\n\nghstack-source-id: 33a08ff38550b19f916e6f61054a4be292f54f36\nghstack-comment-id: 2368837836\nPull Request resolved: https://github.com/pytorch/ao/pull/919","shortMessageHtmlLink":"add axiswise granularity to Float8Tensor"}},{"before":"241f815f73b85f27a59fdf9f8e9d0a80cedb5f2c","after":"d0b100286bc99c11ec12402e69236b9741f2d41a","ref":"refs/heads/gh/vkuzo/11/next","pushedAt":"2024-09-23T16:54:17.000Z","pushType":"push","commitsCount":2,"pusher":{"login":"vkuzo","name":"Vasiliy Kuznetsov","path":"/vkuzo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1622561?s=80&v=4"},"commit":{"message":"Update\n\n[ghstack-poisoned]","shortMessageHtmlLink":"Update"}},{"before":"241f815f73b85f27a59fdf9f8e9d0a80cedb5f2c","after":"d0b100286bc99c11ec12402e69236b9741f2d41a","ref":"refs/heads/gh/vkuzo/11/head","pushedAt":"2024-09-23T16:54:17.000Z","pushType":"push","commitsCount":2,"pusher":{"login":"vkuzo","name":"Vasiliy 
Kuznetsov","path":"/vkuzo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1622561?s=80&v=4"},"commit":{"message":"Update\n\n[ghstack-poisoned]","shortMessageHtmlLink":"Update"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"Y3Vyc29yOnYyOpK7MjAyNC0wOS0yNFQwMTowODoxNS4wMDAwMDBazwAAAAS-k9NW","startCursor":"Y3Vyc29yOnYyOpK7MjAyNC0wOS0yNFQwMTowODoxNS4wMDAwMDBazwAAAAS-k9NW","endCursor":"Y3Vyc29yOnYyOpK7MjAyNC0wOS0yM1QxNjo1NDoxNy4wMDAwMDBazwAAAAS-OryH"}},"title":"Activity · pytorch/ao"}